Science.gov

Sample records for estimated systematic error

  1. Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation

    NASA Astrophysics Data System (ADS)

    Helgesson, P.; Sjöstrand, H.; Koning, A. J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.

    2016-01-01

    In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to incorporate experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In the practical cases studied, the estimates for the likelihood weights converge impractically slowly with the sample size compared to matrix inversion, and the computational time is also estimated to be greater than that of matrix inversion in cases with more experimental points. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to interpret intuitively than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivate the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also be used in cases where the experimental uncertainties are not Gaussian, and for purposes other than computing the likelihood, e.g., to produce random experimental data sets for a more direct use in ND evaluation.
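
    A minimal numpy sketch of the two likelihood routes contrasted above (the two-point data set, its uncertainties, and the sample size are invented for illustration): the conventional multivariate-Gaussian likelihood built from an inverted covariance matrix, and a Monte Carlo estimate that instead samples the shared systematic error, conditional on which the experimental points are independent.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy experiment: two correlated measurements of the same model prediction.
        y_exp = np.array([1.02, 0.97])   # measured values (hypothetical)
        y_mod = np.array([1.00, 1.00])   # model prediction being weighted
        sig_r = np.array([0.03, 0.04])   # independent (random) uncertainties
        sig_s = 0.05                     # fully shared systematic uncertainty
        d = y_exp - y_mod

        # Conventional route: build the experimental covariance and invert it.
        cov = np.diag(sig_r**2) + sig_s**2 * np.ones((2, 2))
        chi2 = d @ np.linalg.solve(cov, d)
        L_exact = np.exp(-0.5 * chi2) / np.sqrt((2 * np.pi)**2 * np.linalg.det(cov))

        # Sampling route: draw the systematic error; given each draw the points
        # are independent Gaussians, so no matrix inversion is needed.
        n = 200_000
        eps = rng.normal(0.0, sig_s, size=n)          # systematic-error samples
        resid = d[None, :] - eps[:, None]             # residuals given each draw
        cond = np.exp(-0.5 * np.sum((resid / sig_r)**2, axis=1))
        cond /= np.prod(np.sqrt(2 * np.pi) * sig_r)
        L_mc = cond.mean()

        print(L_exact, L_mc)   # the two estimates agree as n grows large

    As the abstract notes, the sampled estimate trades the matrix inversion for Monte Carlo noise that decays only slowly with the sample size n.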

  2. Statistical uncertainties and systematic errors in weak lensing mass estimates of galaxy clusters

    E-print Network

    Köhlinger, F.; Hoekstra, H.; Eriksen, M.

    2015-01-01

    Upcoming and ongoing large-area weak lensing surveys will also discover large samples of galaxy clusters. Accurate and precise masses of galaxy clusters are of major importance for cosmology, for example, in establishing well-calibrated observational halo mass functions for comparison with cosmological predictions. We investigate the level of statistical uncertainties and sources of systematic errors expected for weak lensing mass estimates. Future surveys that will cover large areas on the sky, such as Euclid or LSST and to a lesser extent DES, will provide the largest weak lensing cluster samples with the lowest level of statistical noise regarding ensembles of galaxy clusters. However, the expected low level of statistical uncertainties requires us to scrutinize various sources of systematic errors. In particular, we investigate the bias due to cluster member galaxies which are erroneously treated as background source galaxies due to wrongly assigned photometric redshifts. We find that this effect is significant when referring to stacks of galaxy clusters.

  3. A Novel Systematic Error Compensation Algorithm Based on Least Squares Support Vector Regression for Star Sensor Image Centroid Estimation

    PubMed Central

    Yang, Jun; Liang, Bin; Zhang, Tao; Song, Jingyan

    2011-01-01

    The star centroid estimation is the most important operation, which directly affects the precision of attitude determination for star sensors. This paper presents a theoretical study of the systematic error introduced by the star centroid estimation algorithm. The systematic error is analyzed through a frequency domain approach and numerical simulations. It is shown that the systematic error consists of the approximation error and the truncation error, which result from the discretization approximation and sampling window limitations, respectively. A criterion for choosing the size of the sampling window to reduce the truncation error is given in this paper. The systematic error can be evaluated as a function of the actual star centroid positions under different Gaussian widths of star intensity distribution. In order to eliminate the systematic error, a novel compensation algorithm based on least squares support vector regression (LSSVR) with a Radial Basis Function (RBF) kernel is proposed. Simulation results show that when the compensation algorithm is applied to the 5-pixel star sampling window, the accuracy of star centroid estimation is improved from 0.06 to 6 × 10⁻⁵ pixels. PMID:22164021
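
    The truncation part of this systematic error is easy to reproduce numerically. The Python sketch below (the Gaussian width and helper name are hypothetical choices; the LSSVR compensation itself is not implemented here) centroids a pixel-integrated 1-D Gaussian star by centre of mass inside a 5-pixel window, showing the bias as a function of the true sub-pixel position.

        import numpy as np
        from math import erf, sqrt

        def com_centroid(true_center, sigma=0.7, window=5):
            # Centre-of-mass centroid of a pixel-integrated 1-D Gaussian star.
            px = np.arange(-(window // 2), window // 2 + 1)
            edges = np.append(px - 0.5, px[-1] + 0.5)
            cdf = np.array([0.5 * (1 + erf((e - true_center) / (sqrt(2) * sigma)))
                            for e in edges])
            intensity = np.diff(cdf)   # flux collected by each pixel
            return float(np.sum(px * intensity) / np.sum(intensity))

        # The estimate is exact by symmetry at the pixel centre but biased in between:
        for x0 in np.linspace(-0.5, 0.5, 5):
            err = com_centroid(x0) - x0
            print(f"true {x0:+.2f}  systematic error {err:+.5f}")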

  4. Statistical uncertainties and systematic errors in weak lensing mass estimates of galaxy clusters

    NASA Astrophysics Data System (ADS)

    Köhlinger, F.; Hoekstra, H.; Eriksen, M.

    2015-11-01

    Upcoming and ongoing large-area weak lensing surveys will also discover large samples of galaxy clusters. Accurate and precise masses of galaxy clusters are of major importance for cosmology, for example, in establishing well-calibrated observational halo mass functions for comparison with cosmological predictions. We investigate the level of statistical uncertainties and sources of systematic errors expected for weak lensing mass estimates. Future surveys that will cover large areas on the sky, such as Euclid or LSST and to a lesser extent DES, will provide the largest weak lensing cluster samples with the lowest level of statistical noise regarding ensembles of galaxy clusters. However, the expected low level of statistical uncertainties requires us to scrutinize various sources of systematic errors. In particular, we investigate the bias due to cluster member galaxies which are erroneously treated as background source galaxies due to wrongly assigned photometric redshifts. We find that this effect is significant when referring to stacks of galaxy clusters. Finally, we study the bias due to miscentring, i.e. the displacement between any observationally defined cluster centre and the true minimum of its gravitational potential. The impact of this bias might be significant with respect to the statistical uncertainties. However, complementary future missions such as eROSITA will allow us to define stringent priors on miscentring parameters which will mitigate this bias significantly.

  5. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    NASA Astrophysics Data System (ADS)

    Mandelbaum, Rachel; Rowe, Barnaby; Armstrong, Robert; Bard, Deborah; Bertin, Emmanuel; Bosch, James; Boutigny, Dominique; Courbin, Frederic; Dawson, William A.; Donnarumma, Annamaria; Fenech Conti, Ian; Gavazzi, Raphaël; Gentile, Marc; Gill, Mandeep S. S.; Hogg, David W.; Huff, Eric M.; Jee, M. James; Kacprzak, Tomasz; Kilbinger, Martin; Kuntzer, Thibault; Lang, Dustin; Luo, Wentao; March, Marisa C.; Marshall, Philip J.; Meyers, Joshua E.; Miller, Lance; Miyatake, Hironao; Nakajima, Reiko; Ngolé Mboula, Fred Maurice; Nurbaeva, Guldariya; Okura, Yuki; Paulin-Henriksson, Stéphane; Rhodes, Jason; Schneider, Michael D.; Shan, Huanyuan; Sheldon, Erin S.; Simet, Melanie; Starck, Jean-Luc; Sureau, Florent; Tewes, Malte; Zarb Adami, Kristian; Zhang, Jun; Zuntz, Joe

    2015-07-01

    We present first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ˜1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods' results support the simple model in which additive shear biases depend linearly on PSF ellipticity.

  6. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    SciTech Connect

    Mandelbaum, Rachel; Rowe, Barnaby; Armstrong, Robert; Bard, Deborah; Bertin, Emmanuel; Bosch, James; Boutigny, Dominique; Courbin, Frederic; Dawson, William A.; Donnarumma, Annamaria; Fenech Conti, Ian; Gavazzi, Raphaël; Gentile, Marc; Gill, Mandeep S. S.; Hogg, David W.; Huff, Eric M.; Jee, M. James; Kacprzak, Tomasz; Kilbinger, Martin; Kuntzer, Thibault; Lang, Dustin; Luo, Wentao; March, Marisa C.; Marshall, Philip J.; Meyers, Joshua E.; Miller, Lance; Miyatake, Hironao; Nakajima, Reiko; Ngolé Mboula, Fred Maurice; Nurbaeva, Guldariya; Okura, Yuki; Paulin-Henriksson, Stéphane; Rhodes, Jason; Schneider, Michael D.; Shan, Huanyuan; Sheldon, Erin S.; Simet, Melanie; Starck, Jean-Luc; Sureau, Florent; Tewes, Malte; Zarb Adami, Kristian; Zhang, Jun; Zuntz, Joe

    2015-05-11

    This study presents first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods' results support the simple model in which additive shear biases depend linearly on PSF ellipticity.

  7. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    DOE PAGES Beta

    Mandelbaum, Rachel; Rowe, Barnaby; Armstrong, Robert; Bard, Deborah; Bertin, Emmanuel; Bosch, James; Boutigny, Dominique; Courbin, Frederic; Dawson, William A.; Donnarumma, Annamaria; et al

    2015-05-11

    This study presents first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods' results support the simple model in which additive shear biases depend linearly on PSF ellipticity.

  8. Estimation of Systematic Errors of MODIS (IEEE Geoscience and Remote Sensing Letters, vol. 3, no. 4, October 2006, p. 541)

    E-print Network

    Liang, Shunlin

    ... leak, and other factors may cause new biases. Detection of the systematic errors in different channels ...

  9. Estimation of systematic errors in UHE CR energy reconstruction for ANITA-3 experiment

    NASA Astrophysics Data System (ADS)

    Bugaev, Viatcheslav; Rauch, Brian; Binns, Robert; Israel, Martin; Belov, Konstantin; Wissel, Stephanie; Romero-Wolf, Andres

    2013-04-01

    The third mission of the balloon-borne ANtarctic Impulsive Transient Antenna (ANITA-3), scheduled for December 2013, will be optimized for the measurement of impulsive radio signals from Ultra-High Energy Cosmic Rays (UHE CR), i.e. charged particles with energies above 10^19 eV, in addition to the neutrinos ANITA was originally designed for. The event reconstruction algorithm for UHE CR relies on the detection of radio emissions (RF) in the frequency range 200-1200 MHz produced by the charged component of Extensive Air Showers initiated by these particles. The UHE CR energy reconstruction method for ANITA is subject to systematic uncertainties introduced by models used in Monte Carlo simulations of RF. The presented study evaluates these systematic uncertainties by comparing the outputs of two RF simulation codes, CoREAS and ZHAireS, for different event statistics, and by propagating the differences in the outputs through the energy reconstruction method.

  10. Systematic errors in temperature estimates from MODIS data covering the western Palearctic and their impact on a parasite development model.

    PubMed

    Alonso-Carné, Jorge; García-Martín, Alberto; Estrada-Peña, Agustin

    2013-11-01

    The modelling of habitat suitability for parasites is a growing area of research due to its association with climate change and the ensuing shifts in the distribution of infectious diseases. Such models depend on remote sensing data and require accurate, high-resolution temperature measurements. The temperature is critical for accurate estimation of development rates and potential habitat ranges for a given parasite. The MODIS sensors aboard the Aqua and Terra satellites provide high-resolution temperature data for remote sensing applications. This paper describes a comparative analysis of MODIS-derived temperatures relative to ground records of surface temperature in the western Palaearctic. The results show that MODIS overestimated maximum temperature values and underestimated minimum temperatures by up to 5-6 °C. The combined use of both Aqua and Terra datasets provided the most accurate temperature estimates around latitude 35-44° N, with an overestimation during spring-summer months and an underestimation in autumn-winter. Errors in temperature estimation were associated with specific ecological regions within the target area as well as with technical limitations in the temporal and orbital coverage of the satellites (e.g. sensor limitations and satellite transit times). We estimated the propagation of temperature uncertainties into parasite habitat suitability models by comparing the outcomes of published models. Error estimates reached 36% of the respective annual measurements, depending on the model used. Our analysis demonstrates the importance of adequate image processing and points out the limitations of MODIS temperature data as inputs into predictive models concerning parasite lifecycles. PMID:24258878

  11. A statistical analysis of systematic errors in temperature and ram velocity estimates from satellite-borne retarding potential analyzers

    SciTech Connect

    Klenzing, J. H.; Earle, G. D.; Heelis, R. A.; Coley, W. R.

    2009-05-15

    The use of biased grids as energy filters for charged particles is common in satellite-borne instruments such as a planar retarding potential analyzer (RPA). Planar RPAs are currently flown on missions such as the Communications/Navigation Outage Forecast System and the Defense Meteorological Satellites Program to obtain estimates of geophysical parameters including ion velocity and temperature. It has been shown previously that the use of biased grids in such instruments creates a nonuniform potential in the grid plane, which leads to inherent errors in the inferred parameters. A simulation of ion interactions with various configurations of biased grids has been developed using a commercial finite-element analysis software package. Using a statistical approach, the simulation calculates collected flux from Maxwellian ion distributions with three-dimensional drift relative to the instrument. Perturbations in the performance of flight instrumentation relative to expectations from the idealized RPA flux equation are discussed. Both single grid and dual-grid systems are modeled to investigate design considerations. Relative errors in the inferred parameters for each geometry are characterized as functions of ion temperature and drift velocity.

  12. Suppressing systematic control errors to high orders

    NASA Astrophysics Data System (ADS)

    Bažant, P.; Frydrych, H.; Alber, G.; Jex, I.

    2015-08-01

    Dynamical decoupling is a powerful method for protecting quantum information against unwanted interactions with the help of open-loop control pulses. Realistic control pulses are not ideal and may introduce additional systematic errors. We introduce a class of self-stabilizing pulse sequences capable of suppressing such systematic control errors efficiently in qubit systems. Embedding already known decoupling sequences into these self-stabilizing sequences offers powerful means to achieve robustness against unwanted external perturbations and systematic control errors. As these self-stabilizing sequences are based on single-qubit operations, they offer interesting perspectives for future applications in quantum information processing.

  13. Resolving systematic errors in estimates of net ecosystem exchange of CO2 and ecosystem respiration in a tropical ...

    E-print Network

    Hutyra, Lucy R.

    Received 7 November 2007; accepted 10 March 2008. Keywords: carbon, eddy correlation, LBA, respiration, Amazon, tropical rainforest. The controls on uptake and release of CO2 by tropical rainforests ... This study uses the comprehensive data from our study site in an old-growth tropical rainforest ...

  14. Measuring Systematic Error with Curve Fits

    ERIC Educational Resources Information Center

    Rupright, Mark E.

    2011-01-01

    Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…

  15. Treatment of systematic errors in land data assimilation systems

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Data assimilation systems are generally designed to minimize the influence of random error on the estimation of system states. Yet, experience with land data assimilation systems has also revealed the presence of large systematic differences between model-derived and remotely-sensed estimates of lan...

  16. Mars gravitational field estimation error

    NASA Technical Reports Server (NTRS)

    Compton, H. R.; Daniels, E. F.

    1972-01-01

    The error covariance matrices associated with a weighted least-squares differential correction process have been analyzed for accuracy in determining the gravitational coefficients through degree and order five in the Mars gravitational potential function. The results are presented in terms of standard deviations for the assumed estimated parameters. The covariance matrices were calculated by assuming Doppler tracking data from a Mars orbiter, a priori statistics for the estimated parameters, and model error uncertainties for tracking-station locations, the Mars ephemeris, the astronomical unit, the Mars gravitational constant (GM), and the gravitational coefficients of degrees six and seven. Model errors were treated by using the concept of consider parameters.
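
    The closing idea, treating model errors as "consider parameters", can be summarized in a few lines of numpy (the matrices below are toy stand-ins, not the Mars tracking geometry): consider parameters are not estimated, but their a priori covariance is folded into the covariance of the weighted least-squares solution.

        import numpy as np

        rng = np.random.default_rng(1)

        m, n_x, n_c = 50, 3, 2            # measurements, estimated and consider params
        Hx = rng.normal(size=(m, n_x))    # partials w.r.t. estimated parameters
        Hc = rng.normal(size=(m, n_c))    # partials w.r.t. consider parameters
        W = np.eye(m) / 0.01**2           # measurement weights (1-sigma = 0.01)
        Pcc = np.diag([1e-4, 4e-4])       # a priori covariance of consider params

        # Standard weighted least-squares ("data noise only") covariance:
        Px = np.linalg.inv(Hx.T @ W @ Hx)

        # Sensitivity of the estimate to the unestimated (consider) parameters:
        S = Px @ Hx.T @ W @ Hc

        # Consider covariance: data noise plus the consider-parameter contribution.
        P_consider = Px + S @ Pcc @ S.T

        print(np.sqrt(np.diag(Px)))          # formal standard deviations
        print(np.sqrt(np.diag(P_consider)))  # inflated by model-error uncertainty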

  17. Systematic errors in long baseline oscillation experiments

    SciTech Connect

    Harris, Deborah A.; /Fermilab

    2006-02-01

    This article gives a brief overview of long baseline neutrino experiments and their goals, and then describes the different kinds of systematic errors that are encountered in these experiments. Particular attention is paid to the uncertainties that come about because of imperfect knowledge of neutrino cross sections and more generally how neutrinos interact in nuclei. Near detectors are planned for most of these experiments, and the extent to which certain uncertainties can be reduced by the presence of near detectors is also discussed.

  18. Reducing systematic error in weak lensing cluster surveys

    SciTech Connect

    Utsumi, Yousuke; Miyazaki, Satoshi; Hamana, Takashi; Geller, Margaret J.; Kurtz, Michael J.; Fabricant, Daniel G.; Dell'Antonio, Ian P.; Oguri, Masamune

    2014-05-10

    Weak lensing provides an important route toward collecting samples of clusters of galaxies selected by mass. Subtle systematic errors in image reduction can compromise the power of this technique. We use the B-mode signal to quantify this systematic error and to test methods for reducing this error. We show that two procedures are efficient in suppressing systematic error in the B-mode: (1) refinement of the mosaic CCD warping procedure to conform to absolute celestial coordinates and (2) truncation of the smoothing procedure on a scale of 10'. Application of these procedures reduces the systematic error to 20% of its original amplitude. We provide an analytic expression for the distribution of the highest peaks in noise maps that can be used to estimate the fraction of false peaks in the weak-lensing κ signal-to-noise ratio (S/N) maps as a function of the detection threshold. Based on this analysis, we select a threshold S/N = 4.56 for identifying an uncontaminated set of weak-lensing peaks in two test fields covering a total area of ≈3 deg². Taken together these fields contain seven peaks above the threshold. Among these, six are probable systems of galaxies and one is a superposition. We confirm the reliability of these peaks with dense redshift surveys, X-ray, and imaging observations. The systematic error reduction procedures we apply are general and can be applied to future large-area weak-lensing surveys. Our high-peak analysis suggests that with an S/N threshold of 4.5, there should be only 2.7 spurious weak-lensing peaks even in an area of 1000 deg², where we expect ≈2000 peaks based on our Subaru fields.

  19. Medication Errors in the Southeast Asian Countries: A Systematic Review

    PubMed Central

    Salmasi, Shahrzad; Khan, Tahir Mehmood; Hong, Yet Hoi; Ming, Long Chiau; Wong, Tin Wui

    2015-01-01

    Background: Medication error (ME) is a worldwide issue, but most studies on ME have been undertaken in developed countries and very little is known about ME in Southeast Asian countries. This study aimed to systematically identify and review research done on ME in Southeast Asian countries in order to identify common types of ME and estimate its prevalence in this region. Methods: The literature relating to MEs in Southeast Asian countries was systematically reviewed in December 2014 using Embase, Medline, PubMed, ProQuest Central and CINAHL. Inclusion criteria were studies (in any language) that investigated the incidence and the contributing factors of ME in patients of all ages. Results: The 17 included studies reported data from six of the eleven Southeast Asian countries: five studies in Singapore, four in Malaysia, three in Thailand, three in Vietnam, one in the Philippines and one in Indonesia. There were no data on MEs in Brunei, Laos, Cambodia, Myanmar and Timor. Of the seventeen included studies, eleven measured administration errors, four focused on prescribing errors, three were done on preparation errors, three on dispensing errors and two on transcribing errors. There was only one study of reconciliation error. Three studies were interventional. Discussion: The most frequently reported types of administration error were incorrect time, omission error and incorrect dose. Staff shortages, and hence heavy workload for nurses, doctor/nurse distraction, and misinterpretation of the prescription/medication chart were identified as contributing factors of ME. There is a serious lack of studies on this topic in this region, which needs to be addressed if the issue of ME is to be fully understood and addressed. PMID:26340679

  20. More on Systematic Error in a Boyle's Law Experiment

    ERIC Educational Resources Information Center

    McCall, Richard P.

    2012-01-01

    A recent article in "The Physics Teacher" describes a method for analyzing a systematic error in a Boyle's law laboratory activity. Systematic errors are important to consider in physics labs because they tend to bias the results of measurements. There are numerous laboratory examples and resources that discuss this common source of error.

  21. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting the original system of equations with an 'error system' that is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  22. Identification and correction of systematic error in high-throughput sequence data

    PubMed Central

    2011-01-01

    Background: A feature common to all DNA sequencing technologies is the presence of base-call errors in the sequenced reads. The implications of such errors are application specific, ranging from minor informatics nuisances to major problems affecting biological inferences. Recently developed "next-gen" sequencing technologies have greatly reduced the cost of sequencing, but have been shown to be more error prone than previous technologies. Both position specific (depending on the location in the read) and sequence specific (depending on the sequence in the read) errors have been identified in Illumina and Life Technology sequencing platforms. We describe a new type of systematic error that manifests as statistically unlikely accumulations of errors at specific genome (or transcriptome) locations. Results: We characterize and describe systematic errors using overlapping paired reads from high-coverage data. We show that such errors occur in approximately 1 in 1000 base pairs, and that they are highly replicable across experiments. We identify motifs that are frequent at systematic error sites, and describe a classifier that distinguishes heterozygous sites from systematic error. Our classifier is designed to accommodate data from experiments in which the allele frequencies at heterozygous sites are not necessarily 0.5 (such as in the case of RNA-Seq), and can be used with single-end datasets. Conclusions: Systematic errors can easily be mistaken for heterozygous sites in individuals, or for SNPs in population analyses. Systematic errors are particularly problematic in low coverage experiments, or in estimates of allele-specific expression from RNA-Seq data. Our characterization of systematic error has allowed us to develop a program, called SysCall, for identifying and correcting such errors. We conclude that correction of systematic errors is important to consider in the design and interpretation of high-throughput sequencing experiments. PMID:22099972
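
    A toy sketch of the underlying statistical idea (the error rate, threshold, and helper name are hypothetical; this is not the SysCall classifier): flag positions whose mismatch pile-up is binomially improbable given the per-base error rate.

        import numpy as np
        from scipy.stats import binom

        def flag_systematic_sites(mismatches, depths, base_error=1e-3, alpha=1e-6):
            # P(X >= k) under Binomial(depth, base_error) at each position.
            pvals = binom.sf(np.asarray(mismatches) - 1, np.asarray(depths), base_error)
            return pvals < alpha

        depths = np.array([100, 100, 100, 100])
        mismatches = np.array([0, 1, 12, 55])
        print(flag_systematic_sites(mismatches, depths))
        # -> [False False  True  True]; 12 mismatches in 100 reads is already wildly
        # unlikely at a 1e-3 error rate. The 55/100 site could instead be a true
        # heterozygous site, which is why the paper's classifier uses additional
        # evidence (e.g. overlapping read pairs) to separate the two cases.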

  23. Adjoint Error Estimation for Linear Advection

    SciTech Connect

    Connors, J M; Banks, J W; Hittinger, J A; Woodward, C S

    2011-03-30

    An a posteriori error formula is described when a statistical measurement of the solution to a hyperbolic conservation law in 1D is estimated by finite volume approximations. This is accomplished using adjoint error estimation. In contrast to previously studied methods, the adjoint problem is divorced from the finite volume method used to approximate the forward solution variables. An exact error formula and computable error estimate are derived based on an abstractly defined approximation of the adjoint solution. This framework allows the error to be computed to an arbitrary accuracy given a sufficiently well resolved approximation of the adjoint solution. The accuracy of the computable error estimate provably satisfies an a priori error bound for sufficiently smooth solutions of the forward and adjoint problems. The theory does not currently account for discontinuities. Computational examples are provided that show support of the theory for smooth solutions. The application to problems with discontinuities is also investigated computationally.

  24. Improved Systematic Pointing Error Model for the DSN Antennas

    NASA Technical Reports Server (NTRS)

    Rochblatt, David J.; Withington, Philip M.; Richter, Paul H.

    2011-01-01

    New pointing models have been developed for large reflector antennas whose construction is founded on an elevation-over-azimuth mount. At JPL, the new models were applied to the Deep Space Network (DSN) 34-meter antenna subnet for correction of systematic pointing errors; this achieved significant improvement in performance at Ka-band (32 GHz) and X-band (8.4 GHz). The new models provide pointing improvements relative to the traditional models by a factor of two to three, which translates to approximately 3-dB performance improvement at Ka-band. For radio science experiments where blind pointing performance is critical, the new innovation provides a new enabling technology. The model extends the traditional physical models with higher-order mathematical terms, thereby increasing the resolution of the model for a better fit to the underlying systematic imperfections that are the cause of antenna pointing errors. The philosophy of the traditional model was that all mathematical terms in the model must be traced to a physical phenomenon causing antenna pointing errors. The traditional physical terms are: antenna axis tilts, gravitational flexure, azimuth collimation, azimuth encoder fixed offset, azimuth and elevation skew, elevation encoder fixed offset, residual refraction, azimuth encoder scale error, and antenna pointing de-rotation terms for beam waveguide (BWG) antennas. Besides the addition of spherical harmonics terms, the new models differ from the traditional ones in that the coefficients for the cross-elevation and elevation corrections are completely independent and may be different, while in the traditional model some of the terms are identical. In addition, the new software allows for all-sky or mission-specific model development, and can utilize the previously used model as an a priori estimate for the development of the updated models.
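
    The core fitting idea, extending a physical pointing model with higher-order terms and refitting to pointing residuals, can be illustrated on synthetic data (the terms and millidegree values below are invented and far simpler than the DSN model):

        import numpy as np

        rng = np.random.default_rng(3)

        # Toy elevation pointing residuals: classical terms plus a 2nd-harmonic ripple.
        az = rng.uniform(0.0, 2 * np.pi, 400)
        el = rng.uniform(np.radians(10), np.radians(85), 400)
        truth = 5.0 + 12.0 * np.cos(el) + 3.0 * np.cos(2 * az) - 2.0 * np.sin(2 * az)
        obs = truth + rng.normal(0.0, 0.5, el.size)      # millidegrees, with noise

        # Traditional physical terms only: fixed offset + gravitational flexure.
        A_trad = np.column_stack([np.ones_like(el), np.cos(el)])
        # Extended model: add low-order azimuth harmonics (stand-ins for the
        # higher-order/spherical-harmonic terms described above).
        A_ext = np.column_stack([A_trad, np.cos(2 * az), np.sin(2 * az)])

        for name, A in [("traditional", A_trad), ("extended", A_ext)]:
            coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
            rms = np.sqrt(np.mean((obs - A @ coef)**2))
            print(f"{name:12s} post-fit RMS = {rms:.2f} mdeg")

    When higher-order systematics are actually present, the extended fit cuts the post-fit residual by the kind of factor of two to three the abstract reports.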

  25. Strategies for minimizing the impact of systematic errors on land data assimilation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Data assimilation concerns itself primarily with the impact of random stochastic errors on state estimation. However, the developers of land data assimilation systems are commonly faced with systematic errors arising from both the parameterization of a land surface model and the need to pre-process ...

  26. Estimating Bias Errors in the GPCP Monthly Precipitation Product

    NASA Astrophysics Data System (ADS)

    Sapiano, M. R.; Adler, R. F.; Gu, G.; Huffman, G. J.

    2012-12-01

    Climatological data records are important to understand global and regional variations and trends. The Global Precipitation Climatology Project (GPCP) record of monthly, globally complete precipitation analyses stretches back to 1979 and is based on a merger of both satellite and surface gauge records. It is a heavily used data record, cited in over 1500 journal papers. It is important that these types of data records also include information about the uncertainty of the presented estimates. Indeed the GPCP monthly analysis already includes estimates of the random error, due to algorithm and sampling random error, associated with each gridded, monthly value (Huffman, 1997). It is also important to include estimates of bias error, i.e., the uncertainty of the monthly value (or climatology) in terms of its absolute value. Results are presented based on a procedure (Adler et al., 2012) to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources and merged products. The GPCP monthly product is used as a base precipitation estimate, with other input products included when they are within ±50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation (σ) of the included products is then taken to be the estimated systematic, or bias, error. The results allow us to first examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal mean errors and estimated errors over large areas, such as ocean and land for both the tropics and for the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where we should be more or less confident of our mean precipitation estimates. In the tropics, relative bias error estimates (σ/μ, where μ is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, compared to 10-15% in the western Pacific part of the ITCZ. Examining latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold season errors at high latitudes due to snow. Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, considered to be an upper bound due to lack of sign-of-the-error cancelling when integrating over different areas with a different number of input products. For the globe the calculated relative error estimate from this study is about 9%, also probably a slight overestimate. These tropical and global estimated bias errors provide one estimate of our current state of knowledge of the planet's mean precipitation. The bias uncertainty procedure is being extended so that it can be applied to individual months (e.g., January 1998) on the GPCP grid (2.5 degree latitude-longitude). Validation of the bias estimates and the monthly random error estimates will also be presented.
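
    A schematic numpy rendering of the recipe described above (the product values are invented, and the real procedure operates on gridded zonal means for ocean and land separately): screen the inputs against the GPCP reference at ±50%, then take the standard deviation of the surviving products as the estimated bias error.

        import numpy as np

        # Zonal-mean precipitation (mm/day) for one latitude band, one per product.
        gpcp = 2.8                                            # reference estimate
        products = np.array([2.6, 3.1, 2.9, 1.2, 3.3, 2.7])   # other algorithms

        # Keep products within +/-50% of the GPCP value.
        kept = products[np.abs(products - gpcp) <= 0.5 * gpcp]

        bias_error = np.std(kept, ddof=1)          # sigma of the included products
        relative_bias_error = bias_error / gpcp    # sigma/mu, as quoted above

        print(f"kept {kept.size}/{products.size} products")
        print(f"estimated bias error: {bias_error:.2f} mm/day "
              f"({100 * relative_bias_error:.0f}%)")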

  27. Systematic parameter errors in inspiraling neutron star binaries.

    PubMed

    Favata, Marc

    2014-03-14

    The coalescence of two neutron stars is an important gravitational wave source for LIGO and other detectors. Numerous studies have considered the precision with which binary parameters (masses, spins, Love numbers) can be measured. Here I consider the accuracy with which these parameters can be determined in the presence of systematic errors due to waveform approximations. These approximations include truncation of the post-Newtonian (PN) series and neglect of neutron star (NS) spin, tidal deformation, or orbital eccentricity. All of these effects can yield systematic errors that exceed statistical errors for plausible parameter values. In particular, neglecting spin, eccentricity, or high-order PN terms causes a significant bias in the NS Love number. Tidal effects will not be measurable with PN inspiral waveforms if these systematic errors are not controlled. PMID:24679276

  28. Wind Power Error Estimation in Resource Assessments

    PubMed Central

    Rodríguez, Osvaldo; del Río, Jesús A.; Jaramillo, Oscar A.; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data, without prior statistical treatment, and 28 wind turbine power curves fitted by Lagrange's method to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies. PMID:26000444
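
    A small sketch of the propagation step (the cubic power curve and all numbers are illustrative stand-ins, not one of the paper's 28 Lagrange-fitted curves): perturb the measured wind speed by its percentage error and read the resulting power spread off the turbine curve.

        import numpy as np

        def power_curve(v, rated_v=12.0, rated_p=2000.0, cut_in=3.0, cut_out=25.0):
            # Illustrative turbine power curve in kW (cubic rise below rated speed).
            v = np.asarray(v, dtype=float)
            p = rated_p * np.clip((v - cut_in) / (rated_v - cut_in), 0.0, 1.0)**3
            return np.where((v >= cut_in) & (v <= cut_out), p, 0.0)

        v_measured = np.array([5.2, 7.8, 9.4, 11.1])   # wind speed record (m/s)
        speed_err = 0.10                               # 10% measurement error

        p_nominal = power_curve(v_measured)
        p_high = power_curve(v_measured * (1 + speed_err))
        p_low = power_curve(v_measured * (1 - speed_err))

        # Propagated power error per record, as a fraction of the nominal output:
        rel_err = (p_high - p_low) / (2 * p_nominal)
        print(np.round(100 * rel_err, 1))   # percent error in estimated power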

  29. Wind power error estimation in resource assessments.

    PubMed

    Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data, without prior statistical treatment, and 28 wind turbine power curves fitted by Lagrange's method to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies. PMID:26000444

  30. Error Estimates for Numerical Integration Rules

    ERIC Educational Resources Information Center

    Mercer, Peter R.

    2005-01-01

    The starting point for this discussion of error estimates is the fact that integrals that arise in Fourier series have properties that can be used to get improved bounds. This idea is extended to more general situations.

  31. Uniform Error Estimates of Finite Difference ...

    E-print Network

    2012-03-20

    We establish uniform error estimates of finite difference methods for the nonlinear ... Other key techniques in the analysis include the energy method, cut-off of the nonlinearity, ... element method for NLS, see [2, 20]. ... In practical computation, ...

  32. Bayes Error Rate Estimation Using Classifier Ensembles

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep

    2003-01-01

    The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error, in general, yield rather weak results for small sample sizes, unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that just looks at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two problems from the Proben1 benchmarks. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.
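
    The first, averaging-based idea can be demonstrated on a synthetic problem whose Bayes error is known exactly (a toy plug-in ensemble illustrating the principle, not the paper's estimator):

        import numpy as np
        from math import erf, sqrt

        rng = np.random.default_rng(2)

        def phi(x):   # standard normal CDF
            return 0.5 * (1 + erf(x / sqrt(2)))

        # Two equiprobable classes, N(-1, 1) and N(+1, 1): Bayes error = phi(-1).
        bayes_error = phi(-1.0)

        def member_posterior(x, n_train=50):
            # One ensemble member: plug-in Gaussian posterior from its own sample.
            m0 = rng.normal(-1.0, 1.0, n_train).mean()   # estimated class-0 mean
            m1 = rng.normal(+1.0, 1.0, n_train).mean()   # estimated class-1 mean
            return 1.0 / (1.0 + np.exp((m0 - m1) * x + 0.5 * (m1**2 - m0**2)))

        # Large labelled test set for evaluating the decision rule.
        n = 200_000
        y = rng.integers(0, 2, n)
        x = rng.normal(2.0 * y - 1.0, 1.0)

        # Average the members' posterior estimates, then classify at 0.5.
        posts = np.mean([member_posterior(x) for _ in range(25)], axis=0)
        ensemble_error = np.mean((posts > 0.5) != y)

        print(f"Bayes error           {bayes_error:.4f}")
        print(f"ensemble plug-in rate {ensemble_error:.4f}")   # close to Bayes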

  33. Maximum Likelihood Analysis of Systematic Errors in Interferometric Observations of the Cosmic Microwave Background

    SciTech Connect

    Zhang Le; Timbie, Peter; Karakci, Ata; Korotkov, Andrei; Tucker, Gregory S.; Sutter, Paul M.; Wandelt, Benjamin D.; Bunn, Emory F.

    2013-06-01

    We investigate the impact of instrumental systematic errors in interferometric measurements of the cosmic microwave background (CMB) temperature and polarization power spectra. We simulate interferometric CMB observations to generate mock visibilities and estimate power spectra using the statistically optimal maximum likelihood technique. We define a quadratic error measure to determine allowable levels of systematic error that do not induce power spectrum errors beyond a given tolerance. As an example, in this study we focus on differential pointing errors. The effects of other systematics can be simulated by this pipeline in a straightforward manner. We find that, in order to accurately recover the underlying B-modes for r = 0.01 at 28 < l < 384, Gaussian-distributed pointing errors must be controlled to 0.7° root mean square for an interferometer with an antenna configuration similar to QUBIC, in agreement with analytical estimates. Only the statistical uncertainty for 28 < l < 88 would be changed at the ≈10% level. With the same instrumental configuration, we find that the pointing errors would slightly bias the 2σ upper limit of the tensor-to-scalar ratio r by ≈10%. We also show that the impact of pointing errors on the TB and EB measurements is negligibly small.

  34. Models for combining random and systematic errors: assumptions and consequences for different models

    PubMed

    Petersen, P H; Stöckl, D; Westgard, J O; Sandberg, S; Linnet, K; Thienpont, L

    2001-07-01

    A series of models for handling and combining systematic and random variations/errors are investigated in order to characterize the different models according to their purpose and application, and to discuss their flaws with regard to their assumptions. The following models are considered: (1) the linear model, where the random and systematic elements are combined according to a linear concept (TE = |bias| + z × σ, where TE is the total error, bias is the systematic error component, σ is the random error component (standard deviation or coefficient of variation), and z is the probability factor); (2) the squared model, with two sub-models of which one is the classical statistical variance model and the other is the GUM (Guide to Uncertainty in Measurements) model for estimating uncertainty of a measurement; and (3) the combined model, developed for the estimation of analytical quality specifications according to the clinical consequences (clinical outcome) of errors. The consequences of these models are investigated by calculation of the functions of transformation of bias into imprecision according to the assumptions and model calculations. As expected, the functions turn out to be rather different, with considerable consequences for these types of transformations. It is concluded that there are at least three models for combining systematic and random variation/errors, each created for its own specific purpose, with its own assumptions and resulting in considerably different results. These models should be used according to their purposes. PMID:11522103
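
    The difference between the first two models is easy to see numerically; a short Python comparison follows (the values are arbitrary; z = 1.65 corresponds to a one-sided 95% limit, and the squared model here simply treats the bias as one more variance contribution):

        from math import sqrt

        bias = 1.2     # systematic error component
        sigma = 0.8    # random error component (SD)
        z = 1.65       # probability factor

        # (1) Linear model: TE = |bias| + z * sigma
        te_linear = abs(bias) + z * sigma

        # (2) Squared/GUM-style model: combine the components in quadrature.
        te_squared = z * sqrt(bias**2 + sigma**2)

        print(f"linear model:  TE = {te_linear:.2f}")   # 2.52
        print(f"squared model: TE = {te_squared:.2f}")  # 2.38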

  35. PSF Anisotropy and Systematic Errors in Weak Lensing Surveys

    E-print Network

    Bhuvnesh Jain; Mike Jarvis; Gary Bernstein

    2005-12-23

    Given the basic parameters of a cosmic shear weak lensing survey, how well can systematic errors due to anisotropy in the point spread function (PSF) be corrected? The largest source of error in this correction to date has been the interpolation of the PSF to the locations of the galaxies. To address this error, we separate the PSF patterns into components that recur in multiple exposures/pointings and those that vary randomly between different exposures (such as those due to the atmosphere). In an earlier study we developed a principal component approach to correct the recurring PSF patterns (Jarvis and Jain 2004). In this paper we show how randomly varying PSF patterns can also be circumvented in the measurement of shear correlations. For the two-point correlation function this is done by simply using pairs of galaxy shapes measured in different exposures. Combining the two techniques allows us to tackle generic combinations of PSF anisotropy patterns. The second goal of this paper is to give a formalism for quantifying residual systematic errors due to PSF patterns. We show how the main PSF corrections improve with increasing survey area (and thus can stay below the reduced statistical errors), and we identify the residual errors which do not scale with survey area. Our formalism can be applied both to planned lensing surveys to optimize instrumental and survey parameters and to actual lensing data to quantify residual errors.

  36. Systematic Error Sources in a Measurement of G

    E-print Network

    Newman, Riley D.

    University of California, Irvine: Systematic Error Sources in a Measurement of G Using a Cryogenic ... (table-of-contents fragment: Introduction; Background; Techniques Used in G Measurements; Motivations; Traits of the UCI G Experiment; Cryogenic Operation)

  37. Systematic Errors in the Hubble Constant Measurement from the Sunyaev-Zel'dovich Effect

    E-print Network

    Hajime Kawahara; Tetsu Kitayama; Shin Sasaki; Yasushi Suto

    2007-10-04

    The Hubble constant estimated from the combined analysis of the Sunyaev-Zel'dovich effect and X-ray observations of galaxy clusters is systematically lower than those from other methods by 10-15 percent. We examine the origin of this systematic underestimate using an analytic model of the intracluster medium (ICM), and compare the prediction with idealistic triaxial models and with clusters extracted from cosmological hydrodynamical simulations. We identify three important sources of systematic error: density and temperature inhomogeneities in the ICM, departures from isothermality, and asphericity. In particular, the combination of the first two leads to the systematic underestimate of the ICM spectroscopic temperature relative to its emission-weighted one. We find that these three systematics well reproduce both the observed bias and the intrinsic dispersions of the Hubble constant estimated from the Sunyaev-Zel'dovich effect.

  38. Systematic Continuum Errors in the Lyα Forest and the Measured Temperature-Density Relation

    SciTech Connect

    Lee, Khee-Gan

    2012-07-10

    Continuum fitting uncertainties are a major source of error in estimates of the temperature-density relation (usually parameterized as a power law, T ∝ Δ^(γ−1)) of the intergalactic medium through the flux probability distribution function (PDF) of the Lyα forest. Using a simple order-of-magnitude calculation, we show that few-percent-level systematic errors in the placement of the quasar continuum due to, e.g., a uniform low-absorption Gunn-Peterson component could lead to errors in γ of the order of unity. This is quantified further using a simple semi-analytic model of the Lyα forest flux PDF. We find that underestimates (overestimates) in the continuum level can lead to a lower (higher) measured value of γ. By fitting models to mock data realizations generated with current observational errors, we find that continuum errors can cause a systematic bias in the estimated temperature-density relation of δ(γ) ≈ −0.1, while the error is increased to σ_γ ≈ 0.2, compared to σ_γ ≈ 0.1 in the absence of continuum errors.

  39. The Effect of Systematic Error in Forced Oscillation Testing

    NASA Technical Reports Server (NTRS)

    Williams, Brianne Y.; Landman, Drew; Flory, Isaac L., IV; Murphy, Patrick C.

    2012-01-01

    One of the fundamental problems in flight dynamics is the formulation of aerodynamic forces and moments acting on an aircraft in arbitrary motion. Classically, conventional stability derivatives are used for the representation of aerodynamic loads in the aircraft equations of motion. However, for modern aircraft with highly nonlinear and unsteady aerodynamic characteristics undergoing maneuvers at high angle of attack and/or angular rates the conventional stability derivative model is no longer valid. Attempts to formulate aerodynamic model equations with unsteady terms are based on several different wind tunnel techniques: for example, captive, wind tunnel single degree-of-freedom, and wind tunnel free-flying techniques. One of the most common techniques is forced oscillation testing. However, the forced oscillation testing method does not address the systematic and systematic correlation errors from the test apparatus that cause inconsistencies in the measured oscillatory stability derivatives. The primary objective of this study is to identify the possible sources and magnitude of systematic error in representative dynamic test apparatuses. Sensitivities of the longitudinal stability derivatives to systematic errors are computed, using a high fidelity simulation of a forced oscillation test rig, and assessed using both Design of Experiments and Monte Carlo methods.

  40. Weak gravitational lensing systematic errors in the dark energy survey

    NASA Astrophysics Data System (ADS)

    Plazas, Andres Alejandro

    Dark energy is one of the most important unsolved problems in modern physics, and weak gravitational lensing (WL) by mass structures along the line of sight ("cosmic shear") is a promising technique to learn more about its nature. However, WL is subject to numerous systematic errors which induce biases in measured cosmological parameters and prevent the development of its full potential. In this thesis, we advance the understanding of WL systematics in the context of the Dark Energy Survey (DES). We develop a testing suite to assess the performance of the shapelet-based DES WL measurement pipeline. We determine that the measurement bias of the parameters of our Point Spread Function (PSF) model scales as (S/N)^-2, implying that a PSF S/N > 75 is needed to satisfy DES requirements. PSF anisotropy suppression also satisfies the requirements for source galaxies with S/N ≳ 45. For low-noise, marginally resolved exponential galaxies, the shear calibration errors are up to about 0.06% (for shear values ≲ 0.075). Galaxies with S/N ≈ 75 present about 1% errors, sufficient for first-year DES data. However, more work is needed to satisfy full-area DES requirements, especially in the high-noise regime. We then implement tests to validate the high accuracy of the map between pixel coordinates and sky coordinates (astrometric solution), which is crucial to detect the required number of galaxies for WL in stacked images. We also study the effect of atmospheric dispersion on cosmic shear experiments such as DES and the Large Synoptic Survey Telescope (LSST) in the four griz bands. For DES (LSST), we find systematics in the g and r (g, r, and i) bands that are larger than required. We find that a simple linear correction in galaxy color is accurate enough to reduce dispersion shear systematics to insignificant levels in the r (i) band for DES (LSST). More complex corrections will likely reduce the systematic cosmic-shear errors below statistical errors for the LSST r band. However, g-band dispersion effects remain large enough for induced systematics to dominate the statistical error of both surveys, so cosmic-shear measurements should rely on the redder bands.

  41. Spatial reasoning in the treatment of systematic sensor errors

    SciTech Connect

    Beckerman, M.; Jones, J.P.; Mann, R.C.; Farkas, L.A.; Johnston, S.E.

    1988-01-01

    In processing ultrasonic and visual sensor data acquired by mobile robots, systematic errors can occur. The sonar errors include distortions in size and surface orientation due to the beam resolution, and false echoes. The vision errors include, among others, ambiguities in discriminating depth discontinuities from intensity gradients generated by variations in surface brightness. In this paper we present a methodology for the removal of systematic errors using data from the sonar sensor domain to guide the processing of information in the vision domain, and vice versa. During the sonar data processing, some errors are removed from 2D navigation maps through pattern analyses and consistent-labelling conditions, using spatial reasoning about the sonar beam and object characteristics. Others are removed using visual information. In the vision data processing, vertical edge segments are extracted using a Canny-like algorithm and are labelled. Object edge features are then constructed from the segments using statistical and spatial analyses. A least-squares method is used during the statistical analysis, and sonar range data are used in the spatial analysis. 7 refs., 10 figs.

  2. ON THE ESTIMATION OF SYSTEMATIC UNCERTAINTIES OF STAR FORMATION HISTORIES

    SciTech Connect

    Dolphin, Andrew E.

    2012-05-20

    In most star formation history (SFH) measurements, the reported uncertainties are those due to effects whose sizes can be readily measured: Poisson noise, adopted distance and extinction, and binning choices in the solution itself. However, the largest source of error, systematics in the adopted isochrones, is usually ignored and very rarely explicitly incorporated into the uncertainties. I propose a process by which estimates of the uncertainties due to evolutionary models can be incorporated into the SFH uncertainties. This process relies on the application of shifts in temperature and luminosity, the sizes of which must be calibrated for the data being analyzed. While there are inherent limitations, the ability to estimate the effect of systematic errors and include them in the overall uncertainty is significant. The effects are most notable in the case of shallow photometry, in which SFH measurements rely on evolved stars.
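
    As an illustration of the shift step described above, the sketch below perturbs a toy isochrone grid with a single random systematic offset in temperature and luminosity before the SFH would be re-measured. The shift sizes and grid arrays are hypothetical placeholders, not values from the paper; repeating the SFH fit over many shifted grids and taking the spread of the solutions gives the systematic uncertainty estimate.

        # Hedged sketch: one random systematic shift applied to a whole
        # isochrone, as in the process proposed above. Shift sizes and the
        # toy grid are assumptions for illustration only.
        import numpy as np

        rng = np.random.default_rng(0)

        def shifted_isochrone(log_teff, m_bol, sigma_logteff=0.01, sigma_mbol=0.05):
            """Apply the same random offset to every point: a systematic shift."""
            dt = rng.normal(0.0, sigma_logteff)
            dm = rng.normal(0.0, sigma_mbol)
            return log_teff + dt, m_bol + dm

        grid_logteff = np.linspace(3.6, 4.1, 5)   # toy log(Teff) values
        grid_mbol = np.linspace(6.0, -2.0, 5)     # toy bolometric magnitudes
        print(shifted_isochrone(grid_logteff, grid_mbol))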

  3. Tolerance for error and computational estimation ability.

    PubMed

    Hogan, Thomas P; Wyckoff, Laurie A; Krebs, Paul; Jones, William; Fitzgerald, Mark P

    2004-06-01

    Previous investigators have suggested that the personality variable tolerance for error is related to success in computational estimation. However, this suggestion has not been tested directly. This study examined the relationship between performance on a computational estimation test and scores on the NEO-Five Factor Inventory, a measure of the Big Five personality traits, including Openness, an index of tolerance for ambiguity. Other variables included SAT-I Verbal and Mathematics scores and self-rated mathematics ability. Participants were 65 college students. There was no significant relationship between the tolerance variable and computational estimation performance. There was a modest negative relationship between Agreeableness and estimation performance. The skepticism associated with the negative pole of the Agreeableness dimension may be important to pursue in further understanding of estimation ability. PMID:15362423

  4. Interpolation Error Estimates for Mean Value Coordinates

    E-print Network

    Rand, Alexander; Bajaj, Chandrajit

    2011-01-01

    In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, doi:10.1007/s10444-011-9218-z], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π.
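
    Since the abstract turns on properties of the mean value coordinates themselves, a small sketch of how they are computed may help. This uses Floater's tangent formula for a point strictly inside a convex polygon; the function and variable names are ours, and the check at the end exercises the partition-of-unity and linear-precision properties.

        # Minimal sketch of mean value coordinates via Floater's formula.
        import numpy as np

        def mean_value_coordinates(x, verts):
            v = verts - x                           # vectors from x to vertices
            r = np.linalg.norm(v, axis=1)           # distances |v_i - x|
            n = len(verts)
            ang = np.empty(n)
            for i in range(n):
                a, b = v[i], v[(i + 1) % n]
                cross = a[0] * b[1] - a[1] * b[0]   # signed area term
                ang[i] = np.arctan2(cross, np.dot(a, b))  # angle at x of edge i
            w = (np.tan(ang / 2) + np.tan(np.roll(ang, 1) / 2)) / r
            return w / w.sum()

        square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
        lam = mean_value_coordinates(np.array([0.25, 0.5]), square)
        print(lam.sum(), lam @ square)   # 1.0 and the point itself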

  5. Ultraspectral Sounding Retrieval Error Budget and Estimation

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, L. Larrabee; Yang, Ping

    2011-01-01

    The ultraspectral infrared radiances obtained from satellite observations provide atmospheric, surface, and/or cloud information. These measurements of the thermodynamic state are intended to initialize weather and climate models. Great effort has been given to retrieving and validating these atmospheric, surface, and/or cloud properties. The Error Consistency Analysis Scheme (ECAS), through fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of the absolute values and standard deviations of differences in both the spectral radiance and retrieved geophysical parameter domains. The retrieval error is assessed through ECAS without the assistance of other independent measurements such as radiosonde data. ECAS re-evaluates instrument random noise, and establishes the link between radiometric accuracy and retrieved geophysical parameter accuracy. ECAS can be applied to measurements of any ultraspectral instrument and any retrieval scheme with an associated RTM. In this paper, ECAS is described and demonstrated with measurements of the METOP-A satellite Infrared Atmospheric Sounding Interferometer (IASI).

  6. Factoring Algebraic Error for Relative Pose Estimation

    SciTech Connect

    Lindstrom, P; Duchaineau, M

    2009-03-09

    We address the problem of estimating the relative pose, i.e. translation and rotation, of two calibrated cameras from image point correspondences. Our approach is to factor the nonlinear algebraic pose error functional into translational and rotational components, and to optimize translation and rotation independently. This factorization admits subproblems that can be solved using direct methods with practical guarantees on global optimality. That is, for a given translation, the corresponding optimal rotation can directly be determined, and vice versa. We show that these subproblems are equivalent to computing the least eigenvector of second- and fourth-order symmetric tensors. When neither translation nor rotation is known, alternating translation and rotation optimization leads to a simple, efficient, and robust algorithm for pose estimation that improves on the well-known 5- and 8-point methods.
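
    The direct subproblem named above, finding the least eigenvector of a symmetric tensor, is cheap and globally optimal. A minimal stand-in (with a random symmetric matrix in place of the paper's pose-error tensors) looks like this:

        # The least eigenvector of a symmetric matrix minimizes x^T M x
        # over unit vectors; np.linalg.eigh returns eigenvalues in
        # ascending order, so column 0 is the minimizer. The matrix here
        # is a random placeholder, not the actual pose-error tensor.
        import numpy as np

        rng = np.random.default_rng(1)
        M = rng.normal(size=(4, 4))
        M = M + M.T                         # symmetrize
        vals, vecs = np.linalg.eigh(M)
        least = vecs[:, 0]
        print(vals[0], least @ M @ least)   # the two agree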

  7. Statistical and systematic errors in redshift-space distortion measurements from large surveys

    NASA Astrophysics Data System (ADS)

    Bianchi, D.; Guzzo, L.; Branchini, E.; Majerotto, E.; de la Torre, S.; Marulli, F.; Moscardini, L.; Angulo, R. E.

    2012-12-01

    We investigate the impact of statistical and systematic errors on measurements of linear redshift-space distortions (RSD) in future cosmological surveys by analysing large catalogues of dark matter haloes from the baryonic acoustic oscillation simulations at the Institute for Computational Cosmology. These allow us to estimate the dependence of errors on typical survey properties, such as volume, galaxy density and mass (i.e. bias factor) of the adopted tracer. We find that measures of the specific growth rate β = f/b using the Hamilton/Kaiser harmonic expansion of the redshift-space correlation function ξ(r_p, π) on scales larger than 3 h^-1 Mpc are typically underestimated by up to 10 per cent for galaxy-sized haloes. This is significantly larger than the corresponding statistical errors, which amount to a few per cent, indicating the importance of non-linear improvements to the Kaiser model to obtain accurate measurements of the growth rate. The systematic error shows a diminishing trend with increasing bias value (i.e. mass) of the haloes considered. We compare the amplitude and trends of statistical errors as a function of survey parameters to predictions obtained with the Fisher information matrix technique. This is what is usually adopted to produce RSD forecasts, based on the Feldman-Kaiser-Peacock prescription for the errors on the power spectrum. We show that this produces parameter errors fairly similar to the standard deviations from the halo catalogues, provided it is applied to strictly linear scales in Fourier space (k < 0.2 h Mpc^-1). Finally, we combine our measurements to define and calibrate an accurate scaling formula for the relative error on β as a function of the same parameters, which closely matches the simulation results in all explored regimes. This provides a handy and plausibly more realistic alternative to the Fisher matrix approach, to quickly and accurately predict statistical errors on RSD expected from future surveys.
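
    The calibration of a scaling formula like the one described in the last sentences can be sketched as a power-law fit in log space. Everything below is synthetic and illustrative; the variables, exponents, and noise level are our assumptions, not the paper's calibration.

        # Fit sigma_beta/beta = A * V^a * n^b * bias^c by linear least
        # squares on logs, using synthetic "simulation measurements".
        import numpy as np

        rng = np.random.default_rng(2)
        V = rng.uniform(0.5, 10.0, 50)        # survey volume (arbitrary units)
        n = rng.uniform(1e-4, 1e-2, 50)       # tracer number density
        bias = rng.uniform(1.0, 3.0, 50)      # tracer bias
        rel_err = (0.05 * V**-0.5 * (1e3 * n)**-0.2 * bias**0.3
                   * rng.lognormal(0.0, 0.05, 50))   # synthetic measurements

        X = np.column_stack([np.ones(50), np.log(V), np.log(n), np.log(bias)])
        (logA, a, b, c), *_ = np.linalg.lstsq(X, np.log(rel_err), rcond=None)
        print(f"sigma/beta ~ {np.exp(logA):.3f} * V^{a:.2f} * n^{b:.2f} * bias^{c:.2f}")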

  8. Analysis of Systematic Errors in the MuLan Muon Lifetime Experiment

    NASA Astrophysics Data System (ADS)

    McNabb, Ronald

    2007-04-01

    The MuLan experiment seeks to measure the muon lifetime to 1 ppm. To achieve this level of precision a multitude of systematic errors must be investigated. Analysis of the 2004 data set has been completed, resulting in a total error of 11 ppm (10 ppm statistical, 5 ppm systematic). Data obtained in 2006 are currently being analyzed with an expected statistical error of 1.3 ppm. This talk will discuss the methods used to study and reduce the systematic errors for the 2004 data set and improvements for the 2006 data set which should reduce the systematic errors even further.
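
    For orientation, the core statistical measurement is an exponential lifetime fit to a decay-time histogram; the systematic error studies then ask how gain shifts, pileup, and backgrounds distort such a fit. The sketch below is our own illustration with synthetic data, not MuLan analysis code.

        # Fit N(t) = N0*exp(-t/tau) + B to a synthetic decay histogram.
        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(3)
        tau_true = 2.197e-6                       # s, approximate muon lifetime
        t = rng.exponential(tau_true, 1_000_000)
        counts, edges = np.histogram(t, bins=200, range=(0.0, 20e-6))
        centers = 0.5 * (edges[:-1] + edges[1:])

        def model(t, n0, tau, b):
            return n0 * np.exp(-t / tau) + b

        p, cov = curve_fit(model, centers, counts, p0=(counts[0], 2e-6, 0.0))
        print(f"tau = {p[1]:.4e} s +/- {np.sqrt(cov[1, 1]):.1e} (statistical only)")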

  9. TRAINING ERRORS AND RUNNING RELATED INJURIES: A SYSTEMATIC REVIEW

    PubMed Central

    Buist, Ida; Sørensen, Henrik; Lind, Martin; Rasmussen, Sten

    2012-01-01

    Purpose: The purpose of this systematic review was to examine the link between training characteristics (volume, duration, frequency, and intensity) and running related injuries. Methods: A systematic search was performed in PubMed, Web of Science, Embase, and SportDiscus. Studies were included if they examined novice, recreational, or elite runners between the ages of 18 and 65. Exposure variables were training characteristics defined as volume, distance or mileage, time or duration, frequency, intensity, speed or pace, or similar terms. The outcome of interest was Running Related Injuries (RRI) in general or specific RRI in the lower extremity or lower back. Methodological quality was evaluated using quality assessment tools of 11 to 16 items. Results: After examining 4561 titles and abstracts, 63 articles were identified as potentially relevant. Finally, nine retrospective cohort studies, 13 prospective cohort studies, six case-control studies, and three randomized controlled trials were included. The mean quality score was 44.1%. Conflicting results were reported on the relationships between volume, duration, intensity, and frequency and RRI. Conclusion: It was not possible to identify which training errors were related to running related injuries. Still, well supported data on which training errors relate to or cause running related injuries is highly important for determining proper prevention strategies. If methodological limitations in measuring training variables can be resolved, more work can be conducted to define training and the interactions between different training variables, create several hypotheses, test the hypotheses in a large scale prospective study, and explore cause and effect relationships in randomized controlled trials. Level of evidence: 2a PMID:22389869

  10. Estimating the error distribution function in semiparametric additive regression models

    E-print Network

    Schick, Anton

    Keywords: local polynomial smoother, orthogonal series estimator, Hölder space, test for normal errors. … In Model 1 we estimate the components by local polynomial smoothers, and in Model 2 we estimate μ1, …, μq by orthogonal series estimators. …

  11. Bolstered Error Estimation

    E-print Network

    Braga-Neto, Ulisses

    … the bolstered error estimators proposed in this paper, as part of a larger library for classification and error estimation … It has a direct geometric interpretation and can be easily applied to any classification rule … as smoothed error estimation. In some important cases, such as a linear classification rule with a Gaussian …

  12. The Effects of Computational Modeling Errors on the Estimation of Statistical Mechanical Variables.

    PubMed

    Faver, John C; Yang, Wei; Merz, Kenneth M

    2012-10-01

    Computational models used in the estimation of thermodynamic quantities of large chemical systems often require approximate energy models that rely on parameterization and cancellation of errors to yield agreement with experimental measurements. In this work, we show how energy function errors propagate when computing statistical mechanics-derived thermodynamic quantities. Assuming that each microstate included in a statistical ensemble has a measurable amount of error in its calculated energy, we derive low-order expressions for the propagation of these errors in free energy, average energy, and entropy. Through gedanken experiments we show the expected behavior of these error propagation formulas on hypothetical energy surfaces. For very large microstate energy errors, these low-order formulas disagree with estimates from Monte Carlo simulations of error propagation. Hence, such simulations of error propagation may be required when using poor potential energy functions. Propagated systematic errors predicted by these methods can be removed from computed quantities, while propagated random errors yield uncertainty estimates. Importantly, we find that end-point free energy methods maximize random errors and that local sampling of potential energy wells decreases random error significantly. Hence, end-point methods should be avoided in energy computations and should be replaced by methods that incorporate local sampling. The techniques described herein will be used in future work involving the calculation of free energies of biomolecular processes, where error corrections are expected to yield improved agreement with experiment. PMID:23413365
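
    A small numerical gedanken experiment in the spirit of the abstract: perturb every microstate energy with random error and watch the propagated effect on the ensemble free energy. All numbers below are hypothetical placeholders.

        # Free energy F = -kT*ln(sum(exp(-E/kT))) under random microstate
        # energy errors: the mean shift is a propagated systematic bias
        # (Boltzmann weighting favors states whose errors lower E), and
        # the spread is the propagated random uncertainty.
        import numpy as np

        rng = np.random.default_rng(4)
        kT = 0.593                                # kcal/mol near 298 K
        E = rng.uniform(0.0, 5.0, 1000)           # hypothetical energies

        def free_energy(E):
            return -kT * np.log(np.sum(np.exp(-E / kT)))

        F0 = free_energy(E)
        sigma = 0.5                               # per-microstate error, kcal/mol
        dF = [free_energy(E + rng.normal(0.0, sigma, E.size)) - F0
              for _ in range(2000)]
        print(f"bias = {np.mean(dF):+.3f}, spread = {np.std(dF):.3f} kcal/mol")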

  13. A posteriori pointwise error estimates for the boundary element method

    SciTech Connect

    Paulino, G.H.; Gray, L.J.; Zarikian, V.

    1995-01-01

    This report presents a new approach for a posteriori pointwise error estimation in the boundary element method. The estimator relies upon the evaluation of hypersingular integral equations, and is therefore intrinsic to the boundary integral equation approach. This property allows some theoretical justification by mathematically correlating the exact and estimated errors. A methodology is developed for approximating the error on the boundary as well as in the interior of the domain. In the interior, error estimates for both the function and its derivatives (e.g. potential and interior gradients for potential problems, displacements and stresses for elasticity problems) are presented. Extensive computational experiments have been performed for the two dimensional Laplace equation on interior domains, employing Dirichlet and mixed boundary conditions. The results indicate that the error estimates successfully track the form of the exact error curve. Moreover, a reasonable estimate of the magnitude of the actual error is also obtained.

  14. Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

    NASA Technical Reports Server (NTRS)

    Adler, Robert; Gu, Guojun; Huffman, George

    2012-01-01

    A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a different number of input products. For the globe the calculated relative error estimate from this study is about 9%, which is also probably a slight overestimate. These tropical and global estimated bias errors provide one estimate of the current state of knowledge of the planet's mean precipitation.
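
    The core of the procedure, screening products against the base estimate on a zonal-mean basis and taking the spread of what survives as the bias error, reduces to a few array operations. The sketch below uses synthetic fields and our own variable names.

        # Multi-product spread as a bias error estimate (illustrative).
        import numpy as np

        rng = np.random.default_rng(5)
        base = rng.gamma(2.0, 1.5, (90, 180))                    # base product, mm/day
        others = base * rng.lognormal(0.0, 0.15, (5, 90, 180))   # 5 synthetic products

        # keep products within +/-50% of the base on a zonal-mean basis
        zonal_ratio = others.mean(axis=2) / base.mean(axis=1)    # (nprod, nlat)
        keep = (np.abs(zonal_ratio - 1.0) <= 0.5)[..., None]
        included = np.where(keep, others, np.nan)

        s = np.nanstd(np.concatenate([base[None], included]), axis=0)  # bias error map
        rel = s / base                                                 # relative error s/m
        print(rel.mean())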

  15. Systematic Residual Ionospheric Error in the Radio Occultation Data

    NASA Astrophysics Data System (ADS)

    Danzer, J.; Scherllin-Pirscher, B.; Foelsche, U.

    2012-04-01

    The Radio Occultation (RO) method is used to study the Earth's atmosphere in the troposphere and lower stratosphere. The path of a transmitted electromagnetic signal from a GPS satellite changes when passing through the ionosphere and neutral atmosphere. The altered signal is detected at a receiving Low Earth Orbit satellite and provides information about atmospheric parameters such as the refractivity of the Earth's atmosphere and, in a further processing step, pressure or temperature. The processing of the RO data has been done at the Wegener Center for Climate and Global Change. Different corrections are applied to the data, such as a kinematic Doppler correction, induced by the moving satellites, and an ionospheric correction due to the dispersive nature of the ionosphere. The standard ionospheric correction enters via a series expansion, which is truncated after first order, and the correction term is proportional to the inverse square of the carrier frequency. Because of this approximation we conjecture that an ionospheric residual error remains in the RO data, one that does not fully capture the change in ionization between day and night, or between times of high and low solar activity. This residual ionospheric error is studied by analyzing the bending angle bias (and noise). It is obtained by comparing the bending angle profiles to Mass Spectrometer and Incoherent Scatter Radar (MSIS) climatology at altitudes between 65 km and 80 km. In order to detect the residual ionospheric induced error we investigate the bias over the period from 2001 to 2010, using CHAMP and COSMIC RO data. The day and night time bias and noise are compared for different latitudinal zones. We focus on zones between 20°N to 60°N, 20°S to 20°N and 60°S to 20°S. Our analysis shows a difference between the day and night time bias. While the night time bias is roughly constant over time, the day time bias increases in years of high solar activity and decreases in years of low solar activity. The aim of our analysis is to quantify this systematic residual error in order to perform an advanced ionospheric correction in the processing of the RO data.

  16. Estimation of Discretization Errors Using the Method of Nearby Problems

    E-print Network

    Roy, Chris

    … discretization error estimator for steady-state Burgers' equation for a viscous shock wave at Reynolds numbers … estimate and understand numerical errors. Failure to do so can not only lead to poor engineering decisions, but can also …

  17. CO2 Flux Estimation Errors Associated with Moist Atmospheric Processes

    NASA Technical Reports Server (NTRS)

    Parazoo, N. C.; Denning, A. S.; Kawa, S. R.; Pawson, S.; Lokupitiya, R.

    2012-01-01

    Vertical transport by moist sub-grid scale processes such as deep convection is a well-known source of uncertainty in CO2 source/sink inversion. However, a dynamical link between vertical transport, satellite based retrievals of column mole fractions of CO2, and source/sink inversion has not yet been established. By using the same offline transport model with meteorological fields from slightly different data assimilation systems, we examine sensitivity of frontal CO2 transport and retrieved fluxes to different parameterizations of sub-grid vertical transport. We find that frontal transport feeds off background vertical CO2 gradients, which are modulated by sub-grid vertical transport. The implication for source/sink estimation is two-fold. First, CO2 variations contained in moist poleward moving air masses are systematically different from variations in dry equatorward moving air. Moist poleward transport is hidden from orbital sensors on satellites, causing a sampling bias, which leads directly to small but systematic flux retrieval errors in northern mid-latitudes. Second, differences in the representation of moist sub-grid vertical transport in GEOS-4 and GEOS-5 meteorological fields cause differences in vertical gradients of CO2, which leads to systematic differences in moist poleward and dry equatorward CO2 transport and therefore the fraction of CO2 variations hidden in moist air from satellites. As a result, sampling biases are amplified and regional scale flux errors enhanced, most notably in Europe (0.43 +/- 0.35 PgC/yr). These results, cast from the perspective of moist frontal transport processes, support previous arguments that the vertical gradient of CO2 is a major source of uncertainty in source/sink inversion.

  18. White light phase-stepping interferometry based on insensitive algorithm to periodic systematic error

    NASA Astrophysics Data System (ADS)

    Song, Ningfang; Li, Jiao; Li, Huipeng; Luo, Xinkai

    2015-10-01

    Periodic systematic error caused by erroneous reference phase adjustments and instabilities of the interferometer strongly degrades the precision of micro-profile measurements made with white light phase-stepping interferometry. This paper presents a five-frame algorithm that is insensitive to periodic systematic error. The algorithm attempts to eliminate the periodic systematic error when calculating the phase. Both theoretical and experimental results show that the proposed algorithm has good immunity to periodic systematic error and is able to accurately recover the 3D profile of a sample.
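
    For context, the best-known formula in this family is the Hariharan five-frame algorithm, whose tolerance to phase-step miscalibration is exactly the kind of immunity the abstract claims; the paper's own five-frame algorithm may differ in detail. A quick check with deliberately miscalibrated steps:

        # Hariharan-type five-frame phase recovery with frames nominally
        # at -pi, -pi/2, 0, +pi/2, +pi; steps are scaled by 1.02 to mimic
        # an erroneous reference phase adjustment.
        import numpy as np

        def five_frame_phase(I1, I2, I3, I4, I5):
            return np.arctan2(2.0 * (I2 - I4), 2.0 * I3 - I1 - I5)

        phi = np.linspace(0.0, 2.0 * np.pi, 500)
        steps = 1.02 * np.array([-np.pi, -np.pi / 2, 0.0, np.pi / 2, np.pi])
        I1, I2, I3, I4, I5 = [1.0 + 0.8 * np.cos(phi + s) for s in steps]
        err = np.angle(np.exp(1j * (five_frame_phase(I1, I2, I3, I4, I5) - phi)))
        print(f"peak-to-valley phase error: {np.ptp(err):.4f} rad")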

  19. Minor Planet Observations to Identify Reference System Systematic Errors

    NASA Astrophysics Data System (ADS)

    Hemenway, Paul D.; Duncombe, R. L.; Castelaz, M. W.

    2011-04-01

    In the 1930's Brouwer proposed using minor planets to correct the Fundamental System of celestial coordinates. Since then, many projects have used or proposed to use visual, photographic, photo detector, and space based observations to that end. From 1978 to 1990, a project was undertaken at the University of Texas utilizing the long focus and attendant advantageous plate scale (c. 7.37"/mm) of the 2.1m Otto Struve reflector's Cassegrain focus. The project followed precepts given in 1979. The program had several potential advantages over previous programs including high inclination orbits to cover half the celestial sphere, and, following Kristensen, the use of crossing points to remove entirely systematic star position errors from some observations. More than 1000 plates were obtained of 34 minor planets as part of this project. In July 2010 McDonald Observatory donated the plates to the Pisgah Astronomical Research Institute (PARI) in North Carolina. PARI is in the process of renovating the Space Telescope Science Institute GAMMA II modified PDS microdensitometer to scan the plates in the archives. We plan to scan the minor planet plates, reduce the plates to the densified ICRS using the UCAC4 positions (or the best available positions at the time of the reductions), and then determine the utility of attempting to find significant systematic corrections. Here we report the current status of various aspects of the project. Support from the National Science Foundation in the last millennium is gratefully acknowledged, as is help from Judit Ries and Wayne Green in packing and transporting the plates.

  20. A Note on Confidence Interval Estimation and Margin of Error

    ERIC Educational Resources Information Center

    Gilliland, Dennis; Melfi, Vince

    2010-01-01

    Confidence interval estimation is a fundamental technique in statistical inference. Margin of error is used to delimit the error in estimation. Dispelling misinterpretations that teachers and students give to these terms is important. In this note, we give examples of the confusion that can arise in regard to confidence interval estimation and…
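
    A worked example of the calculation at issue (our own, with made-up numbers): for a sample mean, the margin of error at 95% confidence is z·s/√n, and the confidence interval is the mean plus or minus that margin.

        # Margin of error and confidence interval for a mean.
        import math

        xbar, s, n = 72.0, 10.0, 100     # sample mean, sd, size (made up)
        z = 1.96                         # 95% normal critical value
        moe = z * s / math.sqrt(n)
        print(f"95% CI: {xbar - moe:.2f} to {xbar + moe:.2f}"
              f" (margin of error {moe:.2f})")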

  1. ERROR ESTIMATES FOR APPROXIMATE SOLUTIONS FOR NONLINEAR SCALAR CONSERVATION LAWS

    E-print Network

    Tao Tang. We review the recent work on error estimates for approximate solutions to nonlinear scalar conservation laws. The methods of analysis include matching …

  2. Systematic vertical error in UAV-derived topographic models: Origins and solutions

    NASA Astrophysics Data System (ADS)

    James, Mike R.; Robson, Stuart

    2014-05-01

    Unmanned aerial vehicles (UAVs) equipped with consumer cameras are increasingly being used to produce high resolution digital elevation models (DEMs). However, although such DEMs may achieve centimetric detail, they can also display broad-scale systematic deformation (usually a vertical 'doming') that restricts their wider use. This effect can be particularly apparent in DEMs derived by structure-from-motion (SfM) processing, especially when control point data have not been incorporated in the bundle adjustment process. We illustrate that doming error results from a combination of inaccurate description of radial lens distortion and the use of imagery captured in near-parallel viewing directions. With such imagery, enabling camera self-calibration within the processing inherently leads to erroneous radial distortion values and associated DEM error. Using a simulation approach, we illustrate how existing understanding of systematic DEM error in stereo-pairs (from unaccounted radial distortion) up-scales in typical multiple-image blocks of UAV surveys. For image sets with dominantly parallel viewing directions, self-calibrating bundle adjustment (as normally used with images taken using consumer cameras) will not be able to derive radial lens distortion accurately, and will give associated systematic 'doming' DEM deformation. In the presence of image measurement noise (at levels characteristic of SfM software), and in the absence of control measurements, our simulations display domed deformation with amplitude of ~2 m over horizontal distances of ~100 m. We illustrate the sensitivity of this effect to variations in camera angle and flight height. Deformation will be reduced if suitable control points can be included within the bundle adjustment, but residual systematic vertical error may remain, accommodated by the estimated precision of the control measurements. Doming bias can be minimised by the inclusion of inclined images within the image set, for example, images collected during gently banked turns of a fixed-wing UAV or, if camera inclination can be altered, by just a few more oblique images with a rotor-based UAV. We provide practical flight plan solutions that, in the absence of control points, demonstrate a reduction in systematic DEM error by more than two orders of magnitude. DEM generation is subject to this effect whether a traditional photogrammetry or newer structure-from-motion (SfM) processing approach is used, but errors will be typically more pronounced in SfM-based DEMs, for which use of control measurements is often more limited. Although focussed on UAV surveying, our results are also relevant to ground-based image capture for SfM-based modelling.

  3. Systematic Errors in GNSS Radio Occultation Data - Part 2

    NASA Astrophysics Data System (ADS)

    Foelsche, Ulrich; Danzer, Julia; Scherllin-Pirscher, Barbara; Schwärz, Marc

    2014-05-01

    The Global Navigation Satellite System (GNSS) Radio Occultation (RO) technique has the potential to deliver climate benchmark measurements of the upper troposphere and lower stratosphere (UTLS), since RO data can be traced, in principle, to the international standard for the second. Climatologies derived from RO data from different satellites indeed show an amazing consistency (better than 0.1 K). The value of RO data for climate monitoring is therefore increasingly recognized by the scientific community, but there is also concern about potential residual systematic errors in RO climatologies, which might be common to data from all satellites. We have analyzed different potential error sources and present results on two of them. (1) If temperature is calculated from observed refractivity with the assumption that water vapor is zero, the product is called "dry temperature", which is commonly used to study the Earth's atmosphere, e.g., when analyzing temperature trends due to global warming. Dry temperature is a useful quantity, since it does not need additional background information in its retrieval. Concurrent trends in water vapor could, however, masquerade as trends in dry temperature. We analyzed this effect and identified the regions in the atmosphere where it is safe to take dry temperature as a proxy for physical temperature. We found that the heights where specified values of differences between dry and physical temperature are encountered increase by about 150 m per decade, with little difference among the 38 climate models under investigation. (2) All current RO retrievals use a "classic" set of (measured) constants, relating atmospheric microwave refractivity to temperature, pressure, and water vapor partial pressure. With the steadily increasing quality of RO climatologies, errors in these constants are no longer negligible. We show how these parameters can be related to more fundamental physical quantities (fundamental constants, the molecular/atomic polarizabilities of the constituents of air, and the dipole moment of water vapor). This approach also allows one to compute sensitivities to changes in atmospheric composition, where we found that the effect of the CO2 increase is currently almost exactly balanced by the counteracting effect of the concurrent O2 decrease.
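
    Point (1) is easy to make concrete with the two-term refractivity formula N = k1·p/T + k3·e/T^2; "dry temperature" is what one recovers by setting e = 0. The constants and sample values below are the commonly quoted ones and illustrative inputs, not the retrieval's actual configuration.

        # Difference between dry and physical temperature for a moist case.
        k1, k3 = 77.6, 3.73e5                # K/hPa and K^2/hPa (commonly quoted)

        def dry_temperature(p, T, e):
            N = k1 * p / T + k3 * e / T**2   # "true" refractivity
            return k1 * p / N                # temperature retrieved with e = 0

        p, T, e = 300.0, 240.0, 0.05         # hPa, K, hPa (illustrative)
        print(f"T_dry - T = {dry_temperature(p, T, e) - T:+.2f} K")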

  4. Systematics for checking geometric errors in CNC lathes

    NASA Astrophysics Data System (ADS)

    Araújo, R. P.; Rolim, T. L.

    2015-10-01

    Non-idealities present in machine tools directly compromise both the geometry and the dimensions of machined parts, generating distortions relative to the design. Given the competitive scenario among different companies, knowledge of the geometric behavior of these machines is necessary in order to establish their processing capability, avoiding waste of time and materials as well as satisfying customer requirements. Although geometric tests are important and necessary for using a machine correctly, and therefore for preventing future damage, most users do not apply such tests to their machines, for lack of knowledge or motivation, essentially because of two factors: the long testing time and the high cost of testing. This work proposes a systematics for checking straightness and perpendicularity errors in CNC lathes that demands little time and cost while offering high metrological reliability, to be used on the factory floors of small and medium-size businesses to ensure the quality of their products and make them competitive.

  5. Estimating IMU heading error from SAR images.

    SciTech Connect

    Doerry, Armin Walter

    2009-03-01

    Angular orientation errors of the real antenna for Synthetic Aperture Radar (SAR) will manifest as undesired illumination gradients in SAR images. These gradients can be measured, and the pointing error can be calculated. This can be done for single images, but done more robustly using multi-image methods. Several methods are provided in this report. The pointing error can then be fed back to the navigation Kalman filter to correct for problematic heading (yaw) error drift. This can mitigate the need for uncomfortable and undesired IMU alignment maneuvers such as S-turns.

  6. Improving SMOS retrieved salinity: characterization of systematic errors in reconstructed and modelled brightness temperature images

    NASA Astrophysics Data System (ADS)

    Gourrion, J.; Guimbard, S.; Sabia, R.; Portabella, M.; Gonzalez, V.; Turiel, A.; Ballabrera, J.; Gabarro, C.; Perez, F.; Martinez, J.

    2012-04-01

    The Microwave Imaging Radiometer using Aperture Synthesis (MIRAS) instrument onboard the Soil Moisture and Ocean Salinity (SMOS) mission was launched on November 2nd, 2009 with the aim of providing, over the oceans, synoptic sea surface salinity (SSS) measurements with spatial and temporal coverage adequate for large-scale oceanographic studies. For each single satellite overpass, SSS is retrieved after collecting, at fixed ground locations, a series of brightness temperatures from successive scenes corresponding to various geometrical and polarization conditions. SSS is inverted through minimization of the difference between reconstructed and modeled brightness temperatures. To meet the challenging mission requirements, retrieved SSS needs to achieve an accuracy of 0.1 psu after averaging over a 10- or 30-day period and 2°x2° or 1°x1° spatial boxes, respectively. It is expected that, at such scales, the high radiometric noise can be reduced to a level such that remaining errors and inconsistencies in the retrieved salinity fields can essentially be related to (1) systematic brightness temperature errors in the antenna reference frame, (2) systematic errors in the Geophysical Model Function (GMF), used to model the observations and retrieve salinity, for specific environmental conditions and/or particular auxiliary parameter values, and (3) errors in the auxiliary datasets used as input to the GMF. The present communication primarily aims at addressing point 1 above, and possibly point 2, for the whole polarimetric information, i.e., issued from both co-polar and cross-polar measurements. Several factors may potentially produce systematic errors in the antenna reference frame: the unavoidable fact that all antennas are not perfectly identical, the imperfect characterization of the instrument response (e.g., antenna patterns), accounting for receiver temperatures in the reconstruction, calibration using flat sky scenes, and the implementation of ripple reduction algorithms at sharp boundaries such as the Sky-Earth boundary. Data acquired over the ocean rather than over land are preferred for characterizing such errors because the variability of the emissivity sensed over the oceanic domain is an order of magnitude smaller than over land. Nevertheless, characterizing such errors over the ocean is not a trivial task. Even if the natural variability is small, it is larger than the errors to be characterized, and the characterization strategy must account for it; otherwise the estimated patterns will vary significantly with the selected dataset. The communication will present results of a systematic error characterization methodology that yields stable error pattern estimates. Particular focus will be given to the critical data selection strategy and to the analysis of the X- and Y-pol patterns obtained over a wide range of SMOS subdatasets. The impact of some image reconstruction options will be evaluated. It will be shown that the methodology is also an interesting tool for diagnosing specific error sources. The criticality of an accurate description of Faraday rotation effects will be demonstrated, and the latest results on the possibility of inferring such information from the full Stokes vector will be presented.

  7. Sample covariance based estimation of Capon algorithm error probabilities

    E-print Network

    Richmond, Christ D.

    The method of interval estimation (MIE) provides a strategy for mean squared error (MSE) prediction of algorithm performance at low signal-to-noise ratios (SNR) below estimation threshold where asymptotic predictions fail. ...

  8. Probabilistic state estimation in regimes of nonlinear error growth

    E-print Network

    Lawson, W. Gregory, 1975-

    2005-01-01

    State estimation, or data assimilation as it is often called, is a key component of numerical weather prediction (NWP). Nearly all implementable methods of state estimation suitable for NWP are forced to assume that errors ...

  9. Refined error estimates for matrix-valued radial basis functions 

    E-print Network

    Fuselier, Edward J., Jr.

    2007-09-17

    … and Ward in 1994, when they constructed divergence-free vector-valued functions that interpolate data at scattered points. In 2002, Lowitzsch gave the first error estimates for divergence-free interpolants. However, these estimates are only valid when...

  10. NETRA: Interactive Display for Estimating Refractive Errors and Focal Range

    E-print Network

    Pamplona, Vitor F.

    We introduce an interactive, portable, and inexpensive solution for estimating refractive errors in the human eye. While expensive optical devices for automatic estimation of refractive correction exist, our goal is to ...

  11. Ridge Regression Estimation Approach to Measurement Error Model

    E-print Network

    Shalabh

    … modifications of the five quasi-empirical Bayes estimators of the regression parameters of a measurement error model … (A.K.Md. Ehsanes Saleh, Carleton University, Ottawa, Canada; Shalabh, Department of Mathematics & Statistics)

  12. Fisher classifier and its probability of error estimation

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
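
    A compact illustration of the two ingredients, Fisher's direction and a leave-one-out error estimate, is below. This is a brute-force version for intuition; the paper's point is precisely that the leave-one-out estimate can be computed efficiently without refitting per sample.

        # Fisher discriminant with a brute-force leave-one-out error rate.
        import numpy as np

        def fisher_direction(X0, X1):
            Sw = np.cov(X0.T, bias=True) * len(X0) + np.cov(X1.T, bias=True) * len(X1)
            return np.linalg.solve(Sw, X1.mean(0) - X0.mean(0))

        def loo_error(X0, X1):
            X = np.vstack([X0, X1])
            y = np.r_[np.zeros(len(X0)), np.ones(len(X1))]
            wrong = 0
            for i in range(len(X)):
                m = np.ones(len(X), bool)
                m[i] = False
                A, B = X[m][y[m] == 0], X[m][y[m] == 1]
                w = fisher_direction(A, B)
                thr = 0.5 * ((A @ w).mean() + (B @ w).mean())  # midpoint threshold
                wrong += ((X[i] @ w > thr) != y[i])
            return wrong / len(X)

        rng = np.random.default_rng(6)
        X0 = rng.normal(0.0, 1.0, (50, 2))
        X1 = rng.normal(1.5, 1.0, (50, 2))
        print(f"leave-one-out error: {loo_error(X0, X1):.3f}")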

  13. OPTIMAL ERROR EXPONENTS IN HIDDEN MARKOV MODELS ORDER ESTIMATION

    E-print Network

    Gassiat, Elisabeth

    Keywords: order estimation, Stein's lemma, error exponents, large deviations, composite hypothesis testing, generalized … taking marginals on Y1 (both spaces being provided with cylindrical σ-algebras). This probability …

  14. Mean-square error bounds for reduced-error linear state estimators

    NASA Technical Reports Server (NTRS)

    Baram, Y.; Kalit, G.

    1987-01-01

    The mean-square error of reduced-order linear state estimators for continuous-time linear systems is investigated. Lower and upper bounds on the minimal mean-square error are presented. The bounds are readily computable at each time-point and at steady state from the solutions to the Riccati and Lyapunov equations. The usefulness of the error bounds for the analysis and design of reduced-order estimators is illustrated by a practical numerical example.

  15. Errors in estimation of the input signal for integrate-and-fire neuronal models

    NASA Astrophysics Data System (ADS)

    Bibbona, Enrico; Lansky, Petr; Sacerdote, Laura; Sirovich, Roberta

    2008-07-01

    Estimation of the input parameters of stochastic (leaky) integrate-and-fire neuronal models is studied. It is shown that the presence of a firing threshold brings a systematic error to the estimation procedure. Analytical formulas for the bias are given for two models, the randomized random walk and the perfect integrator. For the third model considered, the leaky integrate-and-fire model, the study is performed by using Monte Carlo simulated trajectories. The bias is compared with other errors appearing during the estimation, and it is documented that the effect of the bias has to be taken into account in experimental studies.
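
    The threshold-induced bias is easy to reproduce for the simplest of these models. For a perfect integrator dX = mu*dt + sigma*dW with firing threshold S, Wald's identity gives E[X(tau)] = mu*E[tau], so S/mean(tau) is consistent, while the single-trial estimator S/tau is biased upward by Jensen's inequality. A simulation sketch (parameters are arbitrary illustrations):

        # First-passage simulation for a drifted Wiener process ("perfect
        # integrator") showing the bias of the per-trial drift estimate.
        import numpy as np

        rng = np.random.default_rng(7)
        mu, sigma, S, dt = 1.0, 1.0, 5.0, 1e-2

        def first_passage_times(n_trials, horizon=20.0):
            n_steps = int(horizon / dt)
            steps = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_trials, n_steps))
            paths = np.cumsum(steps, axis=1)
            idx = np.argmax(paths >= S, axis=1)   # first crossing (horizon assumed long enough)
            return (idx + 1) * dt

        tau = first_passage_times(2000)
        print(f"mean(S/tau) = {np.mean(S / tau):.3f}  (biased above mu = {mu})")
        print(f"S/mean(tau) = {S / np.mean(tau):.3f}  (close to mu)")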

  16. An analysis of the least-squares problem for the DSN systematic pointing error model

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1991-01-01

    A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least-squares problem is described and analyzed, along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least-squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.

  17. Parameter estimation and error analysis in environmental modeling and computation

    NASA Technical Reports Server (NTRS)

    Kalmaz, E. E.

    1986-01-01

    A method for the estimation of parameters and error analysis in the development of nonlinear modeling for environmental impact assessment studies is presented. The modular computer program can interactively fit different nonlinear models to the same set of data, dynamically changing the error structure associated with observed values. Parameter estimation techniques and sequential estimation algorithms employed in parameter identification and model selection are first discussed. Then, least-squares parameter estimation procedures are formulated, utilizing differential or integrated equations, and are used to define a model for the association of error with experimentally observed data.

  18. Semiclassical Dynamics with Exponentially Small Error Estimates

    NASA Astrophysics Data System (ADS)

    Hagedorn, George A.; Joye, Alain

    We construct approximate solutions to the time-dependent Schrödinger equation for small values of ħ. If V satisfies appropriate analyticity and growth hypotheses and |t| ≤ T, these solutions agree with exact solutions up to errors whose norms are bounded by C exp(−γ/ħ) for some C and γ > 0. Under more restrictive hypotheses, we prove that for sufficiently small T', |t| ≤ T'|log ħ| implies the norms of the errors are bounded by C' exp(−γ'/ħ^σ) for some C', γ' > 0, and σ > 0.

  19. Empirical State Error Covariance Matrix for Batch Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joe

    2015-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By reinterpreting the equations involved in the weighted batch least squares algorithm, it is possible to arrive directly at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple two-observer, measurement-error-only problem.
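
    A simplified illustration of the contrast at stake (not necessarily the paper's exact construction): solve a weighted batch least-squares problem, compute the theoretical covariance (A^T W A)^-1, and then an empirical adjustment driven by the realized residuals, here the standard scaling by the a posteriori variance of unit weight, which inflates the covariance when unmodeled error sources are present.

        # Weighted batch least squares with theoretical vs. residual-scaled
        # covariance; the sinusoid is a deliberately unmodeled error source.
        import numpy as np

        rng = np.random.default_rng(8)
        t = np.linspace(0.0, 1.0, 40)
        A = np.column_stack([np.ones_like(t), t])        # state: bias + rate
        y = (A @ np.array([1.0, 2.0]) + rng.normal(0.0, 0.05, t.size)
             + 0.1 * np.sin(20.0 * t))                   # unmodeled systematic

        W = np.eye(t.size) / 0.05**2
        N = A.T @ W @ A
        x = np.linalg.solve(N, A.T @ W @ y)
        P_theory = np.linalg.inv(N)

        r = y - A @ x
        s0_sq = (r @ W @ r) / (t.size - 2)               # a posteriori unit variance
        P_emp = s0_sq * P_theory                         # residual-informed covariance
        print(np.sqrt(np.diag(P_theory)), np.sqrt(np.diag(P_emp)))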

  20. Quantifications of error propagation in slope-based wavefront estimations

    NASA Astrophysics Data System (ADS)

    Zou, Weiyao; Rolland, Jannick P.

    2006-10-01

    We discuss error propagation in slope-based and difference-based wavefront estimation. The error propagation coefficient can be expressed as a function of the eigenvalues of the wavefront-estimation-related matrices, and we establish such functions for each of the basic geometries under the serial numbering scheme, in which a square sampling grid array is sequentially indexed row by row. We first show that, for wavefront estimation with the wavefront piston value determined, odd-number grid sizes yield better error propagators than even-number grid sizes for all geometries. We further show that for both slope-based and difference-based wavefront estimation, the Southwell geometry offers the best error propagators with the minimum-norm least-squares solutions. Noll's theoretical result, which was extensively used as a reference in the previous literature for error propagation estimates, corresponds to the Southwell geometry with an odd-number grid size. Typically the Fried geometry is not preferred in slope-based optical testing because it either allows subsize wavefront estimation within the testing domain or yields a rank-two-deficient estimation matrix, which usually suffers from high error propagation and the waffle-mode problem. The Southwell geometry, with an odd-number grid size if a zero point is assigned for the wavefront, is usually recommended in optical testing because it provides the lowest error propagation for both slope-based and difference-based wavefront estimation.

  1. Investigation of error sources in regional inverse estimates of greenhouse gas emissions in Canada

    NASA Astrophysics Data System (ADS)

    Chan, E.; Chan, D.; Ishizawa, M.; Vogel, F.; Brioude, J.; Delcloo, A.; Wu, Y.; Jin, B.

    2015-08-01

    Inversion models can use atmospheric concentration measurements to estimate surface fluxes. This study is an evaluation of the errors in a regional flux inversion model for different provinces of Canada: Alberta (AB), Saskatchewan (SK) and Ontario (ON). Using CarbonTracker model results as the target, the synthetic data experiment analyses examined the impacts of the errors from the Bayesian optimisation method, the prior flux distribution and the atmospheric transport model, as well as their interactions. The scaling factors for different sub-regions were estimated by the Markov chain Monte Carlo (MCMC) simulation and cost function minimization (CFM) methods. The CFM method results are sensitive to the relative size of the assumed model-observation mismatch and prior flux error variances. Experiment results show that the estimation error increases with the number of sub-regions using the CFM method. For the region definitions that lead to realistic flux estimates, the numbers of sub-regions for the western region of AB/SK combined and the eastern region of ON are 11 and 4 respectively. The corresponding annual flux estimation errors for the western and eastern regions using the MCMC (CFM) method are -7 and -3 % (0 and 8 %) respectively, when there is only prior flux error. The estimation errors increase to 36 and 94 % (40 and 232 %) from transport model error alone. When prior and transport model errors co-exist in the inversions, the estimation errors become 5 and 85 % (29 and 201 %). This result indicates that estimation errors are dominated by the transport model error, and that the errors can in fact cancel each other and propagate to the flux estimates non-linearly. In addition, the posterior flux estimates can differ from the target fluxes more than the prior does, and the posterior uncertainty estimates can be unrealistically small, failing to cover the target. The systematic evaluation of the different components of the inversion model can help in understanding the posterior estimates and percentage errors. Stable and realistic sub-regional and monthly flux estimates can be obtained for the western region of AB/SK, but not for the eastern region of ON. This indicates that a real observation-based inversion of the annual provincial emissions is likely to work for the western region, whereas improvements to the current inversion setup are needed before a real inversion is performed for the eastern region.

  2. Estimating errors in least-squares fitting

    NASA Technical Reports Server (NTRS)

    Richter, P. H.

    1995-01-01

    While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
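
    The quantities discussed, standard errors of the fitted parameters and of the fitted function itself, can be obtained numerically when closed forms are inconvenient. A sketch for the Gaussian fitting case (synthetic data, our own variable names):

        # Parameter standard errors from the fit covariance, and the
        # pointwise standard error of the fitted curve via the Jacobian.
        import numpy as np
        from scipy.optimize import curve_fit

        def gauss(x, a, x0, w):
            return a * np.exp(-((x - x0) / w) ** 2)

        rng = np.random.default_rng(9)
        x = np.linspace(-5.0, 5.0, 60)
        y = gauss(x, 2.0, 0.3, 1.5) + rng.normal(0.0, 0.05, x.size)

        p, pcov = curve_fit(gauss, x, y, p0=(1.0, 0.0, 1.0))
        perr = np.sqrt(np.diag(pcov))            # parameter standard errors

        eps = 1e-6                               # forward-difference Jacobian
        J = np.column_stack([(gauss(x, *(p + eps * np.eye(3)[i])) - gauss(x, *p)) / eps
                             for i in range(3)])
        sigma_fit = np.sqrt(np.einsum('ij,jk,ik->i', J, pcov, J))
        print(perr, sigma_fit.max())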

  3. First-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Data Processing Methods and Systematic Error Limits

    NASA Technical Reports Server (NTRS)

    Hinshaw, G.; Barnes, C.; Bennett, C. L.; Greason, M. R.; Halpern, M.; Hill, R. S.; Jarosik, N.; Kogut, A.; Limon, M.; Meyer, S. S.

    2003-01-01

    We describe the calibration and data processing methods used to generate full-sky maps of the cosmic microwave background (CMB) from the first year of Wilkinson Microwave Anisotropy Probe (WMAP) observations. Detailed limits on residual systematic errors are assigned based largely on analyses of the flight data supplemented, where necessary, with results from ground tests. The data are calibrated in flight using the dipole modulation of the CMB due to the observatory's motion around the Sun. This constitutes a full-beam calibration source. An iterative algorithm simultaneously fits the time-ordered data to obtain calibration parameters and pixelized sky map temperatures. The noise properties are determined by analyzing the time-ordered data with this sky signal estimate subtracted. Based on this, we apply a pre-whitening filter to the time-ordered data to remove a low level of 1/f noise. We infer and correct for a small (approx. 1%) transmission imbalance between the two sky inputs to each differential radiometer, and we subtract a small sidelobe correction from the 23 GHz (K band) map prior to further analysis. No other systematic error corrections are applied to the data. Calibration and baseline artifacts, including the response to environmental perturbations, are negligible. Systematic uncertainties are comparable to statistical uncertainties in the characterization of the beam response. Both are accounted for in the covariance matrix of the window function and are propagated to uncertainties in the final power spectrum. We characterize the combined upper limits to residual systematic uncertainties through the pixel covariance matrix.

  4. Approaches to relativistic positioning around Earth and error estimations

    NASA Astrophysics Data System (ADS)

    Puchades, Neus; Sáez, Diego

    2016-01-01

    In the context of relativistic positioning, the coordinates of a given user may be calculated by using suitable information broadcast by a 4-tuple of satellites. Our 4-tuples belong to the Galileo constellation. Recently, we estimated the positioning errors due to uncertainties in the satellite world lines (U-errors). A distribution of U-errors was obtained, at various times, in a set of points covering a large region surrounding Earth. Here, the positioning errors associated to the simplifying assumption that photons move in Minkowski space-time (S-errors) are estimated and compared with the U-errors. Both errors have been calculated for the same points and times to make comparisons possible. For a certain realistic modeling of the world line uncertainties, the estimated S-errors have proved to be smaller than the U-errors, which shows that the approach based on the assumption that the Earth's gravitational field produces negligible effects on photons may be used in a large region surrounding Earth. The applicability of this approach - which simplifies numerical calculations - to positioning problems, and the usefulness of our S-error maps, are pointed out. A better approach, based on the assumption that photons move in the Schwarzschild space-time governed by an idealized Earth, is also analyzed. More accurate descriptions of photon propagation involving non symmetric space-time structures are not necessary for ordinary positioning and spacecraft navigation around Earth.

  5. Estimates of Random Error in Satellite Rainfall Averages

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.

    2003-01-01

    Satellite rain estimates are most accurate when obtained with microwave instruments on low earth-orbiting satellites. Estimation of daily or monthly total areal rainfall, typically of interest to hydrologists and climate researchers, is made difficult, however, by the relatively poor coverage generally available from such satellites. Intermittent coverage by the satellites leads to random "sampling error" in the satellite products. The inexact information about hydrometeors inferred from microwave data also leads to random "retrieval errors" in the rain estimates. In this talk we will review approaches to quantitative estimation of the sampling error in area/time averages of satellite rain retrievals using ground-based observations, and methods of estimating rms random error, both sampling and retrieval, in averages using satellite measurements themselves.

  6. Bias in parameter estimation of form errors

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangchao; Zhang, Hao; He, Xiaoying; Xu, Min

    2014-09-01

    The surface form qualities of precision components are critical to their functionalities. In precision instruments, algebraic fitting is usually adopted and the form deviations are assessed in the z direction only, in which case the deviations at steep regions of curved surfaces are over-weighted, making the fitted results biased and unstable. In this paper orthogonal distance fitting is performed for curved surfaces and the form errors are measured along the normal vectors of the fitted ideal surfaces. The relative bias of the form error parameters between the vertical assessment and the orthogonal assessment is calculated analytically and represented as a function of the surface slopes. The parameter bias caused by the non-uniformity of data points can be corrected by weighting, i.e., each data point is weighted by the 3D area of the Voronoi cell around its projection point on the fitted surface. Finally, numerical experiments are given to compare different fitting methods and definitions of the form error parameters. The proposed definition is demonstrated to show great superiority in terms of stability and unbiasedness.
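
    Orthogonal distance fitting of the kind described is available off the shelf; the sketch below uses scipy.odr (an ODRPACK wrapper) on an illustrative curved profile, with noise in both coordinates so that residuals measured normal to the curve matter. The model and data are our own stand-ins, not the paper's surfaces.

        # Orthogonal distance regression: residuals normal to the fitted
        # curve rather than along z only.
        import numpy as np
        from scipy import odr

        def profile(beta, x):
            a, b = beta
            return a * x**2 + b              # simple curved "surface" profile

        rng = np.random.default_rng(10)
        x_true = np.linspace(-1.0, 1.0, 50)
        x_obs = x_true + rng.normal(0.0, 0.02, x_true.size)   # noise in x too
        y_obs = 2.0 * x_true**2 + 0.5 + rng.normal(0.0, 0.02, x_true.size)

        fit = odr.ODR(odr.RealData(x_obs, y_obs), odr.Model(profile),
                      beta0=[1.0, 0.0]).run()
        print(fit.beta, fit.sd_beta)         # parameters and standard errors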

  7. Fast Error Estimates For Indirect Measurements: Applications To Pavement Engineering

    E-print Network

    Kreinovich, Vladik

    … a quantity y that is difficult to measure directly (e.g., the lifetime of a pavement, the efficiency of an engine, etc.). To estimate y … computation time. As an example of this methodology, we give pavement lifetime estimates. …

  8. Nonparametric Item Response Curve Estimation with Correction for Measurement Error

    ERIC Educational Resources Information Center

    Guo, Hongwen; Sinharay, Sandip

    2011-01-01

    Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally.…

  9. Radon measurements--discussion of error estimates for selected methods.

    PubMed

    Zhukovsky, Michael; Onischenko, Alexandra; Bastrikov, Vladislav

    2010-01-01

    The main sources of uncertainty for grab sampling, short-term (charcoal canisters) and long-term (track detectors) measurements are: systematic bias of the reference equipment; random Poisson and non-Poisson errors during calibration; and random Poisson and non-Poisson errors during measurements. The origins of non-Poisson random errors during calibration differ for different kinds of instrumental measurements. The main sources of uncertainty for retrospective measurements conducted by surface trap techniques can be divided into two groups: errors in surface (210)Pb ((210)Po) activity measurements, and uncertainties in the transfer from (210)Pb surface activity in glass objects to the average radon concentration during the object's exposure. It is shown that the total measurement error of the surface-trap retrospective technique can be decreased to 35%. PMID:19822441

  10. Galaxy Cluster Shapes and Systematic Errors in H_0 as Determined by the Sunyaev-Zel'dovich Effect

    NASA Technical Reports Server (NTRS)

    Sulkanen, Martin E.; Patel, Sandeep K.

    1998-01-01

    Imaging of the Sunyaev-Zeldovich (SZ) effect in galaxy clusters combined with cluster plasma x-ray diagnostics promises to measure the cosmic distance scale to high accuracy. However, projecting the inverse-Compton scattering and x-ray emission along the cluster line-of-sight will introduce systematic errors in the Hubble constant, H_0, because the true shape of the cluster is not known. In this paper we present a study of the systematic errors in the value of H_0, as determined by the x-ray and SZ properties of theoretical samples of triaxial isothermal "beta-model" clusters, caused by projection effects and observer orientation relative to the model clusters' principal axes. We calculate three estimates for H_0 for each cluster, based on their large and small apparent angular core radii, and their arithmetic mean. We average the estimates for H_0 for a sample of 25 clusters and find that the estimates have limited systematic error: the 99.7% confidence intervals for the mean estimated H_0, analyzing the clusters using either their large or mean angular core radius, are within 14% of the "true" (assumed) value of H_0 (and enclose it), for a triaxial beta-model cluster sample possessing a distribution of apparent x-ray cluster ellipticities consistent with that of observed x-ray clusters.

  11. Color Estimation Error Trade-offs

    E-print Network

    Wandell, Brian A.

    ... must be transformed to calibrated (human) color representations for display or print reproduction. Errors in these color rendering transformations can arise from a variety of sources, including (a) noise ...

  12. Using doppler radar images to estimate aircraft navigational heading error

    DOEpatents

    Doerry, Armin W. (Albuquerque, NM); Jordan, Jay D. (Albuquerque, NM); Kim, Theodore J. (Albuquerque, NM)

    2012-07-03

    A yaw angle error of a motion measurement system carried on an aircraft for navigation is estimated from Doppler radar images captured using the aircraft. At least two radar pulses aimed at respectively different physical locations in a targeted area are transmitted from a radar antenna carried on the aircraft. At least two Doppler radar images that respectively correspond to the at least two transmitted radar pulses are produced. These images are used to produce an estimate of the yaw angle error.

  13. Error Estimates for Approximations from Control Nets

    E-print Network

    Wardetzky, Max

    Keywords: estimate, rational element. Classifications: ... 1. Introduction. Let {B_i : i ∈ I} be some collection of scalar ... Estimates for this process were mainly given in the form ‖b_i^h − L(b)(h_i)‖ ≤ C(b)h², i ∈ I_h, h → 0 (2) ... For the Bézier form on Ω := [α, β] ⊂ ℝ we have I := {0, 1, ..., n} and L(b)(x) = Σ_{i=0}^n b_i B_i^(n)(x) = Σ_{i=0}^n b_i (n choose i) ...

  14. Mapping random and systematic errors of satellite-derived snow water equivalent observations in Eurasia

    E-print Network

    Walker, Jeff

    ... for the 1990-1991 snow season (November-April) have been examined. Dense vegetation, especially in the taiga ... are greatest ... (in the taiga and tundra regions) are the major source of systematic error. Assumptions about how ...

  15. Multiple linear regression estimators with skew normal errors

    NASA Astrophysics Data System (ADS)

    Alhamide, A. A.; Ibrahim, K.; Alodat, M. T.

    2015-09-01

    The skew normal distribution is suitable for the analysis of data which is skewed. The purpose of this paper is to study the estimation of the regression parameters under extended multivariate skew normal errors. The estimators for the regression parameters are derived based on the maximum likelihood method. A simulation study is carried out to investigate the performance of the derived estimators, and the standard errors associated with the respective parameter estimates are found to be quite small.

  16. A POSTERIORI ERROR ESTIMATES FOR MAXWELL EQUATIONS

    E-print Network

    Schoeberl, Joachim

    Keywords: Clément operator, Maxwell equations, edge elements. The author acknowledges the support from the Johann Radon Institute ... Nédélec finite elements. In [4], a residual-type a posteriori error estimator was proposed and analyzed ... their numerical treatment by finite element methods is relatively new. A reason is that they require the vector ...

  17. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12 that have about 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14 that have about 2am/2pm orbital geometry) are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to get a continuous time series, first we have used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and to estimate this error. We find we can decrease the global temperature trend by about 0.07 K/decade. In addition there are systematic time-dependent errors present in the data that are introduced by the drift in the satellite orbital geometry: one arises from the diurnal cycle in temperature, and the other is the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The diurnal-cycle error can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The calibration-drift error is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of this error on the global temperature trend. In one path the entire error is placed in the am data while in the other it is placed in the pm data. The global temperature trend is increased or decreased by about 0.03 K/decade depending upon this placement. Taking into account all random errors and systematic errors, our analysis of MSU observations leads us to conclude that a conservative estimate of the global warming is 0.11 ± 0.04 K/decade during 1980 to 1998.

  18. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12 that have approximately 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14 that have approximately 2am/2pm orbital geometry) are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to get a continuous time series, first we have used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error eo. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and to estimate this error eo. We find eo can decrease the global temperature trend by approximately 0.07 K/decade. In addition there are systematic time-dependent errors ed and ec present in the data that are introduced by the drift in the satellite orbital geometry: ed arises from the diurnal cycle in temperature and ec is the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error ed can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The error ec is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the error ec on the global temperature trend. In one path the entire error ec is placed in the am data while in the other it is placed in the pm data. The global temperature trend is increased or decreased by approximately 0.03 K/decade depending upon this placement. Taking into account all random errors and systematic errors, our analysis of MSU observations leads us to conclude that a conservative estimate of the global warming is 0.11 ± 0.04 K/decade during 1980 to 1998.
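    The overlap-based removal of inter-satellite calibration offsets described above can be shown with a toy merge; the series, offset size and function names are my own invention, not the paper's data.

```python
# Toy illustration (my construction, not the paper's processing): estimate
# an inter-satellite calibration offset from the overlap period, remove it,
# then concatenate the records so the merged trend is continuous.
import numpy as np

def merge_with_overlap(t1, y1, t2, y2):
    """Append record 2 to record 1 after removing the mean difference
    over their common times (the overlap period)."""
    common = np.intersect1d(t1, t2)
    offset = np.mean(y1[np.isin(t1, common)] - y2[np.isin(t2, common)])
    new = ~np.isin(t2, t1)
    return np.concatenate([t1, t2[new]]), np.concatenate([y1, y2[new] + offset])

# two 'satellites': same 0.002 K/month trend, 0.3 K calibration jump
t1 = np.arange(0, 60);   y1 = 0.002 * t1
t2 = np.arange(48, 120); y2 = 0.002 * t2 + 0.3
t, y = merge_with_overlap(t1, y1, t2, y2)
print(np.polyfit(t, y, 1)[0])   # ~0.002: trend recovered without the jump
```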

  19. PERIOD ERROR ESTIMATION FOR THE KEPLER ECLIPSING BINARY CATALOG

    SciTech Connect

    Mighell, Kenneth J.; Plavchan, Peter

    2013-06-15

    The Kepler Eclipsing Binary Catalog (KEBC) describes 2165 eclipsing binaries identified in the 115 deg² Kepler Field based on observations from Kepler quarters Q0, Q1, and Q2. The periods in the KEBC are given in units of days out to six decimal places but no period errors are provided. We present the PEC (Period Error Calculator) algorithm, which can be used to estimate the period errors of strictly periodic variables observed by the Kepler Mission. The PEC algorithm is based on propagation of error theory and assumes that observation of every light curve peak/minimum in a long time-series observation can be unambiguously identified. The PEC algorithm can be efficiently programmed using just a few lines of C computer language code. The PEC algorithm was used to develop a simple model that provides period error estimates for eclipsing binaries in the KEBC with periods less than 62.5 days: log σ_P ≈ -5.8908 + 1.4425(1 + log P), where P is the period of an eclipsing binary in the KEBC in units of days. KEBC systems with periods ≥62.5 days have KEBC period errors of ~0.0144 days. Periods and period errors of seven eclipsing binary systems in the KEBC were measured using the NASA Exoplanet Archive Periodogram Service and compared to period errors estimated using the PEC algorithm.
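    The quoted error model is simple enough to transcribe directly; the sketch below assumes "log" denotes log10 (the abstract itself notes the full PEC algorithm fits in a few lines of C).

```python
# Direct transcription of the period-error model quoted above (P in days),
# assuming log means log10; the 62.5 d breakpoint and the 0.0144 d floor
# are as stated in the abstract.
import math

def kebc_period_error(P_days: float) -> float:
    if P_days < 62.5:
        return 10 ** (-5.8908 + 1.4425 * (1.0 + math.log10(P_days)))
    return 0.0144

print(kebc_period_error(1.0))     # ~3.6e-5 d for a 1 d binary
print(kebc_period_error(100.0))   # 0.0144 d floor for long periods
```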

  20. An Empirical State Error Covariance Matrix for Batch State Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty. Also, in its most straightforward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple, empirical sample variance problem, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple-degree-of-freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off-diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off-diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off-diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two-observer triangulation problem with range-only measurements. Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
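    A minimal numerical sketch of the scaling idea, as I read it from the abstract (the theoretical covariance is rescaled by the average weighted residual variance; sizes and names below are arbitrary):

```python
# Minimal sketch (not NASA code): rescale the normal-equation covariance by
# the *average* weighted residual variance, so unmodeled error sources
# inflate the reported state covariance.
import numpy as np

rng = np.random.default_rng(1)
m, n = 50, 2
A = np.column_stack([np.ones(m), np.linspace(0.0, 1.0, m)])  # design matrix
sigma_assumed = 0.05                          # assumed observation sigma
W = np.eye(m) / sigma_assumed**2
x_true = np.array([1.0, -0.5])
y = A @ x_true + rng.normal(0, 0.15, m)       # real errors are larger

N = A.T @ W @ A
x_hat = np.linalg.solve(N, A.T @ W @ y)
r = y - A @ x_hat                             # measurement residuals

P_theoretical = np.linalg.inv(N)              # maps assumed errors only
s2 = (r @ W @ r) / (m - n)                    # average weighted residual variance
P_empirical = s2 * P_theoretical              # residual-driven covariance
print(np.sqrt(np.diag(P_theoretical)))        # optimistic
print(np.sqrt(np.diag(P_empirical)))          # reflects the actual errors
```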

  1. Online estimation of background and observation errors within LETKF

    NASA Astrophysics Data System (ADS)

    Li, H.; Kalnay, E.; Miyoshi, T.

    2006-12-01

    It is a common experience that OSSE experiments are more optimistic (give better forecast impacts) than real observation experiments. This is generally attributed to the fact that in OSSEs the model errors are neglected (or at least they are known). Another difference between OSSEs and real observation experiments, however, is that the observation error statistics are perfectly known in the OSSEs but not in real forecast experiments. Recent diagnostic work within 3D-Var and 4D-Var (Desroziers and Ivanov, 2001; Chapnik et al., 2004, 2006; Talagrand, 1999; Cardinali et al., 2004; Rabier et al., 2002; Navascues et al., 2006 at HIRLAM; and others) suggests that innovation and other statistics can be used to tune observation and background errors. Miyoshi (2005) reported the use of the innovation statistics to estimate the background error inflation factor online within the LETKF. Although the results were satisfactory, they did not take into account the fact that the discrepancy between estimated and diagnosed total errors can also be due to observational errors. Here we propose to estimate observational errors (for each type of instrument) and the inflation coefficient for the background error simultaneously within the Local Ensemble Transform Kalman Filter (LETKF). Since the Kalman Filter equations are solved exactly within a local domain, it is possible to compute on the fly statistics such as the Degrees of Freedom of the Signal, DFS = Trace(KH), for each type of observation, which is equal to the number of model dof of each type reduced by the ratio A/B, where A and B are the analysis and background error covariances respectively (Cardinali et al., 2004). Following Miyoshi's (2005) approach, we will estimate and correct online within the LETKF the error variance of different instruments and the optimal inflation factor, using the diagnosed (observed) statistics <[y - h(x_a)]'[y - h(x_b)]> ~ Tr(R) = p x (observation error variance) and <[h(x_a) - h(x_b)]'[y - h(x_b)]> ~ Tr(HBH') = p x (background error variance) to estimate and correct the variance parameters (where p is the number of observations). This can be done using a simple Kalman Filter (e.g., Kalnay, 2003, appendix C) and persistence as their forecast, or by augmenting the state vector in the LETKF. Tests with a simple (Lorenz 96) model show it is possible to recover the correct observation error statistics as well as the optimal background error inflation. Tests with a global model will also be presented.
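    A toy check of the two diagnosed statistics quoted above (my construction: scalar state, direct observations so h reduces to the identity, and an optimal gain):

```python
# Verify that the two residual-product statistics recover R and B.
import numpy as np

rng = np.random.default_rng(2)
p = 100_000
truth = rng.normal(0.0, 1.0, p)
sig_b, sig_o = 0.8, 0.5
xb = truth + rng.normal(0, sig_b, p)      # background forecast
y = truth + rng.normal(0, sig_o, p)       # observations
K = sig_b**2 / (sig_b**2 + sig_o**2)      # optimal scalar gain B/(B+R)
xa = xb + K * (y - xb)                    # analysis

print(np.mean((y - xa) * (y - xb)))       # ~ R = sig_o**2 = 0.25
print(np.mean((xa - xb) * (y - xb)))      # ~ B = sig_b**2 = 0.64
```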

  2. Estimate of higher order ionospheric errors in GNSS positioning

    NASA Astrophysics Data System (ADS)

    Hoque, M. Mainul; Jakowski, N.

    2008-10-01

    Precise navigation and positioning using GPS/GLONASS/Galileo require the ionospheric propagation errors to be accurately determined and corrected for. The current dual-frequency method of ionospheric correction ignores higher-order ionospheric errors such as the second- and third-order ionospheric terms in the refractive index formula and errors due to bending of the signal. The total electron content (TEC) is assumed to be the same at the two GPS frequencies. All these assumptions lead to erroneous estimations and corrections of the ionospheric errors. In this paper a rigorous treatment of these problems is presented. Different approximation formulas have been proposed to correct errors due to excess path length in addition to the free space path length, the TEC difference at the two GNSS frequencies, and the third-order ionospheric term. The GPS dual-frequency residual range errors can be corrected within millimeter-level accuracy using the proposed correction formulas.
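    For reference, the first-order dual-frequency correction whose residual the paper's higher-order formulas address can be written down directly (a textbook GPS combination, not the paper's corrections; the numbers below are toy values):

```python
# Textbook first-order ionosphere-free combination for GPS pseudoranges.
F1, F2 = 1575.42e6, 1227.60e6      # GPS L1/L2 carrier frequencies, Hz

def ionosphere_free(p1: float, p2: float) -> float:
    """Pseudoranges p1, p2 in metres -> first-order-corrected range."""
    g1, g2 = F1**2, F2**2
    return (g1 * p1 - g2 * p2) / (g1 - g2)

# same geometric range; first-order ionospheric delay scales as 1/f^2
rho, I1 = 22_000_000.0, 5.0        # metres; 5 m delay on L1
I2 = I1 * (F1 / F2) ** 2           # larger delay on the lower frequency
print(ionosphere_free(rho + I1, rho + I2))   # recovers rho exactly here
```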

  3. Analysis of possible systematic errors in the Oslo method

    SciTech Connect

    Larsen, A. C.; Guttormsen, M.; Buerger, A.; Goergen, A.; Nyhus, H. T.; Rekstad, J.; Siem, S.; Toft, H. K.; Tveten, G. M.; Wikan, K.; Krticka, M.; Betak, E.; Schiller, A.; Voinov, A. V.

    2011-03-15

    In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of the level density and {gamma}-ray transmission coefficient from a set of particle-{gamma} coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.

  4. A multi-year methane inversion using SCIAMACHY, accounting for systematic errors using TCCON measurements

    NASA Astrophysics Data System (ADS)

    Houweling, S.; Krol, M.; Bergamaschi, P.; Frankenberg, C.; Dlugokencky, E. J.; Morino, I.; Notholt, J.; Sherlock, V.; Wunch, D.; Beck, V.; Gerbig, C.; Chen, H.; Kort, E. A.; Röckmann, T.; Aben, I.

    2013-10-01

    This study investigates the use of total column CH4 (XCH4) retrievals from the SCIAMACHY satellite instrument for quantifying large-scale emissions of methane. A unique data set from SCIAMACHY is available spanning almost a decade of measurements, covering a period when the global CH4 growth rate showed a marked transition from stable to increasing mixing ratios. The TM5 4DVAR inverse modelling system has been used to infer CH4 emissions from a combination of satellite and surface measurements for the period 2003-2010. In contrast to earlier inverse modelling studies, the SCIAMACHY retrievals have been corrected for systematic errors using the TCCON network of ground-based Fourier transform spectrometers. The aim is to further investigate the role of bias correction of satellite data in inversions. Methods for bias correction are discussed, and the sensitivity of the optimized emissions to alternative bias correction functions is quantified. It is found that the use of SCIAMACHY retrievals in TM5 4DVAR increases the estimated inter-annual variability of large-scale fluxes by 22% compared with the use of only surface observations. The difference in global methane emissions between two-year periods before and after July 2006 is estimated at 27-35 Tg yr-1. The use of SCIAMACHY retrievals causes a shift in the emissions from the extra-tropics to the tropics of 50 ± 25 Tg yr-1. The large uncertainty in this value arises from the uncertainty in the bias correction functions. Using measurements from the HIPPO and BARCA aircraft campaigns, we show that systematic errors are a main factor limiting the performance of the inversions. To further constrain tropical emissions of methane using current and future satellite missions, extended validation capabilities in the tropics are of critical importance.

  5. A multi-year methane inversion using SCIAMACHY, accounting for systematic errors using TCCON measurements

    NASA Astrophysics Data System (ADS)

    Houweling, S.; Krol, M.; Bergamaschi, P.; Frankenberg, C.; Dlugokencky, E. J.; Morino, I.; Notholt, J.; Sherlock, V.; Wunch, D.; Beck, V.; Gerbig, C.; Chen, H.; Kort, E. A.; Röckmann, T.; Aben, I.

    2014-04-01

    This study investigates the use of total column CH4 (XCH4) retrievals from the SCIAMACHY satellite instrument for quantifying large-scale emissions of methane. A unique data set from SCIAMACHY is available spanning almost a decade of measurements, covering a period when the global CH4 growth rate showed a marked transition from stable to increasing mixing ratios. The TM5 4DVAR inverse modelling system has been used to infer CH4 emissions from a combination of satellite and surface measurements for the period 2003-2010. In contrast to earlier inverse modelling studies, the SCIAMACHY retrievals have been corrected for systematic errors using the TCCON network of ground-based Fourier transform spectrometers. The aim is to further investigate the role of bias correction of satellite data in inversions. Methods for bias correction are discussed, and the sensitivity of the optimized emissions to alternative bias correction functions is quantified. It is found that the use of SCIAMACHY retrievals in TM5 4DVAR increases the estimated inter-annual variability of large-scale fluxes by 22% compared with the use of only surface observations. The difference in global methane emissions between 2-year periods before and after July 2006 is estimated at 27-35 Tg yr-1. The use of SCIAMACHY retrievals causes a shift in the emissions from the extra-tropics to the tropics of 50 ± 25 Tg yr-1. The large uncertainty in this value arises from the uncertainty in the bias correction functions. Using measurements from the HIPPO and BARCA aircraft campaigns, we show that systematic errors in the SCIAMACHY measurements are a main factor limiting the performance of the inversions. To further constrain tropical emissions of methane using current and future satellite missions, extended validation capabilities in the tropics are of critical importance.

  6. Difference image analysis: The interplay between the photometric scale factor and systematic photometric errors

    NASA Astrophysics Data System (ADS)

    Bramich, D. M.; Bachelet, E.; Alsubai, K. A.; Mislis, D.; Parley, N.

    2015-05-01

    Context. Understanding the source of systematic errors in photometry is essential for their calibration. Aims: We investigate how photometry performed on difference images can be influenced by errors in the photometric scale factor. Methods: We explore the equations for difference image analysis (DIA), and we derive an expression describing how errors in the difference flux, the photometric scale factor and the reference flux are propagated to the object photometry. Results: We find that the error in the photometric scale factor is important, and while a few studies have shown that it can be at a significant level, it is currently neglected by the vast majority of photometric surveys employing DIA. Conclusions: Minimising the error in the photometric scale factor, or compensating for it in a post-calibration model, is crucial for reducing the systematic errors in DIA photometry.
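    The propagation the Methods sentence refers to can be sketched under a common DIA convention (my assumption, not necessarily the paper's exact expression): the total flux is f_tot = f_ref + f_diff/p, with p the photometric scale factor.

```python
# Hedged sketch of first-order error propagation under the assumed
# convention f_tot = f_ref + f_diff / p; the paper's formula may differ.
import math

def dia_flux_and_error(f_ref, s_ref, f_diff, s_diff, p, s_p):
    """Propagate reference-flux, difference-flux and scale-factor errors
    into the total object flux (first-order partial derivatives)."""
    f_tot = f_ref + f_diff / p
    var = (s_ref**2
           + (s_diff / p) ** 2
           + (f_diff / p**2) ** 2 * s_p**2)   # the often-neglected p term
    return f_tot, math.sqrt(var)

# with only a 1% scale-factor error, the p term dominates this example:
# var = 3^2 + 4^2 + (500 * 0.01)^2 = 50
print(dia_flux_and_error(1000.0, 3.0, 500.0, 4.0, 1.0, 0.01))
```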

  7. Factor Loading Estimation Error and Stability Using Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Sass, Daniel A.

    2010-01-01

    Exploratory factor analysis (EFA) is commonly employed to evaluate the factor structure of measures with dichotomously scored items. Generally, only the estimated factor loadings are provided with no reference to significance tests, confidence intervals, and/or estimated factor loading standard errors. This simulation study assessed factor loading…

  8. Estimating the error distribution function in nonparametric regression

    E-print Network

    Mueller, Uschi

    ... distribution function based on residuals from an under-smoothed local quadratic smoother for the regression ... Classification: Primary 62G05, 62G08, 62G20. Key words and phrases: local polynomial smoother, kernel estimator. Estimating the error distribution function in nonparametric regression, Ursula U. Müller, Anton ...

  9. Adaptive Error Estimation in Linearized Ocean General Circulation Models

    NASA Technical Reports Server (NTRS)

    Chechelnitsky, Michael Y.

    1999-01-01

    Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by testing innovation-based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E), applied to TOPEX/POSEIDON (T/P) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced-state linear model that describes large-scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large representation error, i.e. the dominance of the mesoscale eddies in the T/P signal, which are not part of the 2 deg by 1 deg GCM. Therefore, the impact of the observations on the assimilation is very small even after the adjustment of the error statistics. This work demonstrates that simultaneous estimation of the model and measurement error statistics for data assimilation with global ocean data sets and linearized GCMs is possible. However, the error covariance estimation problem is in general highly underdetermined, much more so than the state estimation problem. In other words there exist a very large number of statistical models that can be made consistent with the available data. Therefore, methods for obtaining quantitative error estimates, powerful though they may be, cannot replace physical insight. Used in the right context, as a tool for guiding the choice of a small number of model error parameters, covariance matching can be a useful addition to the repertory of tools available to oceanographers.

  10. A Systematic Review of Software Development Cost Estimation Studies

    E-print Network

    A Systematic Review of Software Development Cost Estimation Studies, Magne Jørgensen, Simula ... identifies 304 software cost estimation papers in 76 journals and classifies the papers according to research ... We provide recommendations for future software cost estimation research: 1) increase the breadth ...

  11. Geodynamo model and error parameter estimation using geomagnetic data assimilation

    NASA Astrophysics Data System (ADS)

    Tangborn, Andrew; Kuang, Weijia

    2015-01-01

    We have developed a new geomagnetic data assimilation approach which uses the 'minimum variance' estimate for the analysis state, and which models both the forecast (or model output) and observation errors using an empirical approach and parameter tuning. This system is used in a series of assimilation experiments using Gauss coefficients (hereafter referred to as observational data) from the GUFM1 and CM4 field models for the years 1590-1990. We show that this assimilation system could be used to improve our knowledge of model parameters, model errors and the dynamical consistency of observation errors, by comparing forecasts of the magnetic field with the observations every 20 yr. Statistics of differences between observation and forecast (O - F) are used to determine how forecast accuracy depends on the Rayleigh number, the forecast error correlation length scale and an observation error scale factor. Experiments have been carried out which demonstrate that a Rayleigh number of 30 times the critical Rayleigh number produces better geomagnetic forecasts than lower values, with an Ekman number of E = 1.25 x 10^-6, which produces a modified magnetic Reynolds number within the parameter domain with an 'Earth-like' geodynamo. The optimal forecast error correlation length scale is found to be around 90 per cent of the thickness of the outer core, indicating a significant bias in the forecasts. Geomagnetic forecasts are also found to be highly sensitive to estimates of modelled observation errors: errors that are too small do not lead to the gradual reduction in forecast error with time that is generally expected in a data assimilation system, while observation errors that are too large lead to model divergence. Finally, we show that assimilation of L <= 3 (or large-scale) Gauss coefficients can help to improve forecasts of the L > 5 (smaller-scale) coefficients, and that these improvements are the result of corrections to the velocity field in the geodynamo model.

  12. Verification of unfold error estimates in the unfold operator code

    NASA Astrophysics Data System (ADS)

    Fehl, D. L.; Biggs, F.

    1997-01-01

    Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums.
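    A generic version of the built-in versus Monte Carlo comparison described above can be sketched as follows (not the UFO code; the response matrix and sizes are arbitrary):

```python
# Compare an error-matrix ("built-in") uncertainty with a Monte Carlo
# estimate obtained from Gaussian-perturbed data sets.
import numpy as np

rng = np.random.default_rng(3)
m, n = 12, 4
A = rng.uniform(0.0, 1.0, (m, n))        # overlapping response functions
x_true = np.array([3.0, 1.0, 2.0, 0.5])
y0 = A @ x_true
sigma = 0.05 * y0                        # 5% (1 sigma) data imprecision

Ainv = np.linalg.pinv(A)                 # least-squares unfold operator
cov_builtin = Ainv @ np.diag(sigma**2) @ Ainv.T   # error-matrix estimate

# Monte Carlo: unfold 100 randomly perturbed data sets, as in the paper
samples = np.array([Ainv @ (y0 + rng.normal(0.0, sigma)) for _ in range(100)])
cov_mc = np.cov(samples.T)

print(np.sqrt(np.diag(cov_builtin)))
print(np.sqrt(np.diag(cov_mc)))          # agree within sample statistics
```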

  13. Verification of unfold error estimates in the unfold operator code

    SciTech Connect

    Fehl, D.L.; Biggs, F.

    1997-01-01

    Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. © 1997 American Institute of Physics.

  14. Geodesy by radio interferometry - Effects of atmospheric modeling errors on estimates of baseline length

    NASA Technical Reports Server (NTRS)

    Davis, J. L.; Herring, T. A.; Shapiro, I. I.; Rogers, A. E. E.; Elgered, G.

    1985-01-01

    Analysis of very long baseline interferometry data indicates that systematic errors in prior estimates of baseline length, of order 5 cm for approximately 8000-km baselines, were due primarily to mismodeling of the electrical path length of the troposphere and mesosphere ('atmospheric delay'). Here observational evidence for the existence of such errors in the previously used models for the atmospheric delay is discussed, and a new 'mapping' function for the elevation angle dependence of this delay is developed. The delay predicted by this new mapping function differs from ray trace results by less than approximately 5 mm at all elevations down to 5 deg, and introduces errors into the estimates of baseline length of less than about 1 cm for the multistation intercontinental experiment analyzed here.

  15. Reducing systematic centroid errors induced by fiber optic faceplates in intensified high-accuracy star trackers.

    PubMed

    Xiong, Kun; Jiang, Jie

    2015-01-01

    Compared with traditional star trackers, intensified high-accuracy star trackers equipped with an image intensifier exhibit overwhelmingly superior dynamic performance. However, the multiple-fiber-optic faceplate structure in the image intensifier complicates the optoelectronic detecting system of star trackers and may cause considerable systematic centroid errors and poor attitude accuracy. All the sources of systematic centroid errors related to fiber optic faceplates (FOFPs) throughout the detection process of the optoelectronic system were analyzed. Based on the general expression of the systematic centroid error deduced in the frequency domain and the FOFP modulation transfer function, an accurate expression that described the systematic centroid error of FOFPs was obtained. Furthermore, reduction of the systematic error between the optical lens and the input FOFP of the intensifier, the one among multiple FOFPs and the one between the output FOFP of the intensifier and the imaging chip of the detecting system were discussed. Two important parametric constraints were acquired from the analysis. The correctness of the analysis on the optoelectronic detecting system was demonstrated through simulation and experiment. PMID:26016920

  16. Analysis of systematic error in “bead method” measurements of meteorite bulk volume and density

    NASA Astrophysics Data System (ADS)

    Macke S. J., Robert J.; Britt, Daniel T.; Consolmagno S. J., Guy J.

    2010-02-01

    The Archimedean glass bead method for determining meteorite bulk density has become widely applied. We used well-characterized, zero-porosity quartz and topaz samples to determine the systematic error in the glass bead method to support bulk density measurements of meteorites for our ongoing meteorite survey. Systematic error varies according to bead size, container size and settling method, but in all cases is less than 3%, and generally less than 2%. While measurements using larger containers (above 150 cm³) exhibit no discernible systematic error but much reduced precision, higher precision measurements with smaller containers do exhibit systematic error. For a 77 cm³ container using 40-80 μm diameter beads, the systematic error is effectively eliminated within measurement uncertainties when a "secured shake" settling method is employed in which the container is held securely to the shake platform during a 5 s period of vigorous shaking. For larger 700-800 μm diameter beads using the same method, bulk volumes are uniformly overestimated by 2%. Other settling methods exhibit sample-volume-dependent biases. For all methods, reliability of measurement is severely reduced for samples below ~5 cm³ (10-15 g for typical meteorites), providing a lower-limit selection criterion for measurement of meteoritical samples.

  17. Reducing Systematic Centroid Errors Induced by Fiber Optic Faceplates in Intensified High-Accuracy Star Trackers

    PubMed Central

    Xiong, Kun; Jiang, Jie

    2015-01-01

    Compared with traditional star trackers, intensified high-accuracy star trackers equipped with an image intensifier exhibit overwhelmingly superior dynamic performance. However, the multiple-fiber-optic faceplate structure in the image intensifier complicates the optoelectronic detecting system of star trackers and may cause considerable systematic centroid errors and poor attitude accuracy. All the sources of systematic centroid errors related to fiber optic faceplates (FOFPs) throughout the detection process of the optoelectronic system were analyzed. Based on the general expression of the systematic centroid error deduced in the frequency domain and the FOFP modulation transfer function, an accurate expression that described the systematic centroid error of FOFPs was obtained. Furthermore, reduction of the systematic error between the optical lens and the input FOFP of the intensifier, the one among multiple FOFPs and the one between the output FOFP of the intensifier and the imaging chip of the detecting system were discussed. Two important parametric constraints were acquired from the analysis. The correctness of the analysis on the optoelectronic detecting system was demonstrated through simulation and experiment. PMID:26016920

  18. foraminifera, which are subject to potentially large systematic errors.

    E-print Network

    Fields, Stan

    ... presumably from below 2800 m. The oldest waters found by Marchitto et al. have an age of ~4000 years. For comparison ... to date that the glacial ocean contained some very poorly ventilated water somewhere in its depths ... with the estimated trends in the production of radiocarbon by cosmic rays--a comparison that seems to demand ...

  19. SU-E-T-613: Dosimetric Consequences of Systematic MLC Leaf Positioning Errors

    SciTech Connect

    Kathuria, K; Siebers, J

    2014-06-01

    Purpose: The purpose of this study is to determine the dosimetric consequences of systematic MLC leaf positioning errors for clinical IMRT patient plans so as to establish detection tolerances for quality assurance programs. Materials and Methods: Dosimetric consequences were simulated by extracting MLC delivery instructions from the TPS, altering the file by the specified error, reloading the delivery instructions into the TPS, recomputing dose, and extracting dose-volume metrics for one head-and-neck and one prostate patient. Machine error was simulated by offsetting MLC leaves in Pinnacle in a systematic way. Three different algorithms were followed for these systematic offsets, as follows: a systematic sequential one-leaf offset (one leaf offset in one segment per beam), a systematic uniform one-leaf offset (the same one leaf offset per segment per beam), and a systematic offset of a given number of leaves picked uniformly at random from a given number of segments (5 out of 10 total). Dose to the PTV and normal tissue was simulated. Results: A systematic 5 mm offset of 1 leaf for all delivery segments of all beams resulted in a maximum PTV D98 deviation of 1%. Results showed very low dose error in all reasonably possible machine configurations, rare or otherwise, which could be simulated. Very low error in dose to the PTV and OARs was shown in all possible cases of one leaf per beam per segment being offset (<1%), or of only one leaf per beam being offset (<0.2%). The errors resulting from a high number of adjacent leaves (maximum of 5 out of 60 total leaf pairs) being simultaneously offset in many (5) of the control points (10-18 total in all beams) per beam, in both the PTV and the OARs analyzed, were similarly low (<2-3%). Conclusions: The above results show that patient shifts and anatomical changes are the main source of errors in dose delivered, not machine delivery. These two sources of error are "visually complementary" and uncorrelated (albeit not additive in the final error), and one can easily incorporate error resulting from machine delivery in an error model based purely on tumor motion.
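    The three offset patterns are easy to mock up on leaf-position arrays; the shapes, indices and function names below are my own illustration, not the Pinnacle interface.

```python
# Toy mock-up of the three systematic offset patterns, applied to leaf
# positions of shape (segments, leaf pairs); offsets in mm.
import numpy as np

def sequential_one_leaf(bank, offset=5.0):
    """A different single leaf offset in each segment."""
    out = bank.copy()
    for seg in range(out.shape[0]):
        out[seg, seg % out.shape[1]] += offset
    return out

def uniform_one_leaf(bank, leaf=10, offset=5.0):
    """The same single leaf offset in every segment."""
    out = bank.copy()
    out[:, leaf] += offset
    return out

def random_segments(bank, n_leaves=5, n_segs=5, offset=5.0, seed=0):
    """A run of adjacent leaves offset in a random subset of segments."""
    rng = np.random.default_rng(seed)
    out = bank.copy()
    segs = rng.choice(out.shape[0], size=n_segs, replace=False)
    start = rng.integers(0, out.shape[1] - n_leaves)
    out[np.ix_(segs, np.arange(start, start + n_leaves))] += offset
    return out

bank = np.zeros((10, 60))   # 10 segments, 60 leaf pairs
print(np.count_nonzero(sequential_one_leaf(bank)))   # 10 perturbed entries
```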

  20. SYSTEMATIC ERROR REDUCTION: NON-TILTED REFERENCE BEAM METHOD FOR LONG TRACE PROFILER.

    SciTech Connect

    QIAN,S.; QIAN, K.; HONG, Y.; SENG, L.; HO, T.; TAKACS, P.

    2007-08-25

    Systematic error in the Long Trace Profiler (LTP) has become the major error source as measurement accuracy enters the nanoradian and nanometer regime. Great efforts have been made to reduce the systematic error at a number of synchrotron radiation laboratories around the world. Generally, the LTP reference beam has to be tilted away from the optical axis in order to avoid fringe overlap between the sample and reference beams. However, a tilted reference beam will result in considerable systematic error due to optical system imperfections, which is difficult to correct. Six methods of implementing a non-tilted reference beam in the LTP are introduced: (1) application of an external precision angle device to measure and remove slide pitch error without a reference beam, (2) an independent slide pitch test by use of a non-tilted reference beam, (3) a non-tilted reference test combined with a tilted sample, (4) a penta-prism scanning mode without reference beam correction, (5) a non-tilted reference using a second optical head, and (6) alternate switching of data acquisition between the sample and reference beams. With a non-tilted reference method, the measurement accuracy can be improved significantly. Some measurement results are presented. Systematic error in the sample beam arm is not addressed in this paper and should be treated separately.

  1. The nature of the systematic radiometric error in the MGS TES spectra

    NASA Astrophysics Data System (ADS)

    Pankine, Alexey A.

    2015-05-01

    Several systematic radiometric errors are known to affect the data collected by the Thermal Emission Spectrometer (TES) onboard Mars Global Surveyor (MGS). The time-varying wavenumber dependent error that significantly increased in magnitude as the MGS mission progressed is discussed in detail. This error mostly affects spectra of cold (nighttime and polar caps) surfaces and atmospheric spectra in limb viewing geometry. It is proposed here that the source of the radiometric error is a periodic sampling error of the TES interferograms. A simple model of the error is developed that allows predicting its spectral shape for any viewing geometry based on the observed uncalibrated spectrum. Comparison of the radiometric errors observed in the TES spaceviews and those predicted by the model shows an excellent agreement. Spectral shapes of the errors for nadir and limb spectra are simulated based on representative TES spectra. In nighttime and limb spectra, and in spectra of cold polar regions, these radiometric errors can result in an error of ±3-5 K in the retrieved atmospheric and surface temperatures, and significant errors in retrieved opacities of atmospheric aerosols. The model of the TES radiometric error presented here can be used to improve the accuracy of the TES retrievals and increase scientific return from the MGS mission.

  2. The meaning of measurement error in parameter estimation

    NASA Astrophysics Data System (ADS)

    Freer, J.; Beven, K. J.; Choi, H. T.

    2003-04-01

    Statistical inference of parameter estimates has traditionally been based on the concept that any residual variance in fitting a calibration data set can be treated as random measurement error. This terminology, originally arising in the fitting of simple distributions, is still used today in a variety of calibration problems, despite the fact that it is all too clear that the residual variance has to do with uncertainties arising from the interaction of input data error, the use of effective parameter values and model structural error. The problem has always been that it is difficult to separate out all these different potential sources of uncertainty without making some very strong and difficult-to-justify assumptions about the nature of those uncertainties. It has been easier to lump the uncertainties into a modelling error and treat it as a "measurement error" of the traditional form. The problem in doing so, however, is that estimation of parameter values in this context cannot easily relax the assumption that the model is correct. Even modern reversible jump MCMC methods treat each model structure considered as if it were the correct model (even though none may, in fact, be correct). Thus, the problem of model structural error has generally been underestimated, despite the fact that in many cases it is possible to reject even "optimal" models when we examine their performance in detail. Thus some innovative methods are needed to address the problem of measurement error in the face of model structural error. Here, a modification of the GLUE methodology is presented that allows measurement error to be just that!

  3. Application of variance components estimation to calibrate geoid error models.

    PubMed

    Guo, Dong-Mei; Xu, Hou-Ze

    2015-01-01

    The method of using Global Positioning System-leveling data to obtain orthometric heights has been well studied. A simple formulation for the weighted least squares problem has been presented in an earlier work. This formulation allows one to directly employ the errors-in-variables models, which completely describe the covariance matrices of the observables. However, the important question of what accuracy level can be achieved has not yet been satisfactorily answered by this traditional formulation. One of the main reasons for this is the incorrectness of the stochastic models in the adjustment, which in turn calls for improving the stochastic models of the measurement noises. Therefore the issue of determining the stochastic modeling of observables in the combined adjustment with heterogeneous height types is the main focus of this paper. Firstly, the well-known method of variance component estimation is employed to calibrate the errors of heterogeneous height data in a combined least squares adjustment of ellipsoidal, orthometric and gravimetric geoid data. Specifically, the iterative algorithms of minimum norm quadratic unbiased estimation are used to estimate the variance components for each of the heterogeneous observations. Secondly, two different statistical models are presented to illustrate the theory. The first method directly uses the errors-in-variables as a priori covariance matrices and the second method analyzes the biases of the variance components and then proposes bias-corrected variance component estimators. Several numerical test results show the capability and effectiveness of the variance component estimation procedure in the combined adjustment for calibrating the geoid error model. PMID:26306296
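    The flavor of iterative variance component estimation can be conveyed with a heavily simplified two-group example (a Helmert/Foerstner-style iteration of my own construction, far simpler than the paper's MINQUE formulation):

```python
# Two instruments observe one common quantity; iterate group weights and
# group redundancies until the variance components stabilize.
import numpy as np

rng = np.random.default_rng(4)
g1 = 10.0 + rng.normal(0, 0.02, 40)   # instrument 1, true sigma = 0.02
g2 = 10.0 + rng.normal(0, 0.10, 40)   # instrument 2, true sigma = 0.10
s2 = np.array([1.0, 1.0])             # initial variance components

for _ in range(20):
    w1, w2 = len(g1) / s2[0], len(g2) / s2[1]   # total group weights
    x = (g1.sum() / s2[0] + g2.sum() / s2[1]) / (w1 + w2)  # weighted mean
    v1, v2 = g1 - x, g2 - x                     # group residuals
    # group redundancies: n_g minus that group's share of the one parameter
    r1 = len(g1) - w1 / (w1 + w2)
    r2 = len(g2) - w2 / (w1 + w2)
    s2 = np.array([np.sum(v1**2) / r1, np.sum(v2**2) / r2])

print(np.sqrt(s2))   # converges towards ~[0.02, 0.10]
```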

  4. Estimating errors in IceBridge freeboard at ICESat Scales

    NASA Astrophysics Data System (ADS)

    Prado, D. W.; Xie, H.; Ackley, S. F.; Wang, X.

    2014-12-01

    The Airborne Topographic Mapper (ATM) system flown on NASA Operation IceBridge allows for estimation of sea ice thickness from surface elevations in the Bellingshausen-Amundsen Seas. The estimation of total freeboard is based on the accuracy of local sea level estimations and the footprint size. We used the high density of ATM L1B (~1 m footprint) observations at varying spatial resolutions to assess errors associated with averaging over larger footprints and the deviation of local sea level from the WGS-84 geoid over longer segment lengths. The ATM data sets allow for a comparison between IceBridge (2009-2014) and ICESat (2003-2009) derived freeboards by comparing the ATM L2 (~70 m footprint) data, similar to the ICESat footprint. While the average freeboard estimates for the L2 data in 2009 underestimate total freeboard by only 5 cm at 5 km segment lengths, the error increases to 49 cm at the 50 km segment lengths typical of ICESat analyses. Since the error in freeboard estimation greatly increases at the segment lengths used for ICESat analyses, some caution may be required in comparing ICESat thickness estimates with later IceBridge estimates over the same region.

  5. Error Estimation for the Linearized Auto-Localization Algorithm

    PubMed Central

    Guevara, Jorge; Jiménez, Antonio R.; Prieto, Jose Carlos; Seco, Fernando

    2012-01-01

    The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons' positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first-order Taylor approximation of the equations. Since the method depends on such an approximation, a confidence parameter is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method. PMID:22736965
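    The first-order Taylor propagation at the core of the method can be illustrated generically; the distance function below is a placeholder of mine, not the LAL trilateration equations.

```python
# Generic first-order (Taylor) error propagation with a numerical Jacobian.
import numpy as np

def propagate(f, x, cov_x, eps=1e-6):
    """Covariance of y = f(x) to first order: J cov_x J^T."""
    y0 = np.atleast_1d(f(x))
    J = np.zeros((y0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (np.atleast_1d(f(x + dx)) - y0) / eps
    return J @ cov_x @ J.T

# placeholder: planar distance between two estimated points (x1, y1, x2, y2)
dist = lambda p: np.hypot(p[2] - p[0], p[3] - p[1])
p = np.array([0.0, 0.0, 3.0, 4.0])
cov_p = np.eye(4) * 0.01**2            # 1 cm std on every coordinate
print(np.sqrt(propagate(dist, p, cov_p)))   # ~0.014 m std on the distance
```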

  6. The Origin of Systematic Errors in the GCM Simulation of ITCZ Precipitation over Oceans

    NASA Technical Reports Server (NTRS)

    Chao, Winston C.; Suarez, Max J.; Bacmeister, Julio T.; Chen, Baode; Takacs, Lawrence L.

    2006-01-01

    This study provides explanations for some of the experimental findings of Chao (2000) and Chao and Chen (2001) concerning the mechanisms responsible for the ITCZ in an aqua-planet model. These explanations are then applied to explain the origin of some of the systematic errors in the GCM simulation of ITCZ precipitation over oceans. The ITCZ systematic errors are highly sensitive to model physics and, by extension, model horizontal resolution. The findings in this study, along with those of Chao (2000) and Chao and Chen (2001, 2004), contribute to building a theoretical foundation for ITCZ study. A few possible methods of alleviating the systematic errors in the GCM simulation of ITCZ are discussed. This study uses a recent version of the Goddard Modeling and Assimilation Office's Goddard Earth Observing System (GEOS-5) GCM.

  7. Sources of systematic error in calibrated BOLD based mapping of baseline oxygen extraction fraction.

    PubMed

    Blockley, Nicholas P; Griffeth, Valerie E M; Stone, Alan J; Hare, Hannah V; Bulte, Daniel P

    2015-11-15

    Recently a new class of calibrated blood oxygen level dependent (BOLD) functional magnetic resonance imaging (MRI) methods was introduced to quantitatively measure the baseline oxygen extraction fraction (OEF). These methods rely on two respiratory challenges and a mathematical model of the resultant changes in the BOLD functional MRI signal to estimate the OEF. However, this mathematical model does not include all of the effects that contribute to the BOLD signal; it relies on several physiological assumptions and it may be affected by intersubject physiological variability. The aim of this study was to investigate these sources of systematic error and their effect on estimating the OEF. This was achieved through simulation using a detailed model of the BOLD signal. Large ranges for intersubject variability in baseline physiological parameters such as haematocrit and cerebral blood volume were considered. Despite this, the uncertainty in the relationship between the measured BOLD signals and the OEF was relatively low. Investigations of the physiological assumptions that underlie the mathematical model revealed that OEF measurements are likely to be overestimated if oxygen metabolism changes during hypercapnia or cerebral blood flow changes under hyperoxia. Hypoxic hypoxia was predicted to result in an underestimation of the OEF, whilst anaemic hypoxia was found to have only a minimal effect. PMID:26254114

  8. Test models for improving filtering with model errors through stochastic parameter estimation

    SciTech Connect

    Gershgorin, B.; Harlim, J.; Majda, A. J.

    2010-01-01

    The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.
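    The state-augmentation idea behind stochastic parameter estimation can be shown in miniature (a toy additive-bias filter of my own; SPEKF itself propagates the augmented moments with exact formulas for both multiplicative and additive terms):

```python
# Estimate an unknown additive bias b in x_{k+1} = a x_k + b + noise by
# filtering the augmented state [x, b].
import numpy as np

rng = np.random.default_rng(5)
a, b_true = 0.9, 0.5
Q, Rv = 0.01, 0.04                    # process / observation noise variances

x = np.array([0.0, 0.0])              # augmented state [x, b]
P = np.eye(2)
F = np.array([[a, 1.0], [0.0, 1.0]])  # b modelled as (nearly) constant
Qm = np.diag([Q, 1e-6])               # tiny noise keeps b adaptable
H = np.array([[1.0, 0.0]])            # we observe x only

xt = 0.0
for _ in range(500):
    xt = a * xt + b_true + rng.normal(0, np.sqrt(Q))   # truth
    y = xt + rng.normal(0, np.sqrt(Rv))                # observation
    x = F @ x
    P = F @ P @ F.T + Qm                               # forecast step
    S = H @ P @ H.T + Rv
    K = P @ H.T / S                                    # Kalman gain
    x = x + (K * (y - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P                        # update step

print(x[1])   # estimated bias, close to b_true = 0.5
```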

  9. Error estimation and adaptive mesh refinement for aerodynamic flows

    E-print Network

    Hartmann, Ralf

    Error estimation and adaptive mesh refinement for aerodynamic flows. Ralf Hartmann, Joachim Held. Covers goal-oriented mesh refinement for single and multiple aerodynamic force coefficients, as well as residual-based mesh refinement, applied to various three-dimensional laminar and turbulent aerodynamic test cases.

  10. Concise Formulas for the Standard Errors of Component Loading Estimates.

    ERIC Educational Resources Information Center

    Ogasawara, Haruhiko

    2002-01-01

    Derived formulas for the asymptotic standard errors of component loading estimates to cover the cases of principal component analysis for unstandardized and standardized variables with orthogonal and oblique rotations. Used the formulas with a real correlation matrix of 355 subjects who took 12 psychological tests. (SLD)

  11. Error Estimates for the Approximation of the Effective Hamiltonian

    SciTech Connect

    Camilli, Fabio; Capuzzo Dolcetta, Italo; Gomes, Diogo A.

    2008-02-15

    We study approximation schemes for the cell problem arising in the homogenization of Hamilton-Jacobi equations. We prove several error estimates concerning the rate of convergence of the approximation scheme to the effective Hamiltonian, both in the optimal control setting and in the calculus of variations setting.

  12. Error Estimates of a Combined Finite Volume --Finite Element Method

    E-print Network

    Magdeburg, Universität

    Error Estimates of a Combined Finite Volume -- Finite Element Method for Nonlinear Convection-Diffusion Problems. Nonlinear convective terms are approximated with the aid of a monotone finite volume scheme, combined with a finite element discretization of the diffusion term. Key words: nonlinear convection-diffusion equation, monotone finite volume schemes, finite element method, numerical ...

  13. MULTITARGET ERROR ESTIMATION AND ADAPTIVITY IN AERODYNAMIC FLOW SIMULATIONS

    E-print Network

    Hartmann, Ralf

    MULTITARGET ERROR ESTIMATION AND ADAPTIVITY IN AERODYNAMIC FLOW SIMULATIONS. Ralf Hartmann. Abstract: Important quantities in aerodynamic flow simulations are the aerodynamic force coefficients. AMS subject classifications: 65N12, 65N15, 65N30. Introduction: In aerodynamic computations like compressible ...

  14. MULTITARGET ERROR ESTIMATION AND ADAPTIVITY IN AERODYNAMIC FLOW SIMULATIONS

    E-print Network

    Hartmann, Ralf

    MULTITARGET ERROR ESTIMATION AND ADAPTIVITY IN AERODYNAMIC FLOW SIMULATIONS. Ralf Hartmann. Abstract: Important quantities in aerodynamic flow simulations are the aerodynamic force coefficients; the flows considered are governed by the compressible Navier-Stokes equations. AMS subject classifications: 65N12, 65N15, 65N30. Introduction: In aerodynamic ...

  15. Modeling Radar Rainfall Estimation Uncertainties: Random Error Model

    E-print Network

    AghaKouchak, Amir

    Modeling Radar Rainfall Estimation Uncertainties: Random Error Model. A. AghaKouchak, E. Habib, and A. Bárdossy. Abstract: Precipitation is a major input in hydrological models. Radar rainfall data, compared with rain gauge measurements, provide higher spatial and temporal resolutions. However, radar data ...

  16. Bootstrap Standard Error Estimates in Dynamic Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Browne, Michael W.

    2010-01-01

    Dynamic factor analysis summarizes changes in scores on a battery of manifest variables over repeated measurements in terms of a time series in a substantially smaller number of latent factors. Algebraic formulae for standard errors of parameter estimates are more difficult to obtain than in the usual intersubject factor analysis because of the…

  17. Condition and Error Estimates in Numerical Matrix Computations

    SciTech Connect

    Konstantinov, M. M.; Petkov, P. H.

    2008-10-30

    This tutorial paper deals with sensitivity and error estimates in matrix computational processes. The main factors determining the accuracy of the result computed in floating-point machine arithmetic are considered. Special attention is paid to the perturbation analysis of matrix algebraic equations and unitary matrix decompositions.

  18. Error estimates for universal back-projection-based photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Pandey, Prabodh K.; Naik, Naren; Munshi, Prabhat; Pradhan, Asima

    2015-07-01

    Photoacoustic tomography is a hybrid imaging modality that combines the advantages of optical and ultrasound imaging techniques to produce images with high resolution and good contrast at high penetration depths. The choice of reconstruction algorithm, as well as experimental and computational parameters, plays a major role in governing the accuracy of a tomographic technique. Error estimates under variation of these parameters are therefore of great importance. Because the photoacoustic source has finite support, the pressure signals are not band-limited, but in practice our detection system is. Hence the reconstructed image from ideal, noiseless band-limited forward data (for future reference we will call this the band-limited reconstruction) is the best approximation that we have for the unknown object. In the present study, we report the error that arises in universal back-projection (UBP) based photoacoustic reconstruction for planar detection geometry due to sampling and filtering of the forward data (pressure signals). Computational validation of the error estimates has been carried out for synthetic phantoms. Validation with noisy forward data has also been carried out to study the effect of noise on the error estimates derived in our work. Although here we have derived the estimates for planar detection geometry, the derivations for spherical and cylindrical geometries follow accordingly.

  19. Estimating Filtering Errors Using the Peano Kernel Theorem

    SciTech Connect

    Jerome Blair

    2008-03-01

    The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.

  20. Estimating Filtering Errors Using the Peano Kernel Theorem

    SciTech Connect

    Jerome Blair

    2009-02-20

    The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.

  1. The Origin of Systematic Errors in the GCM Simulation of ITCZ Precipitation

    NASA Technical Reports Server (NTRS)

    Chao, Winston C.; Suarez, M. J.; Bacmeister, J. T.; Chen, B.; Takacs, L. L.

    2006-01-01

    Previous GCM studies have found that the systematic errors in the GCM simulation of the seasonal mean ITCZ intensity and location could be substantially corrected by adding a suitable amount of rain re-evaporation or cumulus momentum transport. However, the reasons for these systematic errors and for the success of these corrections have remained a puzzle. In this work the knowledge gained from previous studies of the ITCZ in an aqua-planet model with zonally uniform SST is applied to solve this puzzle. The solution is supported by further aqua-planet and full model experiments using the latest version of the Goddard Earth Observing System GCM.

  2. Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G

    NASA Astrophysics Data System (ADS)

    DeSalvo, Riccardo

    2015-06-01

    Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similarly to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested.

  3. Optimizing MRI-targeted fusion prostate biopsy: the effect of systematic error and anisotropy on tumor sampling

    NASA Astrophysics Data System (ADS)

    Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.

    2015-03-01

    Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the 21-47% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still has a substantial false negative rate. Therefore, we propose optimization of biopsy targeting to meet the clinician's desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. As a step toward this optimization, we obtained multiparametric MRI (mpMRI) and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy, and investigated the effects of systematic errors and anisotropy on P. Our experiments indicated that a biopsy system's lateral and elevational errors have a much greater effect on sampling probabilities, relative to its axial error. We have also determined that for a system with RMS error of 3.5 mm, tumors of volume 1.9 cm3 and smaller may require more than one biopsy core to ensure 95% probability of a sample with 50% core involvement, and tumors 1.0 cm3 and smaller may require more than two cores.
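
    The reported sensitivity to error anisotropy can be reproduced qualitatively with a short Monte Carlo sketch. All geometry and error magnitudes below are assumptions for illustration; the paper's analysis uses registered patient tumor surfaces and models percent core involvement, which this toy calculation does not.

      import numpy as np

      # Probability that a single core, aimed at the center of a spherical
      # tumor, lands inside it under anisotropic Gaussian targeting errors.
      rng = np.random.default_rng(1)
      n_trials = 200_000
      tumor_radius_mm = 7.7        # sphere of volume ~1.9 cm^3

      def hit_probability(sig_lateral, sig_elevational, sig_axial):
          err = rng.normal(0.0, [sig_lateral, sig_elevational, sig_axial],
                           size=(n_trials, 3))
          return np.mean(np.linalg.norm(err, axis=1) < tumor_radius_mm)

      # Same 3.5 mm total RMS error, distributed differently across axes:
      iso = 3.5 / np.sqrt(3)
      print("isotropic errors:  P =", hit_probability(iso, iso, iso))
      print("axial-dominant:    P =", hit_probability(1.0, 1.0,
                                                      np.sqrt(3.5**2 - 2.0)))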

  4. Temporal correlations of atmospheric mapping function errors in GPS estimation

    NASA Astrophysics Data System (ADS)

    Stoew, Borys; Nilsson, Tobias; Elgered, Gunnar; Jarlemark, Per O. J.

    2007-05-01

    The developments in global satellite navigation using GPS, GLONASS, and Galileo will yield more observations at various elevation angles. The inclusion of data acquired at low elevation angles allows for geometrically stronger solutions. The vertical coordinate estimate of a GPS site is one of the parameters affected by the elevation-dependent error sources, especially the atmospheric corrections, whose proper description becomes necessary. In this work, we derive time-series of normalized propagation delays in the neutral atmosphere using ray tracing of radiosonde data, and compare these to the widely used new mapping functions (NMF) and improved mapping functions (IMF). Performance analysis of mapping functions is carried out in terms of bias and uncertainty introduced in the vertical coordinate. Simulation runs show that time-correlated mapping errors introduce vertical coordinate RMS errors as large as 4 mm for an elevation cut-off angle of 5°. When simulation results are compared with a geodetic GPS solution, the variations in the vertical coordinate due to mapping errors for an elevation cut-off of 5° are similar in magnitude to those caused by all error sources combined at 15° cut-off. This is significant for the calculation of the error budget in geodetic GPS applications. The results presented here are valid for a limited area in North Europe, but the technique is applicable to any region provided that radiosonde data are available.

  5. Error estimates and specification parameters for functional renormalization

    SciTech Connect

    Schnoerr, David; Boettcher, Igor; Pawlowski, Jan M.; ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum für Schwerionenforschung mbH, D-64291 Darmstadt ; Wetterich, Christof

    2013-07-15

    We present a strategy for estimating the error of truncated functional flow equations. While the basic functional renormalization group equation is exact, approximated solutions by means of truncations do not only depend on the choice of the retained information, but also on the precise definition of the truncation. Therefore, results depend on specification parameters that can be used to quantify the error of a given truncation. We demonstrate this for the BCS–BEC crossover in ultracold atoms. Within a simple truncation the precise definition of the frequency dependence of the truncated propagator affects the results, indicating a shortcoming of the choice of a frequency independent cutoff function.

  6. Discretization error estimation and exact solution generation using the method of nearby problems.

    SciTech Connect

    Sinclair, Andrew J.; Raju, Anil; Kurzen, Matthew J.; Roy, Christopher John; Phillips, Tyrone S.

    2011-10-01

    The Method of Nearby Problems (MNP), a form of defect correction, is examined as a method for generating exact solutions to partial differential equations and as a discretization error estimator. For generating exact solutions, four-dimensional spline fitting procedures were developed and implemented into a MATLAB code for generating spline fits on structured domains with arbitrary levels of continuity between spline zones. For discretization error estimation, MNP/defect correction only requires a single additional numerical solution on the same grid (as compared to Richardson extrapolation which requires additional numerical solutions on systematically-refined grids). When used for error estimation, it was found that continuity between spline zones was not required. A number of cases were examined including 1D and 2D Burgers equation, the 2D compressible Euler equations, and the 2D incompressible Navier-Stokes equations. The discretization error estimation results compared favorably to Richardson extrapolation and had the advantage of only requiring a single grid to be generated.
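
    For contrast, the Richardson extrapolation estimator that MNP is compared against can be written in a few lines. This is the generic textbook form with assumed solution values, not the MNP/defect correction procedure itself.

      import numpy as np

      def richardson_error(f_coarse, f_fine, p=2, refinement=2.0):
          """Estimated discretization error of the fine-grid solution for a
          scheme of formal order p under the given grid refinement factor."""
          return (f_coarse - f_fine) / (refinement**p - 1.0)

      # Example: second-order approximations of an exact value of 1.0,
      # with errors that scale as h^2 (0.04 on grid h, 0.01 on grid h/2).
      f_h, f_h2 = 1.04, 1.01
      print(richardson_error(f_h, f_h2))   # ~0.01, the fine-grid error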

  7. Nonparametric kernel estimation of the probability density function of regression errors using estimated residuals

    E-print Network

    Samb, Rawane

    2010-01-01

    This paper deals with the nonparametric density estimation of the regression error term, assuming its independence of the covariate. The difference between the feasible estimator, which uses the estimated residuals, and the unfeasible one, which uses the true residuals, is studied. An optimal choice of the bandwidth used to estimate the residuals is given. We also study the asymptotic normality of the feasible kernel estimator and its rate optimality.
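
    A minimal sketch of the feasible estimator follows, using scipy's default bandwidth rule rather than the paper's optimal bandwidth choice; the regression model and error distribution are assumptions for illustration.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(2)
      n = 2000
      x = rng.uniform(0.0, 1.0, n)
      eps = rng.laplace(0.0, 0.3, n)       # true (non-Gaussian) error term
      y = 2.0 + 1.5 * x + eps              # regression with independent error

      # Feasible estimator: estimate residuals from a fitted regression,
      # then apply a kernel density estimator to them.
      coef = np.polyfit(x, y, deg=1)
      residuals = y - np.polyval(coef, x)
      kde = gaussian_kde(residuals)

      grid = np.linspace(-2.0, 2.0, 5)
      print(kde(grid))                     # estimated error density on a grid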

  8. A Systematic Approach for Model-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.
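
    The underdetermined estimation setting can be sketched generically as follows; the sensitivity matrix, covariances, and dimensions are random stand-ins rather than an engine model, and the tuner-selection routine itself is not reproduced.

      import numpy as np

      # MAP estimation with more unknown "health parameters" (6) than sensed
      # measurements (3), regularized by the prior covariance P.
      rng = np.random.default_rng(3)
      n_params, n_meas = 6, 3
      H = rng.normal(size=(n_meas, n_params))  # measurement sensitivities
      P = np.eye(n_params)                     # prior parameter covariance
      R = 0.01 * np.eye(n_meas)                # measurement noise covariance

      h_true = rng.normal(size=n_params)
      y = H @ h_true + rng.multivariate_normal(np.zeros(n_meas), R)

      # MAP estimate for a zero-mean Gaussian prior:
      K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
      h_map = K @ y

      # Theoretical error covariance; its trace is the mean squared
      # estimation error that a tuner-selection routine would minimize.
      cov = (np.eye(n_params) - K @ H) @ P
      print("MAP estimate:", np.round(h_map, 3))
      print("MSE (trace of error covariance):", np.trace(cov))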

  9. Analysis of systematic errors of the ASM/RXTE monitor and GT-48 γ-ray telescope

    NASA Astrophysics Data System (ADS)

    Fidelis, V. V.

    2011-06-01

    The observational data concerning variations of the light curves of supernova remnants (the Crab Nebula, Cassiopeia A, Tycho Brahe, and the pulsar Vela) on a 14-day scale that may be attributed to systematic errors of the ASM/RXTE monitor are presented. The experimental systematic errors of the GT-48 γ-ray telescope in the mono mode of operation were also determined. For this, the observational data of TeV J2032+4130 (Cyg γ-2, according to the Crimean version) were used, and the stationary nature of its γ-ray emission was confirmed by long-term observations performed with HEGRA and MAGIC. The results of this research allow us to draw the following conclusions: (1) light curves of supernova remnants averaged over long observing periods have false statistically significant flux variations, (2) the level of systematic errors is proportional to the registered flux and decreases with increasing temporal scale of averaging, (3) the light curves of sources may be modulated by the year period, and (4) the systematic errors of the GT-48 γ-ray telescope, in total, caused by observations in the mono mode and data processing with the stereo algorithm, come to 0.12 min^-1.

  10. Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics

    NASA Technical Reports Server (NTRS)

    Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation, by estimating numerical approximation error, computational model induced errors, and the uncertainties contained in the mathematical models, so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.

  11. Systematic errors in ground heat flux estimation and their correction

    E-print Network

    Gentine, Pierre

    Incoming radiation forcing at the land surface is partitioned among the components of the surface energy balance in varying proportions depending on the time scale of the forcing. Based on a land-atmosphere analytic continuum ...

  12. Spatial Variability and Error Limits of Reference Evapotranspiration Estimates

    NASA Astrophysics Data System (ADS)

    Ley, Thomas Ward

    1995-11-01

    The overall objective of this research was to develop a methodology for assessing the spatial variability and error limits of reference evapotranspiration (ET_r) estimates from a weather station network. Likely errors introduced into ET_r estimates due to sensor measurement variability and nonstandard site conditions were investigated. Temporal and spatial correlation structures of the weather variables used to compute ET_r, and of ET_r data collected over a three-year period by an operational agricultural weather station network, were studied. Results indicated that ET_r errors are minimal compared to inherent model error when sensors (of the type studied) are maintained and calibrated to operate with measurement errors that are within the limits of the manufacturer's specifications of accuracy. Sensor evaluation studies showed that new and recalibrated/reconditioned sensors often were operating within the limits of accuracy specifications. A methodology for adjusting maximum, minimum, and dewpoint temperature data collected at arid weather measurement sites to reflect the conditions of an irrigated measurement site was developed. Positive bias in air temperatures and negative bias in dewpoint temperatures measured at arid sites resulted in positive bias in ET_r, as much as 17% greater (approximately 1.4 mm d^-1) in July and August at some arid sites as compared to an irrigated reference site. Weather data adjustment algorithms based on daily energy and soil water balances at the dry sites were developed. These provided effective removal of bias in the dry station data and ET_r estimates. Univariate autoregressive models of daily weather parameters (maximum and minimum temperature, solar radiation, dewpoint temperature, and wind speed) and of daily ET_r were developed using standard time series analysis approaches. These models can be used to forecast/estimate weather variables or ET_r at the weather station sites studied. Space-time models were developed for each variable as multivariate AR(1) processes, having lag-one temporal correlation and lag-zero and lag-one spatial correlation. The lag-zero cross-correlation matrices of standardized zero-mean, spatially and seasonally detrended residuals of each variable were analyzed with interstation distances and with contoured isoline plots imposed on the network area. This revealed that geographically nearest neighbor stations were not always the most correlated. Interpolation of observations between and among network sites should therefore include consideration of spatial correlation structure as well as physical proximity. The multivariate AR(1) models were used in state-space representation with the Kalman filter to determine statistically optimal estimates of the true state of a network variable at a given time. The effects of the maximum and minimum weather data measurement errors found during the course of this research were evaluated in the Kalman filter. Kalman filter ET_r estimation error was equivalent to the reported error of the Penman-Wright ET_r model. The Kalman filter was shown to be an effective spatial interpolation technique for ET_r. Using stations that were best correlated to a suppressed fictitious point interpolation site was more efficient than using stations that were geographically closest; i.e., only the three best correlated stations provided the same ET_r estimation error as was obtained using the six nearest neighbors.

  13. GPS/DR Error Estimation for Autonomous Vehicle Localization

    PubMed Central

    Lee, Byung-Hyun; Song, Jong-Hwa; Im, Jun-Hyuck; Im, Sung-Hyuck; Heo, Moon-Beom; Jee, Gyu-In

    2015-01-01

    Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that the positioning achieved accuracy at the sub-meter level. PMID:26307997

  14. GPS/DR Error Estimation for Autonomous Vehicle Localization.

    PubMed

    Lee, Byung-Hyun; Song, Jong-Hwa; Im, Jun-Hyuck; Im, Sung-Hyuck; Heo, Moon-Beom; Jee, Gyu-In

    2015-01-01

    Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that the positioning achieved accuracy at the sub-meter level. PMID:26307997

  15. Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons.

    PubMed

    Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit

    2013-08-01

    In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π. PMID:24027379

  16. Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons

    PubMed Central

    Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit

    2012-01-01

    In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π. PMID:24027379

  17. An analysis of errors in special sensor microwave imager evaporation estimates over the global oceans

    NASA Technical Reports Server (NTRS)

    Esbensen, S. K.; Chelton, D. B.; Vickers, D.; Sun, J.

    1993-01-01

    The method proposed by Liu (1984) is used to estimate monthly averaged evaporation over the global oceans from 1 yr of special sensor microwave imager (SSM/I) data. Intercomparisons involving SSM/I and in situ data are made over a wide range of oceanic conditions during August 1987 and February 1988 to determine the source of errors in the evaporation estimates. The most significant spatially coherent evaporation errors are found to come from estimates of near-surface specific humidity, q. Systematic discrepancies of over 2 g/kg are found in the tropics, as well as in the middle and high latitudes. The q errors are partitioned into contributions from the parameterization of q in terms of the columnar water vapor, i.e., the Liu q/W relationship, and from the retrieval algorithm for W. The effects of W retrieval errors are found to be smaller over most of the global oceans and due primarily to the implicitly assumed vertical structures of temperature and specific humidity on which the physically based SSM/I retrievals of W are based.

  18. DtaRefinery: a software tool for elimination of systematic errors from parent ion mass measurements in tandem mass spectra datasets

    SciTech Connect

    Petyuk, Vladislav A.; Mayampurath, Anoop M.; Monroe, Matthew E.; Polpitiya, Ashoka D.; Purvine, Samuel O.; Anderson, Gordon A.; Camp, David G.; Smith, Richard D.

    2009-12-16

    Hybrid two-stage mass spectrometers capable of both highly accurate mass measurement and MS/MS fragmentation have become widely available in recent years and have allowed for significantly better discrimination between true and false MS/MS peptide identifications by applying relatively narrow windows for maximum allowable deviations for parent ion mass measurements. To fully gain the advantage of highly accurate parent ion mass measurements, it is important to limit systematic mass measurement errors. The DtaRefinery software tool can correct systematic errors in parent ion masses by reading a set of fragmentation spectra, searching for MS/MS peptide identifications, then fitting a model that can estimate systematic errors, and removing them. This results in a new fragmentation spectrum file with updated parent ion masses.

  19. Effects of systematic phase errors on optimized quantum random-walk search algorithm

    E-print Network

    Yu-Chao Zhang; Wan-Su Bao; Xiang Wang; Xiang-Qun Fu

    2015-01-09

    This paper investigates how systematic errors in phase inversions affect the success rate and the number of iterations in the optimized quantum random-walk search algorithm. Through a geometric description of this algorithm, the model of the algorithm with phase errors is established, and the relationship between the success rate of the algorithm, the database size, the number of iterations, and the phase error is depicted. For a database of a given size, we give both the maximum success rate of the algorithm and the required number of iterations when the algorithm is in the presence of phase errors. Analysis and numerical simulations show that the optimized quantum random-walk search algorithm is more robust than Grover's algorithm.

  20. Augmented GNSS Differential Corrections Minimum Mean Square Error Estimation Sensitivity to Spatial Correlation Modeling Errors

    PubMed Central

    Kassabian, Nazelie; Presti, Letizia Lo; Rispoli, Francesco

    2014-01-01

    Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lower railway track equipment and maintenance costs; this is a priority for sustaining the investments needed to modernize the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough ratios of correlation distance to Reference Station (RS) separation distance, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold. PMID:24922454
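
    A compact sketch of this experiment, with assumed station geometry and noise levels, is given below. Replacing the correlation distance used to build the estimator with a mismatched value would reproduce the paper's sensitivity question: how much estimation quality is lost when the modeled correlation distance is wrong.

      import numpy as np

      # True DCs follow a Gauss-Markov (exponential) spatial correlation;
      # the LMMSE estimator shrinks noisy measurements using that model.
      rng = np.random.default_rng(4)
      positions_km = np.array([0.0, 25.0, 60.0, 110.0, 150.0])
      d_corr_km = 80.0                    # assumed correlation distance
      sigma_dc, sigma_noise = 1.0, 0.5    # DC and noise std (m)

      dist = np.abs(positions_km[:, None] - positions_km[None, :])
      C = sigma_dc**2 * np.exp(-dist / d_corr_km)

      true_dc = rng.multivariate_normal(np.zeros(len(positions_km)), C)
      y = true_dc + rng.normal(0.0, sigma_noise, len(positions_km))

      # LMMSE estimate of the true DCs from the noisy vector y
      W = C @ np.linalg.inv(C + sigma_noise**2 * np.eye(len(y)))
      dc_lmmse = W @ y

      print("raw measurement error:", np.linalg.norm(y - true_dc))
      print("LMMSE estimate error: ", np.linalg.norm(dc_lmmse - true_dc))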

  1. Augmented GNSS differential corrections minimum mean square error estimation sensitivity to spatial correlation modeling errors.

    PubMed

    Kassabian, Nazelie; Lo Presti, Letizia; Rispoli, Francesco

    2014-01-01

    Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lower railway track equipment and maintenance costs; this is a priority for sustaining the investments needed to modernize the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough ratios of correlation distance to Reference Station (RS) separation distance, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold. PMID:24922454

  2. SU-F-BRD-03: Determination of Plan Robustness for Systematic Setup Errors Using Trilinear Interpolation

    SciTech Connect

    Fix, MK; Volken, W; Frei, D; Terribilini, D; Dal Pra, A; Schmuecking, M; Manser, P

    2014-06-15

    Purpose: Treatment plan evaluations in radiotherapy currently ignore the dosimetric impact of setup uncertainties. The determination of plan robustness for systematic errors is rather computationally intensive. This work investigates interpolation schemes to quantify the robustness of treatment plans for systematic errors in terms of efficiency and accuracy. Methods: The impact of systematic errors on dose distributions for patient treatment plans is determined by using the Swiss Monte Carlo Plan (SMCP). Errors in all translational directions are considered, ranging from -3 to +3 mm in 1 mm steps. For each systematic error a full MC dose calculation is performed, leading to 343 dose calculations, used as benchmarks. The interpolation uses only a subset of the 343 calculations, namely 9, 15 or 27, and determines all dose distributions by trilinear interpolation. This procedure is applied to a prostate and a head and neck case using Volumetric Modulated Arc Therapy with 2 arcs. The relative differences of the dose volume histograms (DVHs) of the target and the organs at risk are compared. Finally, the interpolation schemes are used to compare the robustness of 4-arc versus 2-arc head and neck treatment plans. Results: Relative local differences of the DVHs increase with decreasing number of dose calculations used in the interpolation. The mean deviations are <1%, 3.5% and 6.5% for subsets of 27, 15 and 9 dose calculations, respectively. Thereby the dose computation times are reduced by factors of 13, 25 and 43, respectively. The comparison of the 4-arc versus the 2-arc plan shows a decrease in robustness; however, this is outweighed by the dosimetric improvements. Conclusion: The results of this study suggest that the use of trilinear interpolation to determine the robustness of treatment plans can remarkably reduce the number of dose calculations. This work was supported by Varian Medical Systems.
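
    The interpolation step itself is straightforward, as the following sketch shows: 27 "expensive" dose calculations on a corner grid are reused to predict all 343 shifts by trilinear interpolation. The analytic dose metric below is a made-up stand-in for the Monte Carlo dose engine.

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      def dose_metric(dx, dy, dz):
          """Stand-in for an expensive Monte Carlo dose calculation."""
          return 70.0 - 0.3 * dx**2 - 0.2 * dy**2 - 0.1 * dz**2 + 0.05 * dx * dy

      coarse = np.array([-3.0, 0.0, 3.0])   # 3 x 3 x 3 = 27 calculations
      grid = np.array([[dose_metric(x, y, z) for z in coarse]
                       for x in coarse for y in coarse]).reshape(3, 3, 3)
      interp = RegularGridInterpolator((coarse, coarse, coarse), grid)

      # All 343 shifts from -3 to +3 mm in 1 mm steps, by interpolation only
      fine = np.arange(-3.0, 3.1, 1.0)
      pts = np.array(np.meshgrid(fine, fine, fine,
                                 indexing="ij")).reshape(3, -1).T
      approx = interp(pts)
      exact = np.array([dose_metric(*p) for p in pts])
      rel = np.abs(approx - exact) / exact
      print("max relative interpolation error: %.3f%%" % (100 * rel.max()))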

  3. CADNA: a library for estimating round-off error propagation

    NASA Astrophysics Data System (ADS)

    Jézéquel, Fabienne; Chesneaux, Jean-Marie

    2008-06-01

    The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. With CADNA the numerical quality of any simulation program can be controlled. Furthermore, by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. This paper describes the features of the CADNA library and shows how to interpret the information it provides concerning round-off error propagation in a code. Program summary: Program title: CADNA. Catalogue identifier: AEAT_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 53 420. No. of bytes in distributed program, including test data, etc.: 566 495. Distribution format: tar.gz. Programming language: Fortran. Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM. Operating system: LINUX, UNIX. Classification: 4.14, 6.5, 20. Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time. Solution method: The CADNA library [1] implements Discrete Stochastic Arithmetic [2-4], which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode, generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic. Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. Furthermore, array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays. Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected. References: [1] The CADNA library, URL address: http://www.lip6.fr/cadna. [2] J.-M. Chesneaux, L'arithmétique Stochastique et le Logiciel CADNA, Habilitation à diriger des recherches, Université Pierre et Marie Curie, Paris, 1995. [3] J. Vignes, A stochastic arithmetic for reliable scientific computation, Math. Comput. Simulation 35 (1993) 233-261. [4] J. Vignes, Discrete stochastic arithmetic for validating results of numerical software, Numer. Algorithms 37 (2004) 377-390.
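
    CADNA itself is a Fortran library, but the random-rounding idea behind Discrete Stochastic Arithmetic can be imitated in a few lines of Python. This is a crude illustration of the principle, not the library's implementation.

      import math
      import random

      def r(x):
          """Perturb x by plus or minus one relative ulp, at random."""
          return x * (1.0 + random.choice((-1.0, 1.0)) * 2.0**-52)

      def cancellation_prone(n):
          # (1 + 1/n)**n - e suffers catastrophic cancellation for large n
          return r(r(r(1.0 + 1.0 / n) ** n) - math.e)

      # Run the randomized computation several times; the spread of the
      # results estimates how many significant digits survive round-off.
      samples = [cancellation_prone(10**8) for _ in range(20)]
      mean = sum(samples) / len(samples)
      spread = max(samples) - min(samples)
      digits = max(0.0, math.log10(abs(mean) / spread)) if spread else 15.0
      print(f"mean = {mean:.2e}, ~{digits:.1f} significant digits")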

  4. Local error estimates for discontinuous solutions of nonlinear hyperbolic equations

    NASA Technical Reports Server (NTRS)

    Tadmor, Eitan

    1989-01-01

    Let u(x,t) be the possibly discontinuous entropy solution of a nonlinear scalar conservation law with smooth initial data. Suppose u_ε(x,t) is the solution of an approximate viscosity regularization, where ε > 0 is the small viscosity amplitude. It is shown that by post-processing the small viscosity approximation u_ε, pointwise values of u and its derivatives can be recovered with an error as close to ε as desired. The analysis relies on the adjoint problem of the forward error equation, which in this case amounts to a backward linear transport equation with discontinuous coefficients. The novelty of this approach is to use a (generalized) E-condition of the forward problem in order to deduce a W^{1,∞} energy estimate for the discontinuous backward transport equation; this, in turn, leads one to an ε-uniform estimate on moments of the error u_ε - u. This approach does not follow the characteristics and, therefore, applies mutatis mutandis to other approximate solutions such as E-difference schemes.

  5. Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters

    ERIC Educational Resources Information Center

    Hoshino, Takahiro; Shigemasu, Kazuo

    2008-01-01

    The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…

  6. Estimation of the error for small-sample optimal binary filter design using prior knowledge 

    E-print Network

    Sabbagh, David L

    1999-01-01

    Optimal binary filters estimate an unobserved ideal quantity from observed quantities. Optimality is with respect to some error criterion, which is usually mean absolute error (MAE), or equivalently mean square error, for binary values. Both...

  7. Improved Soundings and Error Estimates using AIRS/AMSU Data

    NASA Technical Reports Server (NTRS)

    Susskind, Joel

    2006-01-01

    AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next generation polar orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU are twice daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud related parameters including OLR. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1 K, and layer precipitable water with an rms error of 20 percent, in cases with up to 80 percent effective cloud cover. The basic theory used to analyze AIRS/AMSU/HSB data in the presence of clouds, called the at-launch algorithm, and a post-launch algorithm which differed only in minor details from the at-launch algorithm, have been described previously. The post-launch algorithm, referred to as AIRS Version 4.0, has been used by the Goddard DAAC to analyze and distribute AIRS retrieval products. In this paper we show progress made toward the AIRS Version 5.0 algorithm, which will be used by the Goddard DAAC starting late in 2006. A new methodology has been developed to provide accurate case-by-case error estimates for retrieved geophysical parameters and for the channel-by-channel cloud cleared radiances used to derive the geophysical parameters from the AIRS/AMSU observations. These error estimates are in turn used for quality control of the derived geophysical parameters and clear column radiances. Improvements made to the retrieval algorithm since Version 4.0 are described, as well as results comparing Version 5.0 retrieval accuracy and spatial coverage with those obtained using Version 4.0.

  8. Error Estimation of An Ensemble Statistical Seasonal Precipitation Prediction Model

    NASA Technical Reports Server (NTRS)

    Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Gui-Long

    2001-01-01

    This NASA Technical Memorandum describes an optimal ensemble canonical correlation forecasting model for seasonal precipitation. Each individual forecast is based on canonical correlation analysis (CCA) in the spectral spaces whose bases are empirical orthogonal functions (EOF). The optimal weights in the ensemble forecasting crucially depend on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is also made using the spectral method. The error is decomposed onto EOFs of the predictand and decreases linearly with the correlation between the predictor and predictand. Since the new CCA scheme is derived for continuous fields of predictor and predictand, an area factor is automatically included. Thus our model is an improvement of the spectral CCA scheme of Barnett and Preisendorfer. The improvements include (1) the use of the area factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to seasonal forecasting of the United States (US) precipitation field. The predictor is the sea surface temperature (SST). The US Climate Prediction Center's reconstructed SST is used as the predictor's historical data. The US National Center for Environmental Prediction's optimally interpolated precipitation (1951-2000) is used as the predictand's historical data. Our forecast experiments show that the new ensemble canonical correlation scheme yields reasonable forecasting skill. For example, when using September-October-November SST to predict the next season's December-January-February precipitation, the spatial pattern correlations between the observed and predicted fields are positive in 46 of the 50 years of experiments. The positive correlations are close to or greater than 0.4 in 29 years, which indicates excellent performance of the forecasting model. The forecasting skill can be further enhanced when several predictors are used.
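
    The role of the individual mean square errors in the optimal ensemble can be illustrated with generic inverse-variance weighting. The numbers below are made up, and the memorandum's scheme operates on EOF spectral components rather than on scalars.

      import numpy as np

      forecasts = np.array([1.2, 0.9, 1.4])   # individual forecasts
      mse = np.array([0.30, 0.15, 0.45])      # their estimated MSEs

      # Optimal weights for independent errors are inversely proportional
      # to the error variances; the combined MSE is the harmonic-sum value.
      w = (1.0 / mse) / np.sum(1.0 / mse)
      ensemble = w @ forecasts
      ensemble_mse = 1.0 / np.sum(1.0 / mse)

      print("weights:", np.round(w, 3))
      print("ensemble forecast:", round(ensemble, 3),
            " ensemble MSE:", round(ensemble_mse, 3))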

  9. Derivation and Application of a Global Albedo yielding an Optical Brightness To Physical Size Transformation Free of Systematic Errors

    NASA Technical Reports Server (NTRS)

    Mulrooney, Dr. Mark K.; Matney, Dr. Mark J.

    2007-01-01

    Orbital object data acquired via optical telescopes can play a crucial role in accurately defining the space environment. Radar systems probe the characteristics of small debris by measuring the reflected electromagnetic energy from an object of the same order of size as the wavelength of the radiation. This signal is affected by electrical conductivity of the bulk of the debris object, as well as its shape and orientation. Optical measurements use reflected solar radiation with wavelengths much smaller than the size of the objects. Just as with radar, the shape and orientation of an object are important, but we only need to consider the surface electrical properties of the debris material (i.e., the surface albedo), not the bulk electromagnetic properties. As a result, these two methods are complementary in that they measure somewhat independent physical properties to estimate the same thing, debris size. Short arc optical observations such as are typical of NASA's Liquid Mirror Telescope (LMT) give enough information to estimate an Assumed Circular Orbit (ACO) and an associated range. This information, combined with the apparent magnitude, can be used to estimate an "absolute" brightness (scaled to a fixed range and phase angle). This absolute magnitude is what is used to estimate debris size. However, the shape and surface albedo effects make the size estimates subject to systematic and random errors, such that it is impossible to ascertain the size of an individual object with any certainty. However, as has been shown with radar debris measurements, that does not preclude the ability to estimate the size distribution of a number of objects statistically. After systematic errors have been eliminated (range errors, phase function assumptions, photometry) there remains a random geometric albedo distribution that relates object size to absolute magnitude. Measurements by the LMT of a subset of tracked debris objects with sizes estimated from their radar cross sections indicate that the random variations in the albedo follow a log-normal distribution quite well. In addition, this distribution appears to be independent of object size over a considerable range in size. Note that this relation appears to hold for debris only, where the shapes and other properties are not primarily the result of human manufacture, but of random processes. With this information in hand, it now becomes possible to estimate the actual size distribution we are sampling from. We have identified two characteristics of the space debris population that make this process tractable and by extension have developed a methodology for performing the transformation.

  10. Treatment of systematic errors in the processing of wide angle sonar sensor data for robotic navigation

    SciTech Connect

    Beckerman, M.; Oblow, E.M.

    1988-04-01

    A methodology has been developed for the treatment of systematic errors which arise in the processing of sparse sensor data. We present a detailed application of this methodology to the construction of navigation maps from wide-angle sonar sensor data for use in autonomous robotic navigation. In the methodology we introduce a four-valued labelling scheme and a simple logic for label combination. The four labels, conflict, occupied, empty and unknown, are used to mark the cells of the navigation maps; the logic allows for the rapid updating of these maps as new information is acquired. The systematic errors are treated by relabelling conflicting pixel assignments. Most of the new labels are obtained from analyses of the characteristic patterns of conflict which arise during the information processing. The remaining labels are determined by imposing an elementary consistent-labelling condition. 26 refs., 9 figs.

  11. Verification of unfold error estimates in the UFO code

    SciTech Connect

    Fehl, D.L.; Biggs, F.

    1996-07-01

    Spectral unfolding is an inverse mathematical operation which attempts to obtain spectral source information from a set of tabulated response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the UFO (UnFold Operator) code. In addition to an unfolded spectrum, UFO also estimates the unfold uncertainty (error) induced by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). 100 random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums.
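
    The Monte Carlo estimate described above can be sketched as follows, with a simple regularized least-squares stand-in for the unfold operator and illustrative response functions; this is not the UFO algorithm itself.

      import numpy as np

      rng = np.random.default_rng(5)
      n_chan, n_bins = 8, 12
      energies = np.linspace(1.0, 30.0, n_bins)        # keV bin centers

      # Overlapping Gaussian responses and a 10 keV blackbody-like source
      centers = np.linspace(2.0, 28.0, n_chan)[:, None]
      R = np.exp(-0.5 * ((energies[None, :] - centers) / 5.0) ** 2)
      source = energies**2 * np.exp(-energies / 10.0)
      data0 = R @ source

      def unfold(data, lam=1e-3):
          """Regularized least squares, standing in for the unfold code."""
          A = R.T @ R + lam * np.eye(n_bins)
          return np.linalg.solve(A, R.T @ data)

      # 100 random data sets with 5% Gaussian imprecision, as in the study
      unfolds = np.array([unfold(data0 * (1 + 0.05 * rng.normal(size=n_chan)))
                          for _ in range(100)])
      print("per-bin unfold uncertainty:", np.round(unfolds.std(axis=0), 3))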

  12. A constant altitude flight survey method for mapping atmospheric ambient pressures and systematic radar errors

    NASA Technical Reports Server (NTRS)

    Larson, T. J.; Ehernberger, L. J.

    1985-01-01

    The flight test technique described uses controlled survey runs to determine horizontal atmospheric pressure variations and systematic altitude errors that result from space positioning measurements. The survey data can be used not only for improved air data calibrations, but also for studies of atmospheric structure and space positioning accuracy performance. The examples presented cover a wide range of radar tracking conditions for both subsonic and supersonic flight to an altitude of 42,000 ft.

  13. Practical Aspects of the Equation-Error Method for Aircraft Parameter Estimation

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene a.

    2006-01-01

    Various practical aspects of the equation-error approach to aircraft parameter estimation were examined. The analysis was based on simulated flight data from an F-16 nonlinear simulation, with realistic noise sequences added to the computed aircraft responses. This approach exposes issues related to the parameter estimation techniques and results, because the true parameter values are known for simulation data. The issues studied include differentiating noisy time series, maximum likelihood parameter estimation, biases in equation-error parameter estimates, accurate computation of estimated parameter error bounds, comparisons of equation-error parameter estimates with output-error parameter estimates, analyzing data from multiple maneuvers, data collinearity, and frequency-domain methods.

  14. Systematic error in mechanical measures of damage during four-point bending fatigue of cortical bone.

    PubMed

    Landrigan, Matthew D; Roeder, Ryan K

    2009-06-19

    Accumulation of fatigue microdamage in cortical bone specimens is commonly measured by a modulus or stiffness degradation after normalizing tissue heterogeneity by the initial modulus or stiffness of each specimen measured during a preloading step. In the first experiment, the initial specimen modulus defined using linear elastic beam theory (LEBT) was shown to be nonlinearly dependent on the preload level, which subsequently caused systematic error in the amount and rate of damage accumulation measured by the LEBT modulus degradation. Therefore, the secant modulus is recommended for measurements of the initial specimen modulus during preloading. In the second experiment, different measures of mechanical degradation were directly compared and shown to result in widely varying estimates of damage accumulation during fatigue. After loading to 400,000 cycles, the normalized LEBT modulus decreased by 26% and the creep strain ratio decreased by 58%, but the normalized secant modulus experienced no degradation and histology revealed no significant differences in microcrack density. The LEBT modulus was shown to include the combined effect of both elastic (recovered) and creep (accumulated) strain. Therefore, at minimum, both the secant modulus and creep should be measured throughout a test to most accurately indicate damage accumulation and account for different damage mechanisms. Histology revealed indentation of tissue adjacent to roller supports, with significant sub-surface damage beneath large indentations, accounting for 22% of the creep strain on average. The indentation of roller supports resulted in inflated measures of the LEBT modulus degradation and creep. The results of this study suggest that investigations of fatigue microdamage in cortical bone should avoid the use of four-point bending unless no other option is possible. PMID:19394019

  15. Richardson Extrapolation Based Error Estimation for Stochastic Kinetic Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Cartwright, Keigh

    2014-10-01

    To have a high degree of confidence in simulations one needs code verification, validation, solution verification and uncertainty quantification. This talk will focus on numerical error estimation for stochastic kinetic plasma simulations using the Particle-In-Cell (PIC) method and how it impacts code verification and validation. A technique is developed to determine the fully converged solution, with error bounds, from the stochastic output of a Particle-In-Cell code with multiple convergence parameters (e.g., Δt, Δx, and macro particle weight). The core of this method is a multi-parameter regression based on a second-order error convergence model with arbitrary convergence rates. Stochastic uncertainties in the data set are propagated through the model using standard bootstrapping on redundant data sets, while a suite of nine regression models introduces uncertainties in the fitting process. These techniques are demonstrated on a Vlasov-Poisson Child-Langmuir diode, the relaxation of an electron distribution to a Maxwellian due to collisions, and undriven sheaths and pre-sheaths. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
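
    A reduced sketch of the regression-plus-bootstrap idea follows, with one convergence parameter instead of several and a single power-law error model instead of the suite of nine; all values are assumed.

      import numpy as np
      from scipy.optimize import curve_fit

      def model(h, q0, c, p):
          # Converged value q0 plus an error term with arbitrary rate p
          return q0 + c * h**p

      rng = np.random.default_rng(7)
      h = np.repeat([0.4, 0.2, 0.1, 0.05], 6)   # redundant runs per h
      q = model(h, 1.0, 0.8, 2.0) + 0.003 * rng.normal(size=h.size)

      # Bootstrap over the redundant data set to propagate stochastic noise
      q0_samples = []
      for _ in range(200):
          idx = rng.integers(0, h.size, h.size)
          popt, _ = curve_fit(model, h[idx], q[idx], p0=[1.0, 1.0, 2.0])
          q0_samples.append(popt[0])

      print("extrapolated solution: %.4f +/- %.4f"
            % (np.mean(q0_samples), np.std(q0_samples)))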

  16. Real-Time Parameter Estimation Using Output Error

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.

    2014-01-01

    Output-error parameter estimation, normally a post-flight batch technique, was applied to real-time dynamic modeling problems. Variations on the traditional algorithm were investigated with the goal of making the method suitable for operation in real time. Implementation recommendations are given that are dependent on the modeling problem of interest. Application to flight test data showed that accurate parameter estimates and uncertainties for the short-period dynamics model were available every 2 s using time domain data, or every 3 s using frequency domain data. The data compatibility problem was also solved in real time, providing corrected sensor measurements every 4 s. If uncertainty corrections for colored residuals are omitted, this rate can be increased to every 0.5 s.

  17. Convergence and error estimation in free energy calculations using the weighted histogram analysis method.

    PubMed

    Zhu, Fangqiang; Hummer, Gerhard

    2012-02-01

    The weighted histogram analysis method (WHAM) has become the standard technique for the analysis of umbrella sampling simulations. In this article, we address the challenges (1) of obtaining fast and accurate solutions of the coupled nonlinear WHAM equations, (2) of quantifying the statistical errors of the resulting free energies, (3) of diagnosing possible systematic errors, and (4) of optimally allocating the computational resources. Traditionally, the WHAM equations are solved by a fixed-point direct iteration method, despite poor convergence and possible numerical inaccuracies in the solutions. Here, we instead solve the mathematically equivalent problem of maximizing a target likelihood function, by using superlinear numerical optimization algorithms with a significantly faster convergence rate. To estimate the statistical errors in one-dimensional free energy profiles obtained from WHAM, we note that for densely spaced umbrella windows with harmonic biasing potentials, the WHAM free energy profile can be approximated by a coarse-grained free energy obtained by integrating the mean restraining forces. The statistical errors of the coarse-grained free energies can be estimated straightforwardly and then used for the WHAM results. A generalization to multidimensional WHAM is described. We also propose two simple statistical criteria to test the consistency between the histograms of adjacent umbrella windows, which help identify inadequate sampling and hysteresis in the degrees of freedom orthogonal to the reaction coordinate. Together, the estimates of the statistical errors and the diagnostics of inconsistencies in the potentials of mean force provide a basis for the efficient allocation of computational resources in free energy simulations. PMID:22109354
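
    To make the likelihood-maximization route concrete, here is a minimal sketch (not the authors' implementation) that minimizes the negative log-likelihood of the umbrella-sampling histograms over the log bin probabilities with a quasi-Newton optimizer; the binned formulation and the names n_ij, c_ij are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def wham_nll(g, n_ij, c_ij, N_i):
        """Negative log-likelihood of the histogram counts.
        g    : log unnormalized bin probabilities, shape (n_bins,)
        n_ij : counts, shape (n_windows, n_bins)
        c_ij : bias factors exp(-beta*U_i(x_j)), shape (n_windows, n_bins)
        N_i  : total counts per window."""
        log_p = g - np.logaddexp.reduce(g)        # normalized log probabilities
        log_Z = np.log(c_ij @ np.exp(log_p))      # window normalization constants
        return -(np.sum(n_ij * (log_p + np.log(c_ij))) - np.sum(N_i * log_Z))

    def solve_wham(n_ij, c_ij):
        N_i = n_ij.sum(axis=1)
        res = minimize(wham_nll, np.zeros(n_ij.shape[1]),
                       args=(n_ij, c_ij, N_i), method="L-BFGS-B")
        log_p = res.x - np.logaddexp.reduce(res.x)
        return -log_p     # free energy profile in kT units, up to a constant
    ```

    The normalization inside wham_nll removes the flat direction (shifting all of g by a constant), so the optimizer works on a well-behaved surface.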

  18. Systematic Biases in Parameter Estimation of Binary Black-Hole Mergers

    NASA Technical Reports Server (NTRS)

    Littenberg, Tyson B.; Baker, John G.; Buonanno, Alessandra; Kelly, Bernard J.

    2012-01-01

    Parameter estimation of binary-black-hole merger events in gravitational-wave data relies on matched filtering techniques, which, in turn, depend on accurate model waveforms. Here we characterize the systematic biases introduced in measuring astrophysical parameters of binary black holes by applying the currently most accurate effective-one-body templates to simulated data containing non-spinning numerical-relativity waveforms. For advanced ground-based detectors, we find that the systematic biases are well within the statistical error for realistic signal-to-noise ratios (SNR). These biases grow to be comparable to the statistical errors at high signal-to-noise ratios for ground-based instruments (SNR approximately 50) but never dominate the error budget. At the much larger signal-to-noise ratios expected for space-based detectors, these biases will become large compared to the statistical errors but are small enough (at most a few percent in the black-hole masses) that we expect they should not affect broad astrophysical conclusions that may be drawn from the data.

  19. A PRIORI ERROR ESTIMATES FOR NUMERICAL METHODS FOR SCALAR CONSERVATION LAWS.

    E-print Network

    This paper is the third of a series developing a general theory of a priori error estimates for numerical methods for scalar conservation laws. Keywords: a priori error estimates, irregular grids, monotone schemes, conservation laws, supraconvergence.

  20. Random and systematic measurement errors in acoustic impedance as determined by the transmission line method

    NASA Technical Reports Server (NTRS)

    Parrott, T. L.; Smith, C. D.

    1977-01-01

    The effect of random and systematic errors associated with the measurement of normal incidence acoustic impedance in a zero-mean-flow environment was investigated by the transmission line method. The influence of random measurement errors in the reflection coefficients and pressure minima positions was investigated by computing fractional standard deviations of the normalized impedance. Both the standard techniques of random process theory and a simplified technique were used. Over a wavelength range of 68 to 10 cm random measurement errors in the reflection coefficients and pressure minima positions could be described adequately by normal probability distributions with standard deviations of 0.001 and 0.0098 cm, respectively. An error propagation technique based on the observed concentration of the probability density functions was found to give essentially the same results but with a computation time of about 1 percent of that required for the standard technique. The results suggest that careful experimental design reduces the effect of random measurement errors to insignificant levels for moderate ranges of test specimen impedance component magnitudes. Most of the observed random scatter can be attributed to lack of control by the mounting arrangement over mechanical boundary conditions of the test sample.

  1. Removing the Noise and Systematics while Preserving the Signal - An Empirical Bayesian Approach to Kepler Light Curve Systematic Error Correction

    NASA Astrophysics Data System (ADS)

    Smith, Jeffrey C.; Stumpe, M. C.; Van Cleve, J.; Jenkins, J. M.; Barclay, T. S.; Fanelli, M. N.; Girouard, F.; Kolodziejczak, J.; McCauliff, S.; Morris, R. L.; Twicken, J. D.

    2012-05-01

    We present a Bayesian Maximum A Posteriori (MAP) approach to systematic error removal in Kepler photometric data where a subset of highly correlated and quiet stars is used to generate a cotrending basis vector set which is, in turn, used to establish a range of "reasonable" robust fit parameters. These robust fit parameters are then used to generate a "Bayesian Prior" and a "Bayesian Posterior" PDF (Probability Distribution Function). When maximized, the posterior PDF finds the best fit that simultaneously removes systematic effects while reducing the signal distortion and noise injection which commonly afflicts simple Least Squares (LS) fitting. A numerical and empirical approach is taken where the Bayesian Prior PDFs are generated from fits to the light curve distributions themselves versus an analytical approach, which uses a Gaussian fit to the Priors. Recent improvements to the algorithm are presented including entropy cleaning of basis vectors, better light curve normalization methods, application to short cadence data and a goodness metric which can be used to numerically evaluate the performance of the cotrending. The goodness metric can then be introduced into the merit function as a Lagrange multiplier and the fit iterated to improve performance. Funding for the Kepler Discovery Mission is provided by NASA's Science Mission Directorate.
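
    For the Gaussian special case mentioned above, the MAP fit has a closed form analogous to ridge regression. The sketch below is a minimal illustration under that Gaussian-prior assumption, not the Kepler pipeline's empirical-prior implementation; all names are hypothetical.

    ```python
    import numpy as np

    def map_cotrend(flux, basis, prior_mean, prior_cov, noise_var):
        """MAP estimate of cotrending coefficients theta for
        flux ~ N(basis @ theta, noise_var*I), theta ~ N(prior_mean, prior_cov).
        Maximizing the posterior yields regularized normal equations."""
        B = np.asarray(basis)                  # (n_cadences, n_vectors)
        Sinv = np.linalg.inv(prior_cov)
        A = B.T @ B / noise_var + Sinv
        b = B.T @ flux / noise_var + Sinv @ prior_mean
        theta = np.linalg.solve(A, b)
        return flux - B @ theta                # systematics-removed light curve
    ```

    Relative to an unconstrained least-squares fit, the prior term pulls the coefficients toward the ensemble-derived values, which is what limits signal distortion and noise injection.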

  2. A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2009-01-01

    A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations, and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented, and compared to those from an alternative sensor selection strategy.

  3. An examination of the southern California field test for the systematic accumulation of the optical refraction error in geodetic leveling.

    USGS Publications Warehouse

    Castle, R.O.; Brown, B.W., Jr.; Gilmore, T.D.; Mark, R.K.; Wilson, R.C.

    1983-01-01

    Appraisals of the two levelings that formed the southern California field test for the accumulation of the atmospheric refraction error indicate that random error and systematic error unrelated to refraction competed with the systematic refraction error and severely complicated any analysis of the test results. If the fewer than one-third of the sections that failed to meet second-order, class I standards are dropped, the divergence virtually disappears between the presumably more refraction-contaminated long-sight-length survey and the less contaminated short-sight-length survey. -Authors

  4. Mitigating systematic errors in angular correlation function measurements from wide field surveys

    NASA Astrophysics Data System (ADS)

    Morrison, C. B.; Hildebrandt, H.

    2015-12-01

    We present an investigation into the effects of survey systematics such as varying depth, point spread function size, and extinction on the galaxy selection and correlation in photometric, multi-epoch, wide area surveys. We take the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) as an example. Variations in galaxy selection due to systematics are found to cause density fluctuations of up to 10 per cent for some small fraction of the area for most galaxy redshift slices and as much as 50 per cent for some extreme cases of faint high-redshift samples. This results in correlations of galaxies against survey systematics of order ~1 per cent when averaged over the survey area. We present an empirical method for mitigating these systematic correlations from measurements of angular correlation functions using weighted random points. These weighted random catalogues are estimated from the observed galaxy overdensities by mapping these to survey parameters. We are able to model and mitigate the effect of systematic correlations, allowing for non-linear dependences of density on systematics. Applied to CFHTLenS, we find that the method reduces spurious correlations in the data by a factor of 2 for most galaxy samples and by as much as an order of magnitude in others. Such a treatment is particularly important for an unbiased estimation of very small correlation signals, e.g., from weak gravitational lensing magnification bias. We impose a criterion for using a galaxy sample in a magnification measurement: the majority of the systematic correlations show improvement and are less than 10 per cent of the expected magnification signal when combined in the galaxy cross-correlation. After correction, the galaxy samples in CFHTLenS satisfy this criterion for zphot < 0.9 and will be used in a future analysis of magnification.
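
    A minimal sketch of the weighted-random-points idea, assuming a single survey systematic sampled at galaxy and random positions (the CFHTLenS analysis maps several systematics jointly and allows for non-linear dependences):

    ```python
    import numpy as np

    def random_weights(gal_sys, ran_sys, n_bins=20):
        """Weight random points so they trace the systematic-induced density
        variations seen in the galaxies. gal_sys / ran_sys hold the value of
        one survey systematic (e.g. limiting depth) at galaxy / random points."""
        edges = np.quantile(ran_sys, np.linspace(0, 1, n_bins + 1))
        g, _ = np.histogram(gal_sys, bins=edges)
        r, _ = np.histogram(ran_sys, bins=edges)
        density = (g / g.sum()) / (r / r.sum())   # relative galaxy density per bin
        idx = np.clip(np.digitize(ran_sys, edges) - 1, 0, n_bins - 1)
        return density[idx]                       # weight for each random point
    ```

    The returned weights are then attached to the random catalogue in a standard pair-count correlation estimator, which suppresses the spurious density modulation.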

  5. Error estimation for CFD aeroheating prediction under rarefied flow condition

    NASA Astrophysics Data System (ADS)

    Jiang, Yazhong; Gao, Zhenxun; Jiang, Chongwen; Lee, Chunhian

    2014-12-01

    Both direct simulation Monte Carlo (DSMC) and Computational Fluid Dynamics (CFD) methods have become widely used for aerodynamic prediction when reentry vehicles experience different flow regimes during flight. The implementation of slip boundary conditions in the traditional CFD method under the Navier-Stokes-Fourier (NSF) framework can extend the validity of this approach further into the transitional regime, with the benefit that much less computational cost is demanded compared to DSMC simulation. Correspondingly, an increasing error arises in aeroheating calculations as the flow becomes more rarefied. To estimate the relative error of heat flux when applying this method to a rarefied flow in the transitional regime, a theoretical derivation is conducted and a dimensionless parameter ? is proposed by approximately analyzing the ratio of the second-order term to the first-order term in the heat flux expression of the Burnett equation. DSMC simulation of hypersonic flow over a cylinder in the transitional regime is performed to test the performance of parameter ?, compared with two other parameters, Kn? and Ma?Kn?.

  6. Sampling errors in rainfall estimates by multiple satellites

    NASA Technical Reports Server (NTRS)

    North, Gerald R.; Shen, Samuel S. P.; Upson, Robert

    1993-01-01

    This paper examines the sampling characteristics of combining data collected by several low-orbiting satellites attempting to estimate the space-time average of rain rates. The several satellites can have different orbital and swath-width parameters. The satellite overpasses are allowed to make partial coverage snapshots of the grid box with each overpass. Such partial visits are considered in an approximate way, letting each intersection area fraction of the grid box by a particular satellite swath be a random variable with mean and variance parameters computed from exact orbit calculations. The derivation procedure is based upon the spectral minimum mean-square error formalism introduced by North and Nakamoto. By using a simple parametric form for the spacetime spectral density, simple formulas are derived for a large number of examples, including the combination of the Tropical Rainfall Measuring Mission with an operational sun-synchronous orbiter. The approximations and results are discussed and directions for future research are summarized.

  7. Nonlocal treatment of systematic errors in the processing of sparse and incomplete sensor data

    SciTech Connect

    Beckerman, M.; Oblow, E.M.

    1988-03-01

    A methodology has been developed for the treatment of systematic errors which arise in the processing of sparse and incomplete sensor data. We present a detailed application of this methodology to the construction of navigation maps from wide-angle sonar sensor data acquired by the HERMIES IIB mobile robot. Our uncertainty approach is explicitly nonlocal. We use a binary labelling scheme and a simple logic for the rule of combination. We then correct erroneous interpretations of the data by analyzing pixel patterns of conflict and by imposing consistent labelling conditions. 9 refs., 6 figs.

  8. Pressure Measurements Using an Airborne Differential Absorption Lidar. Part 1; Analysis of the Systematic Error Sources

    NASA Technical Reports Server (NTRS)

    Flamant, Cyrille N.; Schwemmer, Geary K.; Korb, C. Laurence; Evans, Keith D.; Palm, Stephen P.

    1999-01-01

    Remote airborne measurements of the vertical and horizontal structure of the atmospheric pressure field in the lower troposphere are made with an oxygen differential absorption lidar (DIAL). A detailed analysis of this measurement technique is provided which includes corrections for imprecise knowledge of the detector background level, the oxygen absorption line parameters, and variations in the laser output energy. In addition, we analyze other possible sources of systematic errors including spectral effects related to aerosol and molecular scattering, interference by rotational Raman scattering, and interference by isotopic oxygen lines.

  9. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

  10. Observing transiting exoplanets: Removing systematic errors to constrain atmospheric chemistry and dynamics

    NASA Astrophysics Data System (ADS)

    Zellem, Robert Thomas

    2015-03-01

    The > 1500 confirmed exoplanets span a wide range of planetary masses (~1 M_Earth to 20 M_Jupiter), radii (~0.3 R_Earth to 2 R_Jupiter), semi-major axes (~0.005 to 100 AU), orbital periods (~0.3 to 1 x 10^5 days), and host star spectral types. The effects of a widely varying parameter space on a planetary atmosphere's chemistry and dynamics can be determined through transiting exoplanet observations. An exoplanet's atmospheric signal, either in absorption or emission, is on the order of 0.1%, which is dwarfed by telescope-specific systematic error sources of up to 60%. This thesis explores some of the major sources of error and their removal from space- and ground-based observations, specifically Spitzer/IRAC single-object photometry, IRTF/SpeX and Palomar/TripleSpec low-resolution single-slit near-infrared spectroscopy, and Kuiper/Mont4k multi-object photometry. The errors include pointing-induced uncertainties, airmass variations, seeing-induced signal loss, telescope jitter, and system variability. They are treated with detector-efficiency pixel-mapping, normalization routines, a principal component analysis, binning with the geometric mean in Fourier space, characterization by a comparison star, repeatability, and stellar monitoring to get to within a few times the photon noise limit. As a result, these observations provide strong measurements of an exoplanet's dynamical day-to-night heat transport, constrain its CH4 abundance, investigate emission mechanisms, and develop an observing strategy with smaller telescopes. The reduction methods presented here can also be applied to other existing and future platforms to identify and remove systematic errors. Until such sources of uncertainty are characterized using bright systems with large planetary signals for platforms such as the James Webb Space Telescope, one cannot resolve smaller objects with more subtle spectral features, as expected of exo-Earths.

  11. A Posteriori Error Estimation for a Nodal Method in Neutron Transport Calculations

    SciTech Connect

    Azmy, Y.Y.; Buscaglia, G.C.; Zamonsky, O.M.

    1999-11-03

    An a posteriori error analysis of the spatial approximation is developed for the one-dimensional Arbitrarily High Order Transport-Nodal method. The error estimator preserves the order of convergence of the method when the mesh size tends to zero with respect to the L^2 norm. It is based on the difference between two discrete solutions that are available from the analysis. The proposed estimator is decomposed into error indicators to allow the quantification of local errors. Some test problems with isotropic scattering are solved to compare the behavior of the true error to that of the estimated error.

  12. On GPS Water Vapour estimation and related errors

    NASA Astrophysics Data System (ADS)

    Antonini, Andrea; Ortolani, Alberto; Rovai, Luca; Benedetti, Riccardo; Melani, Samantha

    2010-05-01

    Water vapour (WV) is one of the most important constituents of the atmosphere: it plays a crucial role in the earth's radiation budget in the absorption processes of both the incoming shortwave and the outgoing longwave radiation, and it is one of the main greenhouse gases of the atmosphere, by far the one with the highest concentration. In addition, moisture and latent heat are transported through the WV phase, which is one of the driving factors of weather dynamics, feeding the evolution of cloud systems. An accurate, dense and frequent sampling of WV at different scales is consequently of great importance for climatology and meteorology research as well as operational weather forecasting. Since the development of satellite positioning systems, it has been clear that the troposphere and its WV content were a source of delay in the positioning signal, in other words a source of error in the positioning process or, in turn, a source of information in meteorology. The use of the GPS (Global Positioning System) signal for WV estimation has increased in recent years, starting from measurements collected by ground-fixed dual-frequency GPS geodetic stations. This technique for processing the GPS data is based on measuring the signal travel time along the satellite-receiver path and then processing the signal to filter out all delay contributions except the tropospheric one. Once the tropospheric delay is computed, the wet and dry parts are decoupled under some hypotheses on the tropospheric structure and/or through ancillary information on pressure and temperature. The processing chain normally aims at producing a vertical Integrated Water Vapour (IWV) value. The other, non-tropospheric delays are due to ionospheric free electrons, relativistic effects, multipath effects, transmitter and receiver instrumental biases, and signal bending. The total effect is a delay in the signal travel time with respect to the geometrical straight path. The GPS signal has the advantage of being nearly costless and practically continuous (every second) with respect to atmospheric dynamics. The spatial resolution is correlated with the number and spacing (i.e., density) of ground-fixed stations and in principle can be very high (it is certainly increasing). The problem can reside in the errors made in decoupling the various delay components and in the approximations assumed for the computation of the IWV from the wet delay component. Such errors are often "masked" by the use of the available software packages for GPS data processing; as a consequence, the errors associated with the final WV products are more easily obtained from a posteriori validation than derived from rigorous error propagation analyses. In this work we present a technique to compute the different components necessary to retrieve WV measurements from the GPS signal, with a critical analysis of all approximations and errors made in the processing procedure, also in view of the great opportunity that the European GALILEO system will bring to this field.
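
    As a concrete example of the last step of the chain, the sketch below converts a zenith wet delay (ZWD) to IWV using the weighted mean tropospheric temperature Tm. The refractivity constants are commonly quoted values, and their uncertainty is exactly the kind of contribution a rigorous error propagation has to track.

    ```python
    def zwd_to_iwv(zwd_m, tm_kelvin):
        """Convert GPS zenith wet delay (metres) to integrated water vapour
        (kg/m^2) using the weighted mean tropospheric temperature Tm (K).
        Refractivity constants are commonly quoted values; their uncertainty
        propagates directly into the IWV product."""
        Rv = 461.5                      # J/(kg K), gas constant of water vapour
        k2p = 22.1                      # K/hPa
        k3 = 3.739e5                    # K^2/hPa
        # the 1e-2 turns K/hPa into K/Pa; the 1e6 undoes the 1e-6 in the
        # refractivity definition N = 1e6 (n - 1)
        kappa = 1.0e6 / (Rv * (k2p + k3 / tm_kelvin) * 1.0e-2)
        return kappa * zwd_m            # ~154 * ZWD for Tm ~ 270 K

    print(zwd_to_iwv(0.15, 270.0))      # ~23 kg/m^2 for a 15 cm wet delay
    ```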

  13. Model Error Estimation for the CPTEC Eta Model

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; daSilva, Arlindo

    1999-01-01

    Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, with a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead-times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.

  14. Kriging regression of PIV data using a local error estimate

    NASA Astrophysics Data System (ADS)

    de Baar, Jouke H. S.; Percin, Mustafa; Dwight, Richard P.; van Oudheusden, Bas W.; Bijl, Hester

    2014-01-01

    The objective of the method described in this work is to provide an improved reconstruction of an original flow field from experimental velocity data obtained with particle image velocimetry (PIV) technique, by incorporating the local accuracy of the PIV data. The postprocessing method we propose is Kriging regression using a local error estimate (Kriging LE). In Kriging LE, each velocity vector must be accompanied by an estimated measurement uncertainty. The performance of Kriging LE is first tested on synthetically generated PIV images of a two-dimensional flow of four counter-rotating vortices with various seeding and illumination conditions. Kriging LE is found to increase the accuracy of interpolation to a finer grid dramatically at severe reflection and low seeding conditions. We subsequently apply Kriging LE for spatial regression of stereo-PIV data to reconstruct the three-dimensional wake of a flapping-wing micro air vehicle. By qualitatively comparing the large-scale vortical structures, we show that Kriging LE performs better than cubic spline interpolation. By quantitatively comparing the interpolated vorticity to unused measurement data at intermediate planes, we show that Kriging LE outperforms conventional Kriging as well as cubic spline interpolation.
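
    The essential difference from conventional Kriging is that each velocity vector's estimated measurement variance enters the diagonal of the covariance matrix. A minimal heteroscedastic sketch, with an assumed squared-exponential kernel and hypothetical names:

    ```python
    import numpy as np

    def kriging_le(X, y, sigma, Xq, length=1.0, amp=1.0):
        """Gaussian-process (Kriging) regression with a per-sample noise
        variance sigma_i^2 -- the 'local error estimate' on the diagonal.
        X: (n,d) measurement locations, y: (n,) velocities, sigma: (n,)
        estimated uncertainties, Xq: (m,d) query grid."""
        def rbf(A, B):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return amp * np.exp(-0.5 * d2 / length**2)
        K = rbf(X, X) + np.diag(sigma**2)       # noisy-data covariance
        alpha = np.linalg.solve(K, y)
        return rbf(Xq, X) @ alpha               # posterior mean on the grid
    ```

    Vectors with large estimated uncertainty (poor seeding, reflections) are automatically down-weighted, which is why the method degrades gracefully where plain interpolation does not.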

  15. Detecting Positioning Errors and Estimating Correct Positions by Moving Window

    PubMed Central

    Song, Ha Yoon; Lee, Jun Seok

    2015-01-01

    In recent times, improvements in smart mobile devices have led to new functionalities related to their embedded positioning abilities. Many related applications that use positioning data have been introduced and are widely being used. However, the positioning data acquired by such devices are prone to erroneous values caused by environmental factors. In this research, a detection algorithm is implemented to detect erroneous data over a continuous positioning data set with several options. Our algorithm is based on a moving window for speed values derived by consecutive positioning data. Both the moving average of the speed and standard deviation in a moving window compose a moving significant interval at a given time, which is utilized to detect erroneous positioning data along with other parameters by checking the newly obtained speed value. In order to fulfill the designated operation, we need to examine the physical parameters and also determine the parameters for the moving windows. Along with the detection of erroneous speed data, estimations of correct positioning are presented. The proposed algorithm first estimates the speed, and then the correct positions. In addition, it removes the effect of errors on the moving window statistics in order to maintain accuracy. Experimental verifications based on our algorithm are presented in various ways. We hope that our approach can help other researchers with regard to positioning applications and human mobility research. PMID:26624282
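
    A minimal sketch of the moving-window test, assuming the speed series has already been derived from consecutive positioning fixes; the window length and significance multiplier k are assumptions:

    ```python
    import numpy as np

    def flag_errors(speed, window=10, k=3.0):
        """Flag speed samples outside the moving significance interval
        mean +/- k*std over the previous `window` accepted samples; replace
        flagged values by the window mean so later statistics stay clean."""
        speed = np.asarray(speed, float)
        est = speed.copy()
        flags = np.zeros(len(speed), dtype=bool)
        for i in range(window, len(speed)):
            w = est[i - window:i]
            m, s = w.mean(), w.std()
            if abs(speed[i] - m) > k * s:
                flags[i] = True
                est[i] = m
        return flags, est
    ```

    Replacing a flagged value by the window mean mirrors the paper's point that detected errors must be kept out of the moving-window statistics to maintain accuracy.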

  16. Adaptive error covariances estimation methods for ensemble Kalman filters

    NASA Astrophysics Data System (ADS)

    Zhen, Yicun; Harlim, John

    2015-08-01

    This paper presents a computationally fast algorithm for estimating, both, the system and observation noise covariances of nonlinear dynamics, that can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method, to avoid an expensive computational cost in inverting error covariance matrices of product of innovation processes of different lags when the number of observations becomes large. When we use only product of innovation processes up to one-lag, the computational cost is indeed comparable to a recently proposed method by Berry-Sauer's. However, our method is more flexible since it allows for using information from product of innovation processes of more than one-lag. Extensive numerical comparisons between the proposed method and both the original Belanger's and Berry-Sauer's schemes are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs and 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger's scheme on low-dimensional problems and has a wider range of more accurate estimates compared to Berry-Sauer's method on L-96 example.

  17. Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances.

    PubMed

    Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip

    2015-01-01

    In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. A standard deviation of the measured distance is the accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5-50 m) because it is generally known that this accuracy cannot be increased by simply repeating the measurement because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments' results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%. PMID:26258777
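
    The paper derives instrument-specific correction functions from calibration measurements; a plausible generic form for a cyclic distance-meter error is a short Fourier series periodic in the fine-measurement unit length, fitted to bench residuals by least squares. The sketch below shows only that generic form: the unit length, harmonic count, and all names are assumptions, not values from the paper.

    ```python
    import numpy as np

    def fit_cyclic_correction(d, resid, unit=10.0, n_harm=2):
        """Fit residuals (measured minus reference distance) with a Fourier
        series periodic in an assumed EDM fine-measurement unit `unit` (m).
        Returns a callable correction to subtract from raw distances."""
        d, resid = np.asarray(d, float), np.asarray(resid, float)

        def design(dist):
            cols = [np.ones_like(dist)]
            for h in range(1, n_harm + 1):
                cols += [np.sin(2*np.pi*h*dist/unit), np.cos(2*np.pi*h*dist/unit)]
            return np.column_stack(cols)

        coef, *_ = np.linalg.lstsq(design(d), resid, rcond=None)
        return lambda dist: design(np.asarray(dist, float)) @ coef
    ```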

  18. Improving Photometry and Stellar Signal Preservation with Pixel-Level Systematic Error Correction

    NASA Technical Reports Server (NTRS)

    Kolodzijczak, Jeffrey J.; Smith, Jeffrey C.; Jenkins, Jon M.

    2013-01-01

    The Kepler Mission has demonstrated that excellent stellar photometric performance can be achieved using apertures constructed from optimally selected CCD pixels. The clever methods used to correct for systematic errors, while very successful, still have some limitations in their ability to extract long-term trends in stellar flux. They also leave poorly correlated bias sources, such as drifting moiré pattern, uncorrected. We will illustrate several approaches where applying systematic error correction algorithms to the pixel time series, rather than the co-added raw flux time series, provides significant advantages. Examples include spatially localized determination of time-varying moiré pattern biases, greater sensitivity to radiation-induced pixel sensitivity drops (SPSDs), improved precision of co-trending basis vectors (CBV), and a means of distinguishing the stellar variability from co-trending terms even when they are correlated. For the last item, the approach enables physical interpretation of appropriately scaled coefficients derived in the fit of pixel time series to the CBV as linear combinations of various spatial derivatives of the pixel response function (PRF). We demonstrate that the residuals of a fit of so-derived pixel coefficients to various PRF-related components can be deterministically interpreted in terms of physically meaningful quantities, such as the component of the stellar flux time series which is correlated with the CBV, as well as relative pixel gain, proper motion and parallax. The approach also enables us to parameterize and assess the limiting factors in the uncertainties in these quantities.

  19. Systematic Error in Hippocampal Volume Asymmetry Measurement is Minimal with a Manual Segmentation Protocol

    PubMed Central

    Rogers, Baxter P.; Sheffield, Julia M.; Luksik, Andrew S.; Heckers, Stephan

    2012-01-01

    Hemispheric asymmetry of hippocampal volume is a common finding that has biological relevance, including associations with dementia and cognitive performance. However, a recent study has reported the possibility of systematic error in measurements of hippocampal asymmetry by magnetic resonance volumetry. We manually traced the volumes of the anterior and posterior hippocampus in 40 healthy people to measure systematic error related to image orientation. We found a bias due to the side of the screen on which the hippocampus was viewed, such that hippocampal volume was larger when traced on the left side of the screen than when traced on the right (p = 0.05). However, this bias was smaller than the anatomical right > left asymmetry of the anterior hippocampus. We found right > left asymmetry of hippocampal volume regardless of image presentation (radiological versus neurological). We conclude that manual segmentation protocols can minimize the effect of image orientation in the study of hippocampal volume asymmetry, but our confirmation that such bias exists suggests strategies to avoid it in future studies. PMID:23248580

  20. The mathematical origins of the kinetic compensation effect: 2. The effect of systematic errors.

    PubMed

    Barrie, Patrick J

    2012-01-01

    The kinetic compensation effect states that there is a linear relationship between Arrhenius parameters ln A and E for a family of related processes. It is a widely observed phenomenon in many areas of science, notably heterogeneous catalysis. This paper explores mathematical, rather than physicochemical, explanations for the compensation effect in certain situations. Three different topics are covered theoretically and illustrated by examples. Firstly, the effect of systematic errors in experimental kinetic data is explored, and it is shown that these create apparent compensation effects. Secondly, analysis of kinetic data when the Arrhenius parameters depend on another parameter is examined. In the case of temperature programmed desorption (TPD) experiments when the activation energy depends on surface coverage, it is shown that a common analysis method induces a systematic error, causing an apparent compensation effect. Thirdly, the effect of analysing the temperature dependence of an overall rate of reaction, rather than a rate constant, is investigated. It is shown that this can create an apparent compensation effect, but only under some conditions. This result is illustrated by a case study for a unimolecular reaction on a catalyst surface. Overall, the work highlights the fact that, whenever a kinetic compensation effect is observed experimentally, the possibility of it having a mathematical origin should be carefully considered before any physicochemical conclusions are drawn. PMID:22080227
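
    The first topic is easy to reproduce numerically: a systematic thermometry offset turns a family of processes with a common true prefactor into an apparently compensated family. A minimal simulation sketch with illustrative values:

    ```python
    import numpy as np

    R = 8.314  # J/(mol K)

    def arrhenius_fit(T, k):
        """Least-squares fit of ln k = ln A - E/(R T); returns (ln A, E)."""
        slope, intercept = np.polyfit(1.0 / (R * T), np.log(k), 1)
        return intercept, -slope

    rng = np.random.default_rng(1)
    T_true = np.linspace(500.0, 600.0, 8)
    T_meas = T_true + 5.0                      # systematic +5 K thermometry error

    results = []
    for E in rng.uniform(8e4, 1.6e5, 20):      # family of related processes
        lnA_true = 20.0                        # same true prefactor for all
        k = np.exp(lnA_true - E / (R * T_true))
        results.append(arrhenius_fit(T_meas, k))
    lnA, E_fit = np.array(results).T
    print(np.corrcoef(lnA, E_fit)[0, 1])       # |corr| near 1: apparent compensation
    ```

    The fitted ln A values vary systematically with the fitted E even though the true prefactor is constant, which is precisely the mathematical artifact the paper warns about.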

  1. Comparison of two stochastic techniques for reliable urban runoff prediction by modeling systematic errors

    NASA Astrophysics Data System (ADS)

    Del Giudice, Dario; Löwe, Roland; Madsen, Henrik; Mikkelsen, Peter Steen; Rieckermann, Jörg

    2015-07-01

    In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two approaches which use stochastic processes to describe systematic deviations and to discuss their advantages and drawbacks for urban drainage modeling. The two methodologies are an external bias description (EBD) and an internal noise description (IND, also known as stochastic gray-box modeling). They emerge from different fields and have not yet been compared in environmental modeling. To compare the two approaches, we develop a unifying terminology, evaluate them theoretically, and apply them to conceptual rainfall-runoff modeling in the same drainage system. Our results show that both approaches can provide probabilistic predictions of wastewater discharge in a similarly reliable way, both for periods ranging from a few hours up to more than 1 week ahead of time. The EBD produces more accurate predictions on long horizons but relies on computationally heavy MCMC routines for parameter inferences. These properties make it more suitable for off-line applications. The IND can help in diagnosing the causes of output errors and is computationally inexpensive. It produces best results on short forecast horizons that are typical for online applications.

  2. Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances

    PubMed Central

    Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip

    2015-01-01

    In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. A standard deviation of the measured distance is the accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5–50 m) because it is generally known that this accuracy cannot be increased by simply repeating the measurement because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments’ results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%. PMID:26258777

  3. Reduced basis approximation and a posteriori error estimation for the time-dependent viscous Burgers’ equation

    E-print Network

    Nguyen, Ngoc Cuong

    In this paper we present rigorous a posteriori L 2 error bounds for reduced basis approximations of the unsteady viscous Burgers’ equation in one space dimension. The a posteriori error estimator, derived from standard ...

  4. Measurement Error Webinar Series: Estimating usual intake distributions for multivariate dietary variables

    Cancer.gov

    Identify challenges in addressing measurement error when modeling multivariate dietary variables such as diet quality indices. Describe statistical modeling techniques to correct for measurement error in estimating multivariate dietary variables.

  5. Estimating Measurement Error of the Patient Activation Measure for Respondents with Partially Missing Data

    PubMed Central

    Linden, Ariel

    2015-01-01

    The patient activation measure (PAM) is an increasingly popular instrument used as the basis for interventions to improve patient engagement and as an outcome measure to assess intervention effect. However, a PAM score may be calculated when there are missing responses, which could lead to substantial measurement error. In this paper, measurement error is systematically estimated across the full possible range of missing items (one to twelve), using simulation in which populated items were randomly replaced with missing data for each of 1,138 complete surveys obtained in a randomized controlled trial. The PAM score was then calculated, followed by comparisons of the overall simulated mean, minimum, and maximum PAM scores to the true PAM score in order to assess the absolute percentage error (APE) for each comparison. With only one missing item, the average APE was 2.5% comparing the true PAM score to the simulated minimum score and 4.3% compared to the simulated maximum score. APEs increased with additional missing items, such that surveys with 12 missing items had average APEs of 29.7% (minimum) and 44.4% (maximum). Several suggestions and alternative approaches are offered that could be pursued to improve measurement accuracy when responses are missing. PMID:26636096
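
    The simulation design is straightforward to emulate. The sketch below uses synthetic responses and a stand-in scoring rule (mean of the answered 1-4 items rescaled to 0-100) rather than the proprietary PAM transformation, so the printed errors are only qualitatively comparable to those in the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def pam_score(items):
        """Stand-in for the proprietary PAM scoring: mean of answered items
        (1-4 scale) rescaled to 0-100. NaN marks a missing response."""
        return (np.nanmean(items, axis=-1) - 1.0) / 3.0 * 100.0

    complete = rng.integers(1, 5, size=(1138, 13)).astype(float)  # synthetic surveys
    true = pam_score(complete)

    for n_missing in (1, 6, 12):
        masked = complete.copy()
        for row in masked:                       # randomly blank n_missing items
            row[rng.choice(13, n_missing, replace=False)] = np.nan
        ape = np.abs(pam_score(masked) - true) / true * 100.0
        print(n_missing, round(ape.mean(), 1))   # average APE grows with missingness
    ```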

  6. An Examination of the Spatial Distribution of Carbon Dioxide and Systematic Errors

    NASA Technical Reports Server (NTRS)

    Coffey, Brennan; Gunson, Mike; Frankenberg, Christian; Osterman, Greg

    2011-01-01

    The industrial period and modern age is characterized by combustion of coal, oil, and natural gas for primary energy and transportation leading to rising levels of atmospheric of CO2. This increase, which is being carefully measured, has ramifications throughout the biological world. Through remote sensing, it is possible to measure how many molecules of CO2 lie in a defined column of air. However, other gases and particles are present in the atmosphere, such as aerosols and water, which make such measurements more complicated1. Understanding the detailed geometry and path length of the observation is vital to computing the concentration of CO2. Comparing these satellite readings with ground-truth data (TCCON) the systematic errors arising from these sources can be assessed. Once the error is understood, it can be scaled for in the retrieval algorithms to create a set of data, which is closer to the TCCON measurements1. Using this process, the algorithms are being developed to reduce bias, within.1% worldwide of the true value. At this stage, the accuracy is within 1%, but through correcting small errors contained in the algorithms, such as accounting for the scattering of sunlight, the desired accuracy can be achieved.

  7. An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two observer, measurement-error-only problem.
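
    The paper's reinterpretation of the weighted least squares equations is not reproduced here; as a hedged point of comparison, the classical way of folding actual residual magnitudes into the covariance is to scale the formal matrix by the a posteriori variance of unit weight:

    ```python
    import numpy as np

    def wls_with_empirical_cov(A, y, W):
        """Weighted least squares x = (A^T W A)^{-1} A^T W y, returning both
        the formal covariance and a residual-scaled covariance. Scaling by
        the a posteriori variance of unit weight folds the actual residual
        magnitudes -- hence unmodeled error sources -- into the matrix."""
        N = A.T @ W @ A
        x = np.linalg.solve(N, A.T @ W @ y)
        r = y - A @ x
        dof = len(y) - A.shape[1]
        s2 = (r @ W @ r) / dof          # a posteriori variance of unit weight
        P_formal = np.linalg.inv(N)
        return x, P_formal, s2 * P_formal
    ```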

  8. Quantifying and minimising systematic and random errors in X-ray micro-tomography based volume measurements

    NASA Astrophysics Data System (ADS)

    Lin, Q.; Neethling, S. J.; Dobson, K. J.; Courtois, L.; Lee, P. D.

    2015-04-01

    X-ray micro-tomography (XMT) is increasingly used for the quantitative analysis of the volumes of features within the 3D images. As with any measurement, there will be error and uncertainty associated with these measurements. In this paper a method for quantifying both the systematic and random components of this error in the measured volume is presented. The systematic error is the offset between the actual and measured volume which is consistent between different measurements and can therefore be eliminated by appropriate calibration. In XMT measurements this is often caused by an inappropriate threshold value. The random error is not associated with any systematic offset in the measured volume and could be caused, for instance, by variations in the location of the specific object relative to the voxel grid. It can be eliminated by repeated measurements. It was found that both the systematic and random components of the error are a strong function of the size of the object measured relative to the voxel size. The relative error in the volume was found to follow approximately a power law relationship with the volume of the object, but with an exponent that implied, unexpectedly, that the relative error was proportional to the radius of the object for small objects, though the exponent did imply that the relative error was approximately proportional to the surface area of the object for larger objects. In an example application involving the size of mineral grains in an ore sample, the uncertainty associated with the random error in the volume is larger than the object itself for objects smaller than about 8 voxels and is greater than 10% for any object smaller than about 260 voxels. A methodology is presented for reducing the random error by combining the results from either multiple scans of the same object or scans of multiple similar objects, with an uncertainty of less than 5% requiring 12 objects of 100 voxels or 600 objects of 4 voxels. As the systematic error in a measurement cannot be eliminated by combining the results from multiple measurements, this paper introduces a procedure for using volume standards to reduce the systematic error, especially for smaller objects where the relative error is larger.
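
    The random-error reduction from combining N independent measurements follows the usual 1/sqrt(N) law, which reproduces the quoted object counts to within rounding. A back-of-envelope sketch with illustrative per-object relative errors:

    ```python
    import math

    def n_objects_needed(sigma_rel, target=0.05):
        """Averaging N independent volume measurements shrinks the random
        relative uncertainty by sqrt(N): pick N so sigma_rel/sqrt(N) <= target."""
        return math.ceil((sigma_rel / target) ** 2)

    # Illustrative per-object errors chosen to match the figures quoted above:
    print(n_objects_needed(0.17))   # 12 objects at ~17% error (~100-voxel objects)
    print(n_objects_needed(1.22))   # ~600 objects at ~122% error (~4-voxel objects)
    ```

    Note that this averaging only attacks the random component; the systematic offset still has to be removed with volume standards, as the paper emphasizes.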

  9. Effects of resighting errors on capture-resight estimates for neck-banded Canada geese

    USGS Publications Warehouse

    Weiss, N.T.; Samuel, M.D.; Rusch, D.H.; Caswell, F.D.

    1991-01-01

    Biologists who study neck-banded Canada Geese (Branta canadensis) have used capture and resighting histories to estimate annual resighting rates, survival rates and the number of marked birds in the population. Resighting errors were associated with 9.4% (n = 155) of the birds from a sample of Canada Geese neckbanded in the Mississippi flyway, 1974-1987, and constituted 3.0% (n = 208) of the resightings. Resighting errors significantly reduced estimated resighting rates and significantly increased estimated numbers of marked geese in the sample. Estimates of survival rates were not significantly affected by resighting errors. Recommendations are offered for using neck-band characters that may reduce resighting errors.

  10. Finite Element A Posteriori Error Estimation for Heat Conduction. Degree awarded by George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Lang, Christapher G.; Bey, Kim S. (Technical Monitor)

    2002-01-01

    This research investigates residual-based a posteriori error estimates for finite element approximations of heat conduction in single-layer and multi-layered materials. The finite element approximation, based upon hierarchical modelling combined with p-version finite elements, is described with specific application to a two-dimensional, steady state, heat-conduction problem. Element error indicators are determined by solving an element equation for the error with the element residual as a source, and a global error estimate in the energy norm is computed by collecting the element contributions. Numerical results of the performance of the error estimate are presented by comparisons to the actual error. Two methods are discussed and compared for approximating the element boundary flux. The equilibrated flux method provides more accurate results for estimating the error than the average flux method. The error estimation is applied to multi-layered materials with a modification to the equilibrated flux method to approximate the discontinuous flux along a boundary at the material interfaces. A directional error indicator is developed which distinguishes between the hierarchical modeling error and the finite element error. Numerical results are presented for single-layered materials which show that the directional indicators accurately determine which contribution to the total error dominates.

  11. Evaluating IMRT and VMAT dose accuracy: Practical examples of failure to detect systematic errors when applying a commonly used metric and action levels

    SciTech Connect

    Nelms, Benjamin E.; Chan, Maria F.; Jarry, Geneviève; Lemire, Matthieu; Lowden, John; Hampton, Carnell

    2013-11-15

    Purpose: This study (1) examines a variety of real-world cases where systematic errors were not detected by widely accepted methods for IMRT/VMAT dosimetric accuracy evaluation, and (2) drills down to identify failure modes and their corresponding means for detection, diagnosis, and mitigation. The primary goal of detailing these case studies is to explore different, more sensitive methods and metrics that could be used more effectively for evaluating accuracy of dose algorithms, delivery systems, and QA devices. Methods: The authors present seven real-world case studies representing a variety of combinations of the treatment planning system (TPS), linac, delivery modality, and systematic error type. These case studies are typical of what might be used as part of an IMRT or VMAT commissioning test suite, varying in complexity. Each case study is analyzed according to TG-119 instructions for gamma passing rates and action levels for per-beam and/or composite plan dosimetric QA. Then, each case study is analyzed in-depth with advanced diagnostic methods (dose profile examination, EPID-based measurements, dose difference pattern analysis, 3D measurement-guided dose reconstruction, and dose grid inspection) and more sensitive metrics (2% local normalization/2 mm DTA and estimated DVH comparisons). Results: For these case studies, the conventional 3%/3 mm gamma passing rates exceeded 99% for IMRT per-beam analyses and ranged from 93.9% to 100% for composite plan dose analysis, well above the TG-119 action levels of 90% and 88%, respectively. However, all cases had systematic errors that were detected only by using advanced diagnostic techniques and more sensitive metrics. The systematic errors caused variable but noteworthy impact, including estimated target dose coverage loss of up to 5.5% and local dose deviations up to 31.5%. Types of errors included TPS model settings, algorithm limitations, and modeling and alignment of QA phantoms in the TPS. Most of the errors were correctable after detection and diagnosis, and the uncorrectable errors provided useful information about system limitations, which is another key element of system commissioning. Conclusions: Many forms of relevant systematic errors can go undetected when the currently prevalent metrics for IMRT/VMAT commissioning are used. If alternative methods and metrics are used instead of (or in addition to) the conventional metrics, these errors are more likely to be detected, and only once they are detected can they be properly diagnosed and rooted out of the system. Removing systematic errors should be a goal not only of commissioning by the end users but also of product validation by the manufacturers. For any systematic errors that cannot be removed, detecting and quantifying them is important as it will help the physicist understand the limits of the system and work with the manufacturer on improvements. In summary, IMRT and VMAT commissioning, along with product validation, would benefit from the retirement of the 3%/3 mm passing rates as a primary metric of performance, and the adoption instead of tighter tolerances, more diligent diagnostics, and more thorough analysis.
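
    For readers unfamiliar with the metric under discussion, a minimal one-dimensional gamma-index sketch follows; it supports both the conventional global normalization and the tighter local normalization (e.g., 2%/2 mm) advocated above. The exhaustive search and all names are assumptions:

    ```python
    import numpy as np

    def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.02, dta_mm=2.0, local=True):
        """1-D gamma index: for each evaluated point, minimize the combined
        dose-difference / distance-to-agreement metric over reference points.
        dose_tol is fractional (0.02 = 2%); `local` picks local normalization.
        A real implementation interpolates the reference profile; this bare
        search over sampled points assumes a strictly positive reference dose."""
        x_ref, d_ref = np.asarray(x_ref, float), np.asarray(d_ref, float)
        gam = np.empty(len(x_eval))
        for k, (xe, de) in enumerate(zip(x_eval, d_eval)):
            norm = d_ref if local else d_ref.max()   # local vs global scaling
            dd = (de - d_ref) / (dose_tol * norm)
            dx = (xe - x_ref) / dta_mm
            gam[k] = np.sqrt(dd**2 + dx**2).min()
        return gam

    # passing rate, e.g. np.mean(gamma_1d(xr, dr, xe, de) <= 1.0)
    ```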

  12. RANDOM AND SYSTEMATIC FIELD ERRORS IN THE SNS RING: A STUDY OF THEIR EFFECTS AND COMPENSATION

    SciTech Connect

    GARDNER,C.J.; LEE,Y.Y.; WENG,W.T.

    1998-06-22

    The Accumulator Ring for the proposed Spallation Neutron Source (SNS) [1] is to accept a 1 ms beam pulse from a 1 GeV Proton Linac at a repetition rate of 60 Hz. For each beam pulse, 10^14 protons (some 1,000 turns) are to be accumulated via charge-exchange injection and then promptly extracted to an external target for the production of neutrons by spallation. At this very high intensity, stringent limits (less than two parts in 10,000 per pulse) on beam loss during accumulation must be imposed in order to keep activation of ring components at an acceptable level. To stay within the desired limit, the effects of random and systematic field errors in the ring require careful attention. This paper describes the authors' studies of these effects and the magnetic corrector schemes for their compensation.

  13. Mapping systematic errors in helium abundance determinations using Markov Chain Monte Carlo

    SciTech Connect

    Aver, Erik; Olive, Keith A.; Skillman, Evan D.

    2011-03-01

    Monte Carlo techniques have been used to evaluate the statistical and systematic uncertainties in the helium abundances derived from extragalactic H II regions. The helium abundance is sensitive to several physical parameters associated with the H II region. In this work, we introduce Markov Chain Monte Carlo (MCMC) methods to efficiently explore the parameter space and determine the helium abundance, the physical parameters, and the uncertainties derived from observations of metal poor nebulae. Experiments with synthetic data show that the MCMC method is superior to previous implementations (based on flux perturbation) in that it is not affected by biases due to non-physical parameter space. The MCMC analysis allows a detailed exploration of degeneracies, and, in particular, a false minimum that occurs at large values of optical depth in the He I emission lines. We demonstrate that introducing the electron temperature derived from the [O III] emission lines as a prior, in a very conservative manner, produces negligible bias and effectively eliminates the false minima occurring at large optical depth. We perform a frequentist analysis on data from several "high quality" systems. Likelihood plots illustrate degeneracies, asymmetries, and limits of the determination. In agreement with previous work, we find relatively large systematic errors, limiting the precision of the primordial helium abundance for currently available spectra.
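
    The MCMC machinery itself can be as simple as a random-walk Metropolis sampler. The generic skeleton below hides the physics (the He I emission model and priors, such as the [O III] temperature prior) inside a user-supplied log-posterior function; all names are hypothetical:

    ```python
    import numpy as np

    def metropolis(log_post, x0, step, n_samples=50000, seed=0):
        """Random-walk Metropolis sampler: log_post maps a parameter vector
        (helium abundance plus nuisance parameters) to the log posterior,
        including any priors on the physical parameters."""
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=float)
        lp = log_post(x)
        chain = np.empty((n_samples, len(x)))
        for i in range(n_samples):
            prop = x + step * rng.standard_normal(len(x))
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
                x, lp = prop, lp_prop
            chain[i] = x
        return chain
    ```

    Degeneracies and false minima of the kind described above show up directly as multimodal or strongly correlated marginal distributions of the returned chain.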

  14. Adjustment of measurements with multiplicative errors: error analysis, estimates of the variance of unit weight, and effect on volume estimation from LiDAR-type digital elevation models.

    PubMed

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2013-01-01

    Modern observation technology has verified that measurement errors can be proportional to the true values of measurements such as GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We will simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880

  15. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    PubMed Central

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2014-01-01

    Modern observation technology has verified that measurement errors can be proportional to the true values of measurements such as GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We will simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
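
    Because a multiplicative error has variance proportional to the squared signal, one natural least-squares variant reweights each pass by the current fitted values. The iteratively reweighted sketch below shows only that one scheme, with hypothetical names; the paper analyzes three LS adjustments and their variance-of-unit-weight estimators in closed form.

    ```python
    import numpy as np

    def irls_multiplicative(A, y, n_iter=10):
        """Least squares for y_i = (A x)_i * (1 + eps_i), eps_i ~ (0, sigma^2):
        the error variance scales with the squared signal, so each pass is
        reweighted by the current fitted values (assumed positive)."""
        x = np.linalg.lstsq(A, y, rcond=None)[0]     # ordinary LS start
        for _ in range(n_iter):
            w = 1.0 / np.maximum(A @ x, 1e-12) ** 2  # weights ~ 1/fitted^2
            W = np.diag(w)
            x = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
        return x
    ```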

  16. X-ray optics metrology limited by random noise, instrumental drifts, and systematic errors

    SciTech Connect

    Yashchuk, Valeriy V.; Anderson, Erik H.; Barber, Samuel K.; Cambie, Rossana; Celestre, Richard; Conley, Raymond; Goldberg, Kenneth A.; McKinney, Wayne R.; Morrison, Gregory; Takacs, Peter Z.; Voronov, Dmitriy L.; Yuan, Sheng; Padmore, Howard A.

    2010-07-09

    Continuous, large-scale efforts to improve and develop third- and fourth-generation synchrotron radiation light sources for unprecedented high-brightness, low-emittance, and coherent x-ray beams demand diffracting and reflecting x-ray optics suitable for micro- and nano-focusing, brightness preservation, and super-high resolution. One of the major impediments to the development of x-ray optics with the required beamline performance is the inadequate present level of optical and at-wavelength metrology, and the insufficient integration of the metrology into the fabrication process and into beamlines. Based on our experience at the ALS Optical Metrology Laboratory, we review the experimental methods and techniques that allow us to mitigate significant optical metrology problems related to random, systematic, and drift errors with super-high-quality x-ray optics. Measurement errors below 0.2 μrad have become routine. We present recent results from the ALS on temperature-stabilized nano-focusing optics and dedicated at-wavelength metrology. The international effort to develop a next-generation Optical Slope Measuring System (OSMS) to address these problems is also discussed. Finally, we analyze the remaining obstacles to further improvement of beamline x-ray optics and dedicated metrology, and highlight the ways we see to overcome the problems.

  17. Strategies for Assessing Diffusion Anisotropy on the Basis of Magnetic Resonance Images: Comparison of Systematic Errors

    PubMed Central

    Boujraf, Saïd

    2014-01-01

    Diffusion weighted imaging uses the signal loss associated with the random thermal motion of water molecules in the presence of magnetic field gradients to derive a number of parameters that reflect the translational mobility of the water molecules in tissues. With a suitable experimental set-up, it is possible to calculate all the elements of the local diffusion tensor (DT) and derived parameters describing the behavior of the water molecules in each voxel. One of the emerging applications of the information obtained is an interpretation of the diffusion anisotropy in terms of the architecture of the underlying tissue. These interpretations can only be made provided the experimental data are sufficiently accurate. However, the DT results are susceptible to two systematic error sources: on the one hand, the presence of signal noise can lead to artificial divergence of the diffusivities; on the other hand, the use of a simplified model for the interaction of the protons with the diffusion-weighting and imaging field gradients (b matrix calculation), common in the clinical setting, also leads to deviations in the derived diffusion characteristics. In this paper, we study the importance of these two sources of error on the basis of experimental data obtained on a clinical magnetic resonance imaging system for an isotropic phantom using a state-of-the-art single-shot echo planar imaging sequence. Our results show that optimal diffusion imaging requires combining a correct calculation of the b matrix with a sufficiently large signal-to-noise ratio. PMID:24761372
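
    The first error source, noise-induced artificial anisotropy, can be reproduced in a few lines: the Monte Carlo sketch below (toy b value, gradient set, and SNR) fits a tensor to noisy signals from a perfectly isotropic voxel and recovers a clearly positive mean fractional anisotropy from noise alone.

      import numpy as np

      rng = np.random.default_rng(1)
      b = 1000.0                                   # s/mm^2
      D_iso = 0.7e-3                               # isotropic diffusivity, mm^2/s
      S0, snr = 1.0, 20.0
      dirs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                       [1, 1, 0], [1, 0, 1], [0, 1, 1]], dtype=float)
      dirs /= np.linalg.norm(dirs, axis=1)[:, None]

      # Design matrix of the log-linear tensor fit (a simplified b matrix:
      # imaging-gradient cross terms are ignored, as in the clinical setting)
      G = np.column_stack([dirs[:, 0]**2, dirs[:, 1]**2, dirs[:, 2]**2,
                           2 * dirs[:, 0] * dirs[:, 1],
                           2 * dirs[:, 0] * dirs[:, 2],
                           2 * dirs[:, 1] * dirs[:, 2]])

      def fit_fa(signals):
          d = np.linalg.lstsq(G, -np.log(signals / S0) / b, rcond=None)[0]
          D = np.array([[d[0], d[3], d[4]],
                        [d[3], d[1], d[5]],
                        [d[4], d[5], d[2]]])
          lam = np.linalg.eigvalsh(D)
          return np.sqrt(1.5 * np.sum((lam - lam.mean())**2) / np.sum(lam**2))

      clean = S0 * np.exp(-b * D_iso) * np.ones(6)   # same signal on all axes
      fa = [fit_fa(np.abs(clean + rng.normal(0.0, S0 / snr, 6)))
            for _ in range(2000)]
      print("mean FA of an isotropic phantom with noise:", np.mean(fa))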

  18. The Shane-Wirtanen counts - Systematics and two-point correlation function. [for astronomical map error analysis

    NASA Technical Reports Server (NTRS)

    De Lapparent, V.; Kurtz, M. J.; Geller, M. J.

    1986-01-01

    Residual errors in the Seldner et al. (SSGP) map, which caused a break in both the correlation function (CF) and the filamentary appearance of the Shane-Wirtanen map, are examined. These errors, causing a residual rms fluctuation of 11 percent in the SSGP-corrected counts and a systematic rms offset of 8 percent in the mean count per plate, can be attributed to counting pattern and plate vignetting. Techniques for CF reconstruction in catalogs affected by plate-related systematic biases are examined, and it is concluded that accurate restoration may not be possible. Surveys designed to measure the CF at the depth of the SW counts on a scale of 2.5 deg must have systematic errors of less than or about 0.04 mag.

  19. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    E-print Network

    Li, T S; Marshall, J L; Tucker, D; Kessler, R; Annis, J; Bernstein, G M; Boada, S; Burke, D L; Finley, D A; James, D J; Kent, S; Lin, H; Marriner, J; Mondrik, N; Nagasawa, D; Rykoff, E S; Scolnic, D; Walker, A R; Wester, W; Abbott, T M C; Allam, S; Benoit-Lévy, A; Bertin, E; Brooks, D; Capozzi, D; Rosell, A Carnero; Kind, M Carrasco; Carretero, J; Crocce, M; Cunha, C E; D'Andrea, C B; da Costa, L N; Desai, S; Diehl, H T; Doel, P; Flaugher, B; Fosalba, P; Frieman, J; Gaztanaga, E; Goldstein, D A; Gruen, D; Gruendl, R A; Gutierrez, G; Honscheid, K; Kuehn, K; Kuropatkin, N; Maia, M A G; Melchior, P; Miller, C J; Miquel, R; Mohr, J J; Neilsen, E; Nichol, R C; Nord, B; Ogando, R; Plazas, A A; Romer, A K; Roodman, A; Sako, M; Sanchez, E; Scarpine, V; Schubnell, M; Sevilla-Noarbe, I; Smith, R C; Soares-Santos, M; Sobreira, F; Suchyta, E; Tarle, G; Thomas, D; Vikram, V

    2016-01-01

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example...
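
    A toy synthetic-photometry calculation shows why such chromatic terms escape a gray zeropoint correction: when the shape of the total throughput changes, the induced magnitude offset depends on the source SED. All numbers below (bandpass, tilt, SED slopes) are illustrative assumptions, not DES values.

      import numpy as np

      wl = np.linspace(400.0, 550.0, 500)            # nm, toy "g-like" band
      dwl = wl[1] - wl[0]

      def throughput(tilt):
          # Toy bandpass whose blue side tilts with the observing condition; a
          # stand-in for varying atmospheric or instrumental transmission.
          base = np.exp(-0.5 * ((wl - 475.0) / 40.0) ** 2)
          return base * (1.0 + tilt * (wl - 475.0) / 75.0)

      def synth_mag(sed, T):
          # Broadband magnitude from an SED and a total throughput
          return -2.5 * np.log10(np.sum(sed * T * wl) * dwl)

      for slope in (-2.0, 0.0, 2.0):                 # blue, flat and red sources
          sed = (wl / 475.0) ** slope
          dm = synth_mag(sed, throughput(0.1)) - synth_mag(sed, throughput(0.0))
          print(f"SED slope {slope:+.0f}: zeropoint shift = {1000.0 * dm:+.1f} mmag")

    A flat-SED calibration star and a red target thus acquire different offsets under the same throughput change, which no color-independent zeropoint can absorb.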

  20. Multivariate Error Covariance Estimates by Monte-Carlo Simulation for Assimilation Studies in the Pacific Ocean

    NASA Technical Reports Server (NTRS)

    Borovikov, Anna; Rienecker, Michele M.; Keppenne, Christian; Johnson, Gregory C.

    2004-01-01

    One of the most difficult aspects of ocean state estimation is the prescription of the model forecast error covariances. The paucity of ocean observations limits our ability to estimate the covariance structures from model-observation differences. In most practical applications, simple covariances are usually prescribed; rarely are cross-covariances between different model variables used. Here a comparison is made between a univariate Optimal Interpolation (UOI) scheme and a multivariate OI algorithm (MvOI) in the assimilation of ocean temperature. In the UOI case only temperature is updated using a Gaussian covariance function, while in the MvOI case salinity and zonal and meridional velocities, as well as temperature, are updated using an empirically estimated multivariate covariance matrix. Earlier studies have shown that a univariate OI has a detrimental effect on the salinity and velocity fields of the model; apparently, in a sequential framework it is important to analyze temperature and salinity together. For the MvOI, an estimate of the model error statistics is made by Monte-Carlo techniques from an ensemble of model integrations. An important advantage of using an ensemble of ocean states is that it provides a natural way to estimate cross-covariances between the fields of the different physical variables constituting the model state vector, while at the same time incorporating the model's dynamical and thermodynamical constraints as well as the effects of physical boundaries. Only temperature observations from the Tropical Atmosphere-Ocean array have been assimilated in this study. In order to investigate the efficacy of the multivariate scheme, two data assimilation experiments are validated with a large independent set of recently published subsurface observations of salinity, zonal velocity, and temperature. For reference, a third control run with no data assimilation is used to check how the data assimilation affects systematic model errors. While the performance of the UOI and MvOI is similar with respect to the temperature field, the salinity and velocity fields are greatly improved when the multivariate correction is used, as is evident from analyses of the rms differences between these fields and independent observations. The MvOI assimilation is found to improve upon the control run in generating water masses with properties close to those observed, while the UOI fails to maintain the temperature and salinity structure.
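
    The ensemble route to multivariate covariances can be sketched compactly: build the forecast covariance from ensemble perturbations of a stacked state vector, and let its cross-covariance blocks spread a temperature innovation into a salinity increment. The toy state, T-S coupling, and observation error below are assumptions for illustration only.

      import numpy as np

      rng = np.random.default_rng(2)

      # Toy ensemble: each member stacks temperature and salinity anomalies
      # at 3 grid points into one state vector of length 6.
      n_ens = 200
      T = rng.standard_normal((3, n_ens))
      S = 0.6 * T + 0.3 * rng.standard_normal((3, n_ens))   # toy T-S coupling
      X = np.vstack([T, S])

      Xp = X - X.mean(axis=1, keepdims=True)        # ensemble perturbations
      B = Xp @ Xp.T / (n_ens - 1)                   # multivariate forecast covariance

      H = np.zeros((1, 6)); H[0, 0] = 1.0           # observe T at grid point 0
      R = np.array([[0.1]])                         # observation error variance
      K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # OI gain

      increment = (K @ np.array([1.0])).ravel()     # unit T innovation
      print("T increments:", increment[:3])         # direct update
      print("S increments:", increment[3:])         # via cross-covariances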

  1. An estimate of asthma prevalence in Africa: a systematic analysis

    PubMed Central

    Adeloye, Davies; Chan, Kit Yee; Rudan, Igor; Campbell, Harry

    2013-01-01

    Aim To estimate and compare asthma prevalence in Africa in 1990, 2000, and 2010 in order to provide information that will help inform the planning of the public health response to the disease. Methods We conducted a systematic search of Medline, EMBASE, and Global Health for studies on asthma published between 1990 and 2012. We included cross-sectional population based studies providing numerical estimates on the prevalence of asthma. We calculated weighted mean prevalence and applied an epidemiological model linking age with the prevalence of asthma. The UN population figures for Africa for 1990, 2000, and 2010 were used to estimate the cases of asthma, each for the respective year. Results Our search returned 790 studies. We retained 45 studies that met our selection criteria. In Africa in 1990, we estimated 34.1 million asthma cases (12.1%; 95% confidence interval [CI] 7.2-16.9) among children <15 years, 64.9 million (11.8%; 95% CI 7.9-15.8) among people aged <45 years, and 74.4 million (11.7%; 95% CI 8.2-15.3) in the total population. In 2000, we estimated 41.3 million cases (12.9%; 95% CI 8.7-17.0) among children <15 years, 82.4 million (12.5%; 95% CI 5.9-19.1) among people aged <45 years, and 94.8 million (12.0%; 95% CI 5.0-18.8) in the total population. This increased to 49.7 million (13.9%; 95% CI 9.6-18.3) among children <15 years, 102.9 million (13.8%; 95% CI 6.2-21.4) among people aged <45 years, and 119.3 million (12.8%; 95% CI 8.2-17.1) in the total population in 2010. There were no significant differences between asthma prevalence in studies which ascertained cases by written and video questionnaires. Crude prevalences of asthma were, however, consistently higher among urban than rural dwellers. Conclusion Our findings suggest an increasing prevalence of asthma in Africa over the past two decades. Due to the paucity of data, we believe that the true prevalence of asthma may still be under-estimated. There is a need for national governments in Africa to consider the implications of this increasing disease burden and to investigate the relative importance of underlying risk factors such as rising urbanization and population aging in their policy and health planning responses to this challenge. PMID:24382846

  2. Observing Climate with GNSS Radio Occultation: Characterization and Mitigation of Systematic Errors

    NASA Astrophysics Data System (ADS)

    Foelsche, U.; Scherllin-Pirscher, B.; Danzer, J.; Ladstädter, F.; Schwarz, J.; Steiner, A. K.; Kirchengast, G.

    2013-05-01

    GNSS Radio Occultation (RO) data are very well suited for climate applications, since they require no external calibration and only short-term measurement stability over the duration of an occultation event (1-2 min), which is provided by the atomic clocks onboard the GPS satellites. With this "self-calibration", it is possible to combine data from different sensors and different missions without need for inter-calibration and overlap (which is extremely hard to achieve for conventional satellite data). Using the same retrieval for all datasets, we obtained monthly refractivity and temperature climate records from multiple radio occultation satellites, which are consistent within 0.05 % and 0.05 K in almost any case (taking global averages over the altitude range 10 km to 30 km). Longer-term average deviations are even smaller. Even though the RO record is still short, its high quality already allows us to see statistically significant temperature trends in the lower stratosphere. The value of RO data for climate monitoring is therefore increasingly recognized by the scientific community, but there is also concern about potential residual systematic errors in RO climatologies, which might be common to data from all satellites. We started to look at different error sources, like the influence of the quality control and the high-altitude initialization. We will focus on recent results regarding (apparent) constants used in the retrieval and systematic ionospheric errors. (1) All current RO retrievals use a "classic" set of (measured) constants, relating atmospheric microwave refractivity to atmospheric parameters. With the increasing quality of RO climatologies, errors in these constants are no longer negligible. We show how these parameters can be related to more fundamental physical quantities (fundamental constants, the molecular/atomic polarizabilities of the constituents of air, and the dipole moment of water vapor). This approach also allows computing sensitivities to changes in atmospheric composition. We found that changes caused by the anthropogenic CO2 increase are still almost exactly offset by the concurrent O2 decrease. (2) Since the ionospheric correction of RO data is an approximation to first order, we have to consider an ionospheric residual, which can be expected to be larger when the ionization is high (day vs. night, high vs. low solar activity). In climate applications this could lead to a time-dependent bias, which could induce wrong trends in atmospheric parameters at high altitudes. We studied this systematic ionospheric residual by analyzing the bending angle bias characteristics of CHAMP and COSMIC RO data from the years 2001 to 2011. We found that the night time bending angle bias stays constant over the whole period of 11 years, while the day time bias increases from low to high solar activity. As a result, the difference between night and day time bias increases from -0.05 μrad to -0.4 μrad. This behavior paves the way to correct the (small) solar-cycle dependent bias of large ensembles of day time RO profiles.
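
    For reference, the first-order ionospheric correction mentioned under (2) is a linear combination of the bending angles at the two GPS frequencies, evaluated at the same impact parameter. In the sketch below the ionospheric term scales exactly as 1/f^2, so the correction is exact by construction; real signals retain the higher-order residual discussed above.

      import numpy as np

      f1, f2 = 1575.42e6, 1227.60e6                     # GPS L1/L2 frequencies, Hz

      def correct_bending(alpha1, alpha2):
          # First-order dual-frequency ionospheric correction of bending
          # angles, combined at the same impact parameter.
          return (f1**2 * alpha1 - f2**2 * alpha2) / (f1**2 - f2**2)

      # Toy profiles: neutral term plus a dispersive (1/f^2) ionospheric term
      h = np.linspace(0.0, 60e3, 200)                   # impact-height-like axis, m
      alpha_n = 1e-2 * np.exp(-h / 7e3)                 # neutral bending, rad
      I = 1e13 * np.exp(-0.5 * ((h - 45e3) / 10e3) ** 2)
      alpha1 = alpha_n + I / f1**2
      alpha2 = alpha_n + I / f2**2

      residual = correct_bending(alpha1, alpha2) - alpha_n
      print("max residual:", np.max(np.abs(residual)))  # ~0 by construction here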

  3. Autonomous error bounding of position estimates from GPS and Galileo

    E-print Network

    Temple, Thomas J. (Thomas John)

    2006-01-01

    In safety-of-life applications of satellite-based navigation, such as the guided approach and landing of an aircraft, the most important question is whether the navigation error is tolerable. Although differentially corrected ...

  4. Fluctuations of refractivity as a systematic error source in radio occultations

    NASA Astrophysics Data System (ADS)

    Gorbunov, Michael E.; Vorob'ev, Valery V.; Lauritsen, Kent B.

    2015-07-01

    The fact that fluctuations of refractivity may result in a systematic negative shift of the phase of waves propagating in a random medium has been known for a long time. Tatarskii was the first to reveal it, and von Eshleman put it into the context of the radio occultation sounding of planetary atmospheres. In this paper, we show that this effect may also be one of the causes of the negative bias of refractivity retrieved from radio occultation observations of the Earth's atmosphere. We perform theoretical estimates of this effect based on the Rytov approximation. These estimates, however, do not consider the regular refraction, which may significantly change the magnitude of this effect. We perform numerical simulations of radio occultations, based on the Kolmogorov-von Kármán isotropic spectrum of refractivity fluctuations, with the internal and external scales and the magnitude tuned so as to reproduce the realistic level of the variance of the retrieved refractivity and the amplitude fluctuations of the modeled signals. The model of the regular atmosphere is based on analyses of the European Centre for Medium-Range Weather Forecasts. We show that it is possible to set up a vertical profile of the structure constant of the fluctuation spectrum such that it results in a systematic shift and variances of the retrieved refractivity consistent with those observed in COSMIC measurements.
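
    For reference, the modified von Kármán form of such a fluctuation spectrum, an outer-scale flattening and an inner-scale cut-off around a Kolmogorov inertial range, can be written down directly; the parameter values below are arbitrary illustrations.

      import numpy as np

      def von_karman(kappa, Cn2, L0, l0):
          # Modified von Karman spectrum of refractivity fluctuations:
          # Kolmogorov inertial range with an outer-scale (L0) flattening
          # and an inner-scale (l0) dissipation cut-off.
          k0 = 2.0 * np.pi / L0
          km = 5.92 / l0
          return (0.033 * Cn2 * np.exp(-(kappa / km) ** 2)
                  / (kappa ** 2 + k0 ** 2) ** (11.0 / 6.0))

      kappa = np.logspace(-4, 3, 400)                # wavenumber, rad/m
      spec = von_karman(kappa, Cn2=1e-14, L0=500.0, l0=0.01)

      # In the inertial range the log-log slope approaches -11/3
      sel = (kappa > 0.1) & (kappa < 10.0)
      slope = np.polyfit(np.log(kappa[sel]), np.log(spec[sel]), 1)[0]
      print("inertial-range slope:", slope)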

  5. Do Survey Data Estimate Earnings Inequality Correctly? Measurement Errors among Black and White Male Workers

    ERIC Educational Resources Information Center

    Kim, ChangHwan; Tamborini, Christopher R.

    2012-01-01

    Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…

  6. A-posteriori error estimation and adaptivity for linear elasticity using the reciprocal theorem

    E-print Network

    Cirak, Fehmi

    …duality arguments are used to derive error estimators for the finite element approximation of various … The significance of a-posteriori error control and adaptive algorithms for general finite element computations … estimates. However, for practical applications like stress analysis, the globally defined energy norm …

  7. ERROR ESTIMATES FOR LOW-ORDER ISOPARAMETRIC QUADRILATERAL FINITE ELEMENTS FOR PLATES

    E-print Network

    Duran, Ricardo

    …of the plate thickness. We also obtain error estimates for the approximation of the plate vibration problem … widely used for the analysis of thin or moderately thick elastic plates. It is now very well understood … optimal order error estimates, valid uniformly in the plate thickness, have been obtained for several …

  8. Rapid gravitational wave parameter estimation with a single spin: Systematic uncertainties in parameter estimation with the SpinTaylorF2 approximation

    E-print Network

    Brandon Miller; Richard O'Shaughnessy; Tyson B. Littenberg; Ben Farr

    2015-06-19

    Reliable low-latency gravitational wave parameter estimation is essential to target limited electromagnetic followup facilities toward astrophysically interesting and electromagnetically relevant sources of gravitational waves. In this study, we examine the tradeoff between speed and accuracy. Specifically, we estimate the astrophysical relevance of systematic errors in the posterior parameter distributions derived using a fast-but-approximate waveform model, SpinTaylorF2 (STF2), in parameter estimation with lalinference_mcmc. Though efficient, the STF2 approximation to compact binary inspiral employs approximate kinematics (e.g., a single spin) and an approximate waveform (e.g., frequency domain versus time domain). More broadly, using a large astrophysically-motivated population of generic compact binary merger signals, we report on the effectualness and limitations of this single-spin approximation as a method to infer parameters of generic compact binary sources. For most low-mass compact binary sources, we find that the STF2 approximation estimates compact binary parameters with biases comparable to systematic uncertainties in the waveform. We illustrate by example the effect these systematic errors have on posterior probabilities most relevant to low-latency electromagnetic followup: whether the secondary has a mass consistent with a neutron star; whether the masses, spins, and orbit are consistent with that neutron star's tidal disruption; and whether the binary's angular momentum axis is oriented along the line of sight.

  9. Rapid gravitational wave parameter estimation with a single spin: Systematic uncertainties in parameter estimation with the SpinTaylorF2 approximation

    NASA Astrophysics Data System (ADS)

    Miller, B.; O'Shaughnessy, R.; Littenberg, T. B.; Farr, B.

    2015-08-01

    Reliable low-latency gravitational wave parameter estimation is essential to target limited electromagnetic follow-up facilities toward astrophysically interesting and electromagnetically relevant sources of gravitational waves. In this study, we examine the trade-off between speed and accuracy. Specifically, we estimate the astrophysical relevance of systematic errors in the posterior parameter distributions derived using a fast-but-approximate waveform model, SpinTaylorF2 (stf2), in parameter estimation with lalinference_mcmc. Though efficient, the stf2 approximation to compact binary inspiral employs approximate kinematics (e.g., a single spin) and an approximate waveform (e.g., frequency domain versus time domain). More broadly, using a large astrophysically motivated population of generic compact binary merger signals, we report on the effectualness and limitations of this single-spin approximation as a method to infer parameters of generic compact binary sources. For most low-mass compact binary sources, we find that the stf2 approximation estimates compact binary parameters with biases comparable to systematic uncertainties in the waveform. We illustrate by example the effect these systematic errors have on posterior probabilities most relevant to low-latency electromagnetic follow-up: whether the secondary has a mass consistent with a neutron star (NS); whether the masses, spins, and orbit are consistent with that neutron star's tidal disruption; and whether the binary's angular momentum axis is oriented along the line of sight.

  10. The estimation of parameters in nonlinear, implicit measurement error models with experiment-wide measurements

    SciTech Connect

    Anderson, K.K.

    1994-05-01

    Measurement error modeling is a statistical approach to the estimation of unknown model parameters which takes into account the measurement errors in all of the data. Approaches which ignore the measurement errors in so-called independent variables may yield inferior estimates of unknown model parameters. At the same time, experiment-wide variables (such as physical constants) are often treated as known without error, when in fact they were produced from prior experiments. Realistic assessments of the associated uncertainties in the experiment-wide variables can be utilized to improve the estimation of unknown model parameters. A maximum likelihood approach to incorporate measurements of experiment-wide variables and their associated uncertainties is presented here. An iterative algorithm is presented which yields estimates of unknown model parameters and their estimated covariance matrix. Further, the algorithm can be used to assess the sensitivity of the estimates and their estimated covariance matrix to the given experiment-wide variables and their associated uncertainties.
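
    A standard illustration of why ignoring errors in the "independent" variables yields inferior estimates is the errors-in-variables straight-line fit: ordinary least squares is attenuated toward zero slope, while orthogonal distance regression, which accounts for both error sources, is not. The sketch below uses scipy.odr on toy data; it illustrates the general idea, not the paper's maximum likelihood treatment of experiment-wide variables.

      import numpy as np
      from scipy import odr

      rng = np.random.default_rng(3)
      x_true = np.linspace(0.0, 10.0, 50)
      y_true = 1.5 * x_true + 2.0
      x_obs = x_true + rng.normal(0.0, 1.0, x_true.size)   # errors in x too
      y_obs = y_true + rng.normal(0.0, 0.3, y_true.size)

      def linear(beta, x):
          return beta[0] * x + beta[1]

      data = odr.RealData(x_obs, y_obs, sx=1.0, sy=0.3)    # both error sources
      fit = odr.ODR(data, odr.Model(linear), beta0=[1.0, 0.0]).run()
      print("errors-in-variables fit:", fit.beta, "+/-", fit.sd_beta)

      # Naive OLS that ignores the x errors is attenuated toward zero slope
      print("naive OLS fit:", np.polyfit(x_obs, y_obs, 1))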

  11. An hp-adaptivity and error estimation for hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.

    1995-01-01

    This paper presents an hp-adaptive discontinuous Galerkin method for linear hyperbolic conservation laws. A priori and a posteriori error estimates are derived in mesh-dependent norms which reflect the dependence of the approximate solution on the element size (h) and the degree (p) of the local polynomial approximation. The a posteriori error estimate, based on the element residual method, provides bounds on the actual global error in the approximate solution. The adaptive strategy is designed to deliver an approximate solution with the specified level of error in three steps. The a posteriori estimate is used to assess the accuracy of a given approximate solution and the a priori estimate is used to predict the mesh refinements and polynomial enrichment needed to deliver the desired solution. Numerical examples demonstrate the reliability of the a posteriori error estimates and the effectiveness of the hp-adaptive strategy.
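
    The three-step strategy can be caricatured in a few lines: solve, estimate element-wise errors, refine where the indicator is largest, and repeat until the target accuracy is met. The sketch below does h-refinement only, with a crude midpoint interpolation indicator standing in for the element residual method; it is schematic, not the paper's discontinuous Galerkin machinery.

      import numpy as np

      def solve(mesh, f):
          # Stand-in for the solver: sample the exact solution at the nodes;
          # a real DG code would return approximate solution coefficients.
          return f(mesh)

      def estimate(mesh, u, f):
          # Element-wise a posteriori indicator: midpoint interpolation error
          mid = 0.5 * (mesh[:-1] + mesh[1:])
          return np.abs(f(mid) - 0.5 * (u[:-1] + u[1:]))

      def refine(mesh, eta, frac=0.3):
          # h-refinement only: bisect the elements with the largest indicators
          worst = np.argsort(eta)[-max(1, int(frac * eta.size)):]
          mids = 0.5 * (mesh[worst] + mesh[worst + 1])
          return np.sort(np.concatenate([mesh, mids]))

      f = lambda x: np.tanh(20.0 * (x - 0.5))        # steep internal layer
      mesh = np.linspace(0.0, 1.0, 11)
      for it in range(8):                            # solve -> estimate -> refine
          u = solve(mesh, f)
          eta = estimate(mesh, u, f)
          print(f"iter {it}: {mesh.size - 1:4d} elements, max eta = {eta.max():.2e}")
          if eta.max() < 1e-3:
              break
          mesh = refine(mesh, eta)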

  12. Genetic algorithms based robust frequency estimation of sinusoidal signals with stationary errors

    E-print Network

    Kundu, Debasis

    …of the sinusoidal model with a high degree of accuracy. Among the proposed methods, the genetic algorithm based least … Genetic algorithms based robust frequency estimation of sinusoidal signals with stationary errors, July 2009. Keywords: genetic algorithms; L1-norm estimator; least median estimator; least square estimator.
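
    In the same spirit, an evolutionary global optimizer can minimize a robust L1 misfit over frequency, with amplitude and phase handled as a linear subproblem. The sketch below uses SciPy's differential evolution as an evolutionary stand-in for the genetic algorithm of the paper.

      import numpy as np
      from scipy.optimize import differential_evolution

      rng = np.random.default_rng(4)
      t = np.arange(256)
      f_true = 0.123                                 # cycles per sample
      y = 2.0 * np.cos(2 * np.pi * f_true * t + 0.7) + rng.standard_normal(t.size)

      def cost(params):
          # Robust L1 misfit of a single sinusoid: amplitude and phase enter
          # linearly and are solved by LS; the frequency is evolved.
          f = params[0]
          A = np.column_stack([np.cos(2 * np.pi * f * t),
                               np.sin(2 * np.pi * f * t)])
          coef = np.linalg.lstsq(A, y, rcond=None)[0]
          return np.sum(np.abs(y - A @ coef))        # L1 norm for robustness

      res = differential_evolution(cost, bounds=[(0.001, 0.499)],
                                   popsize=40, seed=1, tol=1e-8)
      print("estimated frequency:", res.x[0])        # close to 0.123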

  13. Nuclear power plant fault diagnosis using neural networks with error estimation by series association

    SciTech Connect

    Kim, K.

    1996-08-01

    The accuracy of the diagnosis obtained from a nuclear power plant fault-diagnostic advisor using neural networks is addressed in this paper in order to ensure the credibility of the diagnosis. A new error estimation scheme called error estimation by series association provides a measure of the accuracy associated with the advisor's diagnoses. This error estimation is performed by a secondary neural network that is fed both the input features for and the outputs of the advisor. The error estimation by series association outperforms previous error estimation techniques in providing more accurate confidence information with considerably reduced computational requirements. The authors demonstrate the extensive usability of their method by applying it to a complicated transient recognition problem of 33 transient scenarios. The simulated transient data at different severities consist of 25 distinct transients for the Duane Arnold Energy Center nuclear power station, ranging from a main steam line break to anticipated transient without scram (ATWS) conditions. The fault-diagnostic advisor system with the secondary error prediction network is tested on the transients at various severity levels and degraded noise conditions. The results show that the error estimation scheme provides a useful measure of the validity of the advisor's output or diagnosis with a considerable reduction in computational requirements over previous error estimation schemes.
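
    The series-association idea, a secondary network fed both the advisor's inputs and its outputs and trained to predict the advisor's error, can be sketched with generic components; the data and scikit-learn networks below are toy stand-ins, not the plant diagnostics of the paper.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier, MLPRegressor

      rng = np.random.default_rng(5)
      X = rng.standard_normal((3000, 8))             # toy "plant signals"
      y = (X[:, :3].sum(axis=1) > 0).astype(int) + (X[:, 3] > 1.0).astype(int)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      advisor = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                              random_state=0).fit(X_tr, y_tr)

      # Secondary network in series: fed the advisor's inputs AND outputs,
      # trained to predict whether the advisor's diagnosis is wrong.
      P_tr = advisor.predict_proba(X_tr)
      err_tr = (advisor.predict(X_tr) != y_tr).astype(float)
      estimator = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000,
                               random_state=0).fit(np.hstack([X_tr, P_tr]), err_tr)

      P_te = advisor.predict_proba(X_te)
      conf = estimator.predict(np.hstack([X_te, P_te]))
      print("mean predicted error:", conf.mean())
      print("actual error rate   :", (advisor.predict(X_te) != y_te).mean())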

  14. Systematic residual ionospheric errors in radio occultation data and a potential way to minimize them

    NASA Astrophysics Data System (ADS)

    Danzer, J.; Scherllin-Pirscher, B.; Foelsche, U.

    2013-08-01

    Radio occultation (RO) sensing is used to probe the Earth's atmosphere in order to obtain information about its physical properties. With a main interest in the parameters of the neutral atmosphere, there is the need to perform a correction of the ionospheric contribution to the bending angle. Since this correction is an approximation to first order, there exists an ionospheric residual, which can be expected to be larger when the ionization is high (day versus night, high versus low solar activity). The ionospheric residual systematically affects the accuracy of the atmospheric parameters at low altitudes; at high altitudes (above 25-30 km) it even is an important error source. In climate applications this could lead to a time-dependent bias which induces wrong trends in atmospheric parameters at high altitudes. The first goal of our work was to study and characterize this systematic residual error. In a second step we developed a simple correction method, based purely on observational data, to reduce this residual for large ensembles of RO profiles. In order to tackle this problem, we analyzed the bending angle bias of CHAMP and COSMIC RO data from 2001-2011. We could observe that the nighttime bending angle bias stays constant over the whole period of 11 yr, while the daytime bias increases from low to high solar activity. As a result, the difference between nighttime and daytime bias increases from about -0.05 μrad to -0.4 μrad. This behavior paves the way to correct the solar cycle dependent bias of daytime RO profiles. In order to test the newly developed correction method we performed a simulation study, which allowed us to separate the influence of the ionosphere and the neutral atmosphere. Also in the simulated data we observed a similar increase in the bias from times of low to high solar activity. In this simulation we performed the climatological ionospheric correction of the bending angle data by using the bending angle bias characteristics of a solar cycle as a correction factor. After the climatological ionospheric correction, the bias of the simulated data improved significantly, not only in the bending angle but also in the retrieved temperature profiles.

  17. Reforming Triple Collocation: Beyond Three Estimates and Separation of Structural/Non-structural Errors

    NASA Astrophysics Data System (ADS)

    Pan, M.; Zhan, W.; Fisher, C. K.; Crow, W. T.; Wood, E. F.

    2014-12-01

    This study extends the popular triple collocation method for error assessment from three source estimates to an arbitrary number of source estimates, i.e., it solves the multiple collocation problem. The error assessment problem is solved through Pythagorean constraints in Hilbert space, which is slightly different from the original inner product solution but easier to extend to multiple collocation cases. The Pythagorean solution is fully equivalent to the original inner product solution for the triple collocation case. Multiple collocation turns out to be an over-constrained problem, and a least squares solution is presented. As the most critical assumption, that of uncorrelated errors, will almost surely fail in multiple collocation problems, we propose to divide the source estimates into structural categories and to treat the structural and non-structural errors separately. Such error separation allows the source estimates to have their structural errors fully correlated within the same structural category, which is much more realistic than the original assumption. A new error assessment procedure is developed which performs the collocation twice, once for each type of error, and then sums up the two types of errors. The new procedure is also fully backward compatible with the original triple collocation. Error assessment experiments are carried out for surface soil moisture data from multiple remote sensing models, land surface models, and in situ measurements.
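
    For orientation, the classical three-estimate case that this work generalizes can be written directly in covariance form: with a shared truth and mutually uncorrelated errors, each error variance follows from the sample covariances. A minimal sketch under those assumptions:

      import numpy as np

      def triple_collocation(x1, x2, x3):
          # Classical covariance-based triple collocation: the three estimates
          # share one truth and have mutually uncorrelated zero-mean errors.
          C = np.cov(np.vstack([x1, x2, x3]))
          v1 = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
          v2 = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
          v3 = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
          return np.sqrt([v1, v2, v3])               # error standard deviations

      rng = np.random.default_rng(6)
      truth = rng.standard_normal(10000)
      x1 = truth + rng.normal(0.0, 0.10, truth.size)     # e.g. in situ
      x2 = truth + rng.normal(0.0, 0.25, truth.size)     # e.g. remote sensing
      x3 = truth + rng.normal(0.0, 0.40, truth.size)     # e.g. model
      print(triple_collocation(x1, x2, x3))              # about [0.10, 0.25, 0.40]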

  18. Estimation of finite population parameters with auxiliary information and response error

    PubMed Central

    González, L. M.; Singer, J. M.; Stanek, E.J.

    2014-01-01

    We use a finite population mixed model that accommodates response error in the survey variable of interest and auxiliary information to obtain optimal estimators of population parameters from data collected via simple random sampling. We illustrate the method with the estimation of a regression coefficient and conduct a simulation study to compare the performance of the empirical version of the proposed estimator (obtained by replacing variance components with estimates) with that of the least squares estimator usually employed in such settings. The results suggest that when the auxiliary variable distribution is skewed, the proposed estimator has a smaller mean squared error. PMID:25089123

  19. Goal-oriented explicit residual-type error estimates in XFEM

    NASA Astrophysics Data System (ADS)

    Rüter, Marcus; Gerasimov, Tymofiy; Stein, Erwin

    2013-08-01

    A goal-oriented a posteriori error estimator is derived to control the error obtained while approximately evaluating a quantity of engineering interest, represented in terms of a given linear or nonlinear functional, using extended finite elements of Q1 type. The same approximation method is used to solve the dual problem as required for the a posteriori error analysis. It is shown that for both problems to be solved numerically the same singular enrichment functions can be used. The goal-oriented error estimator presented can be classified as explicit residual type, i.e. the residuals of the approximations are used directly to compute upper bounds on the error of the quantity of interest. This approach therefore extends the explicit residual-type error estimator for classical energy norm error control as recently presented in Gerasimov et al. (Int J Numer Meth Eng 90:1118-1155, 2012a). Without loss of generality, the a posteriori error estimator is applied to the model problem of linear elastic fracture mechanics. Thus, emphasis is placed on the fracture criterion, here the J-integral, as the chosen quantity of interest. Finally, various illustrative numerical examples are presented where, on the one hand, the error estimator is compared to its finite element counterpart and, on the other hand, improved enrichment functions, as introduced in Gerasimov et al. (2012b), are discussed.

  20. Extent of error in estimating nutrient intakes from food tables versus laboratory estimates of cooked foods.

    PubMed

    Chiplonkar, Shashi Ajit; Agte, Vaishali Vilas

    2007-01-01

    Individual cooked foods (104) and composite meals (92) were examined for agreement between nutritive values estimated by indirect analysis (E) (the Indian national database of nutrient composition of raw foods, adjusted for observed moisture contents of cooked recipes) and by chemical analysis in our laboratory (M). The extent of error incurred in using food table values with moisture correction for estimating macro- as well as micronutrients, at the food level and at the daily intake level, was quantified. Food samples were analyzed for contents of iron, zinc, copper, beta-carotene, riboflavin, thiamine, ascorbic acid, and folic acid, and also for macronutrients, phytate, and dietary fiber. The mean percent difference between E and M was 3.07+/-0.6% for energy, 5.3+/-2.0% for protein, 2.6+/-1.8% for fat, and 5.1+/-0.9% for carbohydrates. The mean percent difference in vitamin contents between E and M ranged from 32% (vitamin C) to 45.5% (beta-carotene), and that for minerals from 5.6% (copper) to 19.8% (zinc). Percent E/M values were computed for the daily nutrient intakes of 264 apparently healthy adults. These were observed to be 108, 112, 127, and 97 for energy, protein, fat, and carbohydrates, respectively. Percent E/M values for intakes of copper (102) and beta-carotene (114) were close to 100, but they were very high in the case of zinc (186), iron (202), and vitamins C (170), thiamine (190), riboflavin (181), and folic acid (165). Estimates based on food composition table values with moisture correction place macronutrients in cooked foods within +/-5%, whereas at the daily intake level the error increased up to 27%. The lack of good agreement in the case of several micronutrients indicates that the use of Indian food tables for micronutrient intakes would be inappropriate. PMID:17468077

  1. UNDERSTANDING SYSTEMATIC MEASUREMENT ERROR IN THERMAL-OPTICAL ANALYSIS FOR PM BLACK CARBON USING RESPONSE SURFACES AND SURFACE CONFIDENCE INTERVALS

    EPA Science Inventory

    Results from a NIST-EPA Interagency Agreement on Understanding Systematic Measurement Error in Thermal-Optical Analysis for PM Black Carbon Using Response Surfaces and Surface Confidence Intervals will be presented at the American Association for Aerosol Research (AAAR) 24th Annu...

  2. Evaluation of the CORDEX-Africa multi-RCM hindcast: systematic model errors

    NASA Astrophysics Data System (ADS)

    Kim, J.; Waliser, Duane E.; Mattmann, Chris A.; Goodale, Cameron E.; Hart, Andrew F.; Zimdars, Paul A.; Crichton, Daniel J.; Jones, Colin; Nikulin, Grigory; Hewitson, Bruce; Jack, Chris; Lennard, Christopher; Favre, Alice

    2014-03-01

    Monthly-mean precipitation, mean (TAVG), maximum (TMAX) and minimum (TMIN) surface air temperatures, and cloudiness from the CORDEX-Africa regional climate model (RCM) hindcast experiment are evaluated for model skill and systematic biases. All RCMs simulate basic climatological features of these variables reasonably, but systematic biases also occur across these models. All RCMs show higher fidelity in simulating precipitation for the west part of Africa than for the east part, and for the tropics than for northern Sahara. Interannual variation in the wet season rainfall is better simulated for the western Sahel than for the Ethiopian Highlands. RCM skill is higher for TAVG and TMAX than for TMIN, and regionally, for the subtropics than for the tropics. RCM skill in simulating cloudiness is generally lower than for precipitation or temperatures. For all variables, multi-model ensemble (ENS) generally outperforms individual models included in ENS. An overarching conclusion in this study is that some model biases vary systematically for regions, variables, and metrics, posing difficulties in defining a single representative index to measure model fidelity, especially for constructing ENS. This is an important concern in climate change impact assessment studies because most assessment models are run for specific regions/sectors with forcing data derived from model outputs. Thus, model evaluation and ENS construction must be performed separately for regions, variables, and metrics as required by specific analysis and/or assessments. Evaluations using multiple reference datasets reveal that cross-examination, quality control, and uncertainty estimates of reference data are crucial in model evaluations.
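
    Per variable and region, such hindcast evaluations reduce to a handful of metrics against a reference dataset, and a toy calculation reproduces the familiar result that the multi-model ensemble mean outscores most individual members on random error. All data below are synthetic stand-ins, not CORDEX output.

      import numpy as np

      def skill(sim, ref):
          # Mean bias, centred RMSE and correlation against a reference
          bias = np.mean(sim - ref)
          rmse = np.sqrt(np.mean((sim - ref - bias) ** 2))
          corr = np.corrcoef(sim, ref)[0, 1]
          return bias, rmse, corr

      rng = np.random.default_rng(7)
      ref = rng.gamma(2.0, 1.5, 240)                 # 20 years of monthly values
      models = [ref + rng.normal(m, s, ref.size)     # toy RCM outputs
                for m, s in [(0.3, 1.0), (-0.5, 1.2), (0.1, 0.8)]]
      for i, sim in enumerate(models):
          print(f"RCM {i}: bias/RMSE/corr =", skill(sim, ref))
      ens = np.mean(models, axis=0)                  # multi-model ensemble mean
      print("ENS  : bias/RMSE/corr =", skill(ens, ref))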

  3. DETECTABILITY AND ERROR ESTIMATION IN ORBITAL FITS OF RESONANT EXTRASOLAR PLANETS

    SciTech Connect

    Giuppone, C. A.; Beauge, C.; Tadeu dos Santos, M.; Ferraz-Mello, S.; Michtchenko, T. A.

    2009-07-10

    We estimate the conditions for detectability of two planets in a 2/1 mean-motion resonance from radial velocity data, as a function of their masses, number of observations and the signal-to-noise ratio. Even for a data set of the order of 100 observations and standard deviations of the order of a few meters per second, we find that Jovian-size resonant planets are difficult to detect if the masses of the planets differ by a factor larger than ~4. This is consistent with the present population of real exosystems in the 2/1 commensurability, most of which have resonant pairs with similar minimum masses, and could indicate that many other resonant systems exist, but are currently beyond the detectability limit. Furthermore, we analyze the error distribution in masses and orbital elements of orbital fits from synthetic data sets for resonant planets in the 2/1 commensurability. For various mass ratios and number of data points we find that the eccentricity of the outer planet is systematically overestimated, although the inner planet's eccentricity suffers a much smaller effect. If the initial conditions correspond to small-amplitude oscillations around stable apsidal corotation resonances, the amplitudes estimated from the orbital fits are biased toward larger amplitudes, in accordance with results found in real resonant extrasolar systems.

  4. Estimating Precipitation Errors Using Spaceborne Surface Soil Moisture Retrievals

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Limitations in the availability of ground-based rain gauge data currently hamper our ability to quantify errors in global precipitation products over data-poor areas of the world. Over land, these limitations may be eased by approaches based on interpreting the degree of dynamic consistency existin...

  5. Error Estimating Codes for Insertion and Deletion

    E-print Network

    Huang, Jiwei; Lall, Ashwin

    …probability than bit flipping errors. Our idEEC design can build upon any existing EEC scheme. The basic idea … in the packet flipped during the transmission (i.e., the BER). Such codes are in general stronger … and a bit …

  6. Sliding mode output feedback control based on tracking error observer with disturbance estimator.

    PubMed

    Xiao, Lingfei; Zhu, Yue

    2014-07-01

    For a class of systems subject to disturbances, an original output feedback sliding mode control method is presented, based on a novel tracking error observer with a disturbance estimator. The mathematical models of the systems are not required to be highly accurate, and the disturbances can be vanishing or nonvanishing, while the bounds of the disturbances are unknown. By constructing a differential sliding surface and employing a reaching law approach, a sliding mode controller is obtained. On the basis of an extended disturbance estimator, a novel tracking error observer is produced. By using the observation of the tracking error and the estimate of the disturbance, the sliding mode controller is implementable. It is proved that the disturbance estimation error and the tracking observation error are bounded, the sliding surface is reachable, and the closed-loop system is robustly stable. Simulations on a servomotor positioning system and a five-degree-of-freedom active magnetic bearings system verify the effectiveness of the proposed method. PMID:24795033
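
    The interplay between the switching term and the disturbance estimate can be seen on a toy first-order plant. Everything below (plant, gains, reference, disturbance) is an illustrative assumption rather than the paper's observer design, and the simulation reads the plant derivative directly, which a real implementation would reconstruct from measurements.

      import numpy as np

      # Toy first-order plant x' = u + d with an unknown disturbance d(t)
      dt = 1e-3
      t = np.arange(0.0, 5.0, dt)
      r, r_dot = np.sin(t), np.cos(t)                # reference trajectory
      d = 0.8 * np.sign(np.sin(2.0 * t)) + 0.3       # nonvanishing disturbance

      x, d_hat = 0.5, 0.0
      k, L = 2.0, 50.0                               # switching gain, estimator gain
      e = x - r[0]
      for i in range(t.size):
          e = x - r[i]                               # sliding variable
          u = r_dot[i] - d_hat - k * np.tanh(e / 0.01)   # smoothed switching term
          x_dot = u + d[i]                           # plant dynamics
          # Disturbance estimator: first-order tracking of d; a real system
          # would reconstruct x_dot from measurements, not read it directly.
          d_hat += L * (x_dot - u - d_hat) * dt
          x += x_dot * dt
      print("final tracking error:", abs(e))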

  7. An online model correction method based on an inverse problem: Part II—systematic model error correction

    NASA Astrophysics Data System (ADS)

    Xue, Haile; Shen, Xueshun; Chou, Jifan

    2015-11-01

    An online systematic error correction is presented and examined as a technique to improve the accuracy of real-time numerical weather prediction, based on a dataset of model errors (MEs) over past intervals. Given the analyses, the ME in each interval (6 h) between two analyses can be obtained iteratively by introducing an unknown tendency term into the prediction equation, as shown in Part I of this two-paper series. In this part, after analyzing the 5-year (2001-2005) GRAPES-GFS (Global Forecast System of the Global and Regional Assimilation and Prediction System) error patterns and their evolution, a systematic model error correction is constructed with a least-squares approach using the past MEs. To test the correction, we applied the approach in GRAPES-GFS for July 2009 and January 2010. The datasets for the initial conditions and SST used in this study were based on NCEP (National Centers for Environmental Prediction) FNL (final) data. The results indicated that the systematically underestimated equator-to-pole geopotential gradient and westerly winds of GRAPES-GFS in the Northern Hemisphere were largely corrected, and the biases of temperature and wind in the tropics were strongly reduced. Therefore, the correction results in a more skillful forecast with lower mean bias and root-mean-square error and higher anomaly correlation coefficient.

  8. Space-Time Error Representation and Estimation in Navier-Stokes Calculations

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2006-01-01

    The mathematical framework for a-posteriori error estimation of functionals elucidated by Eriksson et al. [7] and Becker and Rannacher [3] is revisited in a space-time context. Using these theories, a hierarchy of exact and approximate error representation formulas is presented for use in error estimation and mesh adaptivity. Numerical space-time results for simple model problems as well as compressible Navier-Stokes flow at Re = 300 over a 2D circular cylinder are then presented to demonstrate elements of the error representation theory for time-dependent problems.

  9. Accounting for systematic errors in bioluminescence imaging to improve quantitative accuracy

    NASA Astrophysics Data System (ADS)

    Taylor, Shelley L.; Perry, Tracey A.; Styles, Iain B.; Cobbold, Mark; Dehghani, Hamid

    2015-07-01

    Bioluminescence imaging (BLI) is a widely used pre-clinical imaging technique, but there are a number of limitations to its quantitative accuracy. This work uses an animal model to demonstrate some significant limitations of BLI and presents processing methods and algorithms which overcome these limitations, increasing the quantitative accuracy of the technique. The position of the imaging subject and source depth are both shown to affect the measured luminescence intensity. Free Space Modelling is used to eliminate the systematic error due to the camera/subject geometry, removing the dependence of luminescence intensity on animal position. Bioluminescence tomography (BLT) is then used to provide additional information about the depth and intensity of the source. A substantial limitation in the number of sources identified using BLI is also presented. It is shown that when a given source is at a significant depth, it can appear as multiple sources when imaged using BLI, while the use of BLT recovers the true number of sources present.

  10. Comparison of weak lensing by NFW and Einasto halos and systematic errors

    E-print Network

    Sereno, Mauro; Moscardini, Lauro

    2015-01-01

    Recent $N$-body simulations have shown that Einasto radial profiles provide the most accurate description of dark matter halos. Predictions based on the traditional NFW functional form may fail to describe the structural properties of cosmic objects at the percent level required by precision cosmology. We computed the systematic errors expected for weak lensing analyses of clusters of galaxies if one wrongly models the lens properties. Even though the NFW fits of observed tangential shear profiles can be excellent, virial masses and concentrations of very massive halos ($\gtrsim 10^{15}M_\odot/h$) can be over- and underestimated by $\sim 10$ per cent, respectively. Misfitting effects also steepen the observed mass-concentration relation, in a way similar to that seen in multiwavelength observations of galaxy groups and clusters. Einasto lenses can be distinguished from NFW halos either with deep observations of very massive structures ($\gtrsim 10^{15}M_\odot/h$) or by stacking the shear profiles of thousands of gro...
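
    For concreteness, the two density profiles compared above differ as follows; the normalizations here are chosen only so that both equal unity at the scale radius.

      import numpy as np

      def rho_nfw(r, rho_s, r_s=1.0):
          # NFW: rho = rho_s / [(r/r_s) (1 + r/r_s)^2], fixed -1 inner slope
          x = r / r_s
          return rho_s / (x * (1.0 + x) ** 2)

      def rho_einasto(r, rho_s, r_s=1.0, alpha=0.18):
          # Einasto: the logarithmic slope runs continuously with radius
          return rho_s * np.exp((-2.0 / alpha) * ((r / r_s) ** alpha - 1.0))

      r = np.logspace(-2, 1, 200)          # radii in units of r_s
      nfw = rho_nfw(r, 4.0)                # both normalized to 1 at r = r_s
      ein = rho_einasto(r, 1.0)            # (illustrative choice only)
      ratio = ein / nfw
      print("Einasto/NFW density ratio spans", ratio.min(), "to", ratio.max())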

  11. Systematic Energy Errors and the Tendency toward Canonical Equilibrium in Atmospheric Circulation Models.

    NASA Astrophysics Data System (ADS)

    Frederiksen, Jorgen S.; Dix, Martin R.; Kepert, Steven M.

    1996-03-01

    Systematic kinetic energy errors are examined in barotropic and multilevel general circulation models. The dependence of energy spectra on resolution and dissipation and, in addition for the barotropic model, on topography and the beta effect, is studied. We propose explanations for the behavior of simulated kinetic energy spectra by relating them to canonical equilibrium spectra characterized by entropy maximization. Equilibrium spectra at increased resolution tend to have increased large-scale kinetic energy and a drop in amplitude at intermediate and small scales. This qualitative behavior may also be found in forced and/or dissipative simulations if the forcing and dissipation operators acting on the common scales are very similar at different resolutions. An explanation for the tail 'wagging the dog' effect is presented. This effect, where scale-selective dissipation operators cause a drop in the tail of the energy spectra and, surprisingly, also an increase in the large-scale energy, is found to occur in both barotropic and multilevel general circulation models. It is shown to rely on the dissipation operators dissipating enstrophy while leaving the total kinetic energy approximately conserved. A new (short time) canonical equilibrium model and explanation of zonalization due to the beta effect is presented; the meridionally elongated large-scale waves are regarded as adiabatic invariants, while the zonal flow and other eddies interact and equilibrate on a short timescale.

  12. Reducing systematic errors in time-frequency resolved mode number analysis

    NASA Astrophysics Data System (ADS)

    Horváth, L.; Poloskei, P. Zs; Papp, G.; Maraschek, M.; Schuhbeck, K. H.; Pokol, G. I.; the EUROfusion MST1 Team; the ASDEX Upgrade Team

    2015-12-01

    The present paper describes the effect of magnetic pick-up coil transfer functions on mode number analysis in magnetically confined fusion plasmas. Magnetic probes mounted inside the vacuum chamber are widely used to characterize the mode structure of magnetohydrodynamic modes, as, due to their relative simplicity and compact nature, several coils can be distributed over the vessel. Phase differences between the transfer functions of different magnetic pick-up coils lead to systematic errors in time- and frequency-resolved mode number analysis. This paper presents the first in situ, end-to-end calibration of a magnetic pick-up coil system, which was carried out by using an in-vessel driving coil on ASDEX Upgrade. The effect of the phase differences in the pick-up coil transfer functions is most significant in the 50-250 kHz frequency range, where the relative phase shift between the different probes can be up to 1 radian (~60°). By applying a correction based on the transfer functions we found smaller residuals of mode number fitting in the considered discharges. In most cases an order of magnitude improvement was observed in the residuals of the mode number fits, which could open the way to investigating weaker electromagnetic oscillations with even higher mode numbers.

  13. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis.

    PubMed

    Casas, Francisco J; Ortiz, David; Villa, Enrique; Cano, Juan L; Cagigas, Jaime; Pérez, Ana R; Aja, Beatriz; Terán, J Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo

    2015-01-01

    This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process. PMID:26251906

  15. Toward a Framework for Systematic Error Modeling of NASA Spaceborne Radar with NOAA/NSSL Ground Radar-Based National Mosaic QPE

    NASA Technical Reports Server (NTRS)

    Kirstetter, Pierre-Emmanuel; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Schwaller, M.; Petersen, W.; Amitai, E.

    2011-01-01

    Characterization of the error associated with satellite rainfall estimates is a necessary component of deterministic and probabilistic frameworks involving spaceborne passive and active microwave measurements, for applications ranging from water budget studies to forecasting natural hazards related to extreme rainfall events. We focus here on the error structure of NASA's Tropical Rainfall Measurement Mission (TRMM) Precipitation Radar (PR) quantitative precipitation estimation (QPE) at the ground. The problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements using the NOAA/NSSL ground radar-based National Mosaic and QPE system (NMQ/Q2). A preliminary investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) using a three-month data sample in the southern part of the US. The primary contribution of this study is the presentation of the detailed steps required to derive a trustworthy reference rainfall dataset from Q2 at the PR pixel resolution. It relies on a bias correction and a radar quality index, both of which provide a basis to filter out the less trustworthy Q2 values. Several aspects of PR errors are revealed and quantified, including sensitivity to the processing steps with the reference rainfall, comparisons of rainfall detectability and rainfall rate distributions, spatial representativeness of error, and separation of systematic biases and random errors. The methodology and framework developed herein apply more generally to rainfall rate estimates from other sensors onboard low-earth-orbiting satellites, such as microwave imagers and dual-wavelength radars like those of the Global Precipitation Measurement (GPM) mission.
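
    A minimal sketch of the two reference-building steps named above (quality-index filtering of Q2 and a mean-field bias correction) is given below; the threshold, the multiplicative bias form, and all variable names are illustrative assumptions rather than the study's actual procedure.

        import numpy as np

        def reference_rainfall(q2_rain, quality_index, gauge_mean, q2_mean, q_min=0.8):
            """Filter Q2 rain rates by a radar quality index, then apply a
            multiplicative mean-field bias correction anchored to gauges."""
            q2 = np.where(np.asarray(quality_index) >= q_min,
                          np.asarray(q2_rain, dtype=float), np.nan)
            return q2 * (gauge_mean / q2_mean)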

  16. SU-E-T-550: Range Effects in Proton Therapy Caused by Systematic Errors in the Stoichiometric Calibration

    SciTech Connect

    Doolan, P; Dias, M; Collins Fekete, C; Seco, J

    2014-06-01

    Purpose: The procedure for proton treatment planning involves the conversion of the patient's X-ray CT from Hounsfield units into relative stopping powers (RSP), using a stoichiometric calibration curve (Schneider 1996). In clinical practice a 3.5% margin is added to account for the range uncertainty introduced by this process and other errors. RSPs for real tissues are calculated using composition data and the Bethe-Bloch formula (ICRU 1993). The purpose of this work is to investigate the impact that systematic errors in the stoichiometric calibration have on the proton range. Methods: Seven tissue inserts of the Gammex 467 phantom were imaged using our CT scanner. Their known chemical compositions (Watanabe 1999) were then used to calculate the theoretical RSPs, using the same formula as would be used for human tissues in the stoichiometric procedure. The actual RSPs of these inserts were measured using a Bragg peak shift measurement in the proton beam at our institution. Results: The theoretical calculation of the RSP was lower than the measured RSP values, by a mean/max error of -1.5%/-3.6%. For all seven inserts the theoretical approach underestimated the RSP, with errors variable across the range of Hounsfield units. Systematic errors for lung (average of two inserts), adipose, and cortical bone were -3.0%, -2.1%, and -0.5%, respectively. Conclusion: There is a systematic underestimation caused by the theoretical calculation of RSP, a crucial step in the stoichiometric calibration procedure. As such, we propose that proton calibration curves should be based on measured RSPs. Investigations will be made to see if the same systematic errors exist for biological tissues. The impact of these differences on the range of proton beams, for phantoms and patient scenarios, will be investigated. This project was funded equally by the Engineering and Physical Sciences Research Council (UK) and Ion Beam Applications (Louvain-La-Neuve, Belgium).
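
    The theoretical RSP in the stoichiometric procedure is the ratio of Bethe stopping terms for the medium and for water. The sketch below implements that ratio under stated assumptions (a 200 MeV beam energy, a water I-value of 75 eV, and made-up tissue parameters); it is illustrative, not the authors' implementation.

        import math

        def rsp(rel_electron_density, I_medium_eV, I_water_eV=75.0, energy_MeV=200.0):
            """Relative stopping power from the ratio of Bethe log terms:
            L(I) = ln(2*me*c^2*beta^2*gamma^2 / I) - beta^2."""
            mp, me_eV = 938.272, 0.511e6  # proton rest energy (MeV), electron (eV)
            gamma = 1.0 + energy_MeV / mp
            beta2 = 1.0 - 1.0 / gamma**2
            bethe = lambda I: math.log(2.0 * me_eV * beta2 / (I * (1.0 - beta2))) - beta2
            return rel_electron_density * bethe(I_medium_eV) / bethe(I_water_eV)

        print(rsp(1.04, I_medium_eV=72.0))  # soft-tissue-like example, ~1.04-1.05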

  17. Estimating extreme flood events - assumptions, uncertainty and error

    NASA Astrophysics Data System (ADS)

    Franks, S. W.; White, C. J.; Gensen, M.

    2015-06-01

    Hydrological extremes are amongst the most devastating forms of natural disasters, both in terms of lives lost and socio-economic impacts. There is consequently an imperative to robustly estimate the frequency and magnitude of hydrological extremes. Traditionally, engineers have employed purely statistical approaches to the estimation of flood risk. For example, for an observed hydrological timeseries, each annual maximum flood is extracted and a frequency distribution is fit to these data. The fitted distribution is then extrapolated to provide an estimate of the required design risk (i.e. the 1% Annual Exceedance Probability - AEP). Such traditional approaches are overly simplistic in that risk is implicitly assumed to be static; in other words, climatological processes are assumed to be randomly distributed in time. In this study, flood risk estimates are evaluated with regard to traditional statistical approaches as well as Pacific Decadal Oscillation (PDO)/El Niño-Southern Oscillation (ENSO) conditional estimates for a flood-prone catchment in eastern Australia. A paleo-reconstruction of pre-instrumental PDO/ENSO occurrence is then employed to estimate the uncertainty associated with the estimation of the 1% AEP flood. The results indicate a significant underestimation of the uncertainty associated with extreme flood events when employing traditional engineering estimates.
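
    The traditional calculation criticized here is easy to reproduce: fit a distribution to annual maxima and extrapolate to the 1% AEP event. The sketch below does this with a GEV fit on a synthetic record (series and parameters are made up), which is exactly the static-risk estimate whose uncertainty the study argues is understated.

        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(0)
        annual_max = genextreme.rvs(c=-0.1, loc=300.0, scale=80.0, size=60,
                                    random_state=rng)  # synthetic annual maxima (m^3/s)

        # Fit a GEV and extrapolate to the 1% AEP (100-year) flood
        c, loc, scale = genextreme.fit(annual_max)
        q100 = genextreme.ppf(1.0 - 0.01, c, loc=loc, scale=scale)
        print(f"Estimated 1% AEP flood: {q100:.0f} m^3/s")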

  18. State and model error estimation for distributed parameter systems. [in large space structure control

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1979-01-01

    In-flight estimation of large structure model errors in order to detect inevitable deficiencies in large structure controller/estimator models is discussed. Such an estimation process is particularly applicable in the area of shape control system design required to maintain a prescribed static structural shape and, in addition, suppress dynamic disturbances due to the vehicle vibrational modes. The paper outlines a solution to the problem of static shape estimation where the vehicle shape must be reconstructed from a set of measurements discretely located throughout the structure. The estimation process is based on the principle of least squares, which inherently contains the definition and explicit computation of model error estimates that are optimal in some sense. Consequently, a solution is provided for the problem of estimation of static model errors (e.g., external loads). A generalized formulation applicable to distributed parameter systems is first worked out and then applied to a one-dimensional beam-like structural configuration.

  19. A Design-Adaptive Local Polynomial Estimator for the Errors-in-Variables Problem.

    PubMed

    Delaigle, Aurore; Fan, Jianqing; Carroll, Raymond J

    2009-03-01

    Local polynomial estimators are popular techniques for nonparametric regression estimation and have received great attention in the literature. Their simplest version, the local constant estimator, can be easily extended to the errors-in-variables context by exploiting its similarity with the deconvolution kernel density estimator. The generalization of the higher-order versions of the estimator, however, is not straightforward and has remained an open problem for the last 15 years. We propose an innovative local polynomial estimator of any order in the errors-in-variables context, derive its design-adaptive asymptotic properties and study its finite sample performance on simulated examples. We provide not only a solution to a long-standing open problem, but also methodological contributions to errors-in-variables regression, including local polynomial estimation of derivative functions. PMID:20351800

  20. A-Posteriori Error Estimation for Hyperbolic Conservation Laws with Constraint

    NASA Technical Reports Server (NTRS)

    Barth, Timothy

    2004-01-01

    This lecture considers a-posteriori error estimates for the numerical solution of conservation laws with time-invariant constraints, such as those arising in magnetohydrodynamics (MHD) and gravitational physics. Using standard duality arguments, a-posteriori error estimates for the discontinuous Galerkin finite element method are then presented for MHD with the solenoidal constraint. From these estimates, a procedure for adaptive discretization is outlined. A taxonomy of Green's functions for the linearized MHD operator is given which characterizes the domain of dependence for pointwise errors. The extension to other constrained systems, such as the Einstein equations of gravitational physics, is then considered. Finally, future directions and open problems are discussed.

  1. Error estimation and adaptive mesh refinement for parallel analysis of shell structures

    NASA Technical Reports Server (NTRS)

    Keating, Scott C.; Felippa, Carlos A.; Park, K. C.

    1994-01-01

    The formulation and application of element-level, element-independent error indicators is investigated. This research culminates in the development of an error indicator formulation which is derived based on the projection of element deformation onto the intrinsic element displacement modes. The qualifier 'element-level' means that no information from adjacent elements is used for error estimation. This property is ideally suited for obtaining error values and driving adaptive mesh refinements on parallel computers, where access to neighboring elements residing on different processors may incur significant overhead. In addition, such estimators are insensitive to the presence of physical interfaces and junctures. An error indicator qualifies as 'element-independent' when only visible quantities such as element stiffness and nodal displacements are used to quantify error. Error evaluation at the element level and element independence for the error indicator are highly desired properties for computing error in production-level finite element codes. Four element-level error indicators have been constructed. Two of the indicators are based on a variational formulation of the element stiffness and are element-dependent; their derivations are retained for developmental purposes. The second two indicators mimic and exceed the first two in performance but require no special formulation of the element stiffness, and they can drive adaptive mesh refinement, which we demonstrate for two-dimensional plane-stress problems. The parallelization of substructures and adaptive mesh refinement is discussed, and the final error indicator is demonstrated using two-dimensional plane-stress and three-dimensional shell problems.

  2. Efficient Semiparametric Estimators for Biological, Genetic, and Measurement Error Applications 

    E-print Network

    Garcia, Tanya

    2012-10-19

    of the trait outcomes. Current methods for such data include a Cox proportional hazards model which is susceptible to model misspecification, and two types of nonparametric maximum likelihood estimators which are either inefficient or inconsistent. Using...

  3. How well can we estimate error variance of satellite precipitation data around the world?

    NASA Astrophysics Data System (ADS)

    Gebregiorgis, Abebe S.; Hossain, Faisal

    2015-03-01

    Providing error information associated with existing satellite precipitation estimates is crucial to advancing applications in hydrologic modeling. In this study, we present a method of estimating the squared-difference prediction of satellite precipitation (hereafter used synonymously with "error variance") using a regression model for three satellite precipitation products (3B42RT, CMORPH, and PERSIANN-CCS), based on easily available geophysical features and the satellite precipitation rate. Building on a suite of recent studies that have developed error variance models, the goal of this work is to explore how well the method works around the world in diverse geophysical settings. Topography, climate, and seasons are considered as the governing factors to segregate the satellite precipitation uncertainty and fit a nonlinear regression equation as a function of satellite precipitation rate. The error variance models were tested over the USA, Asia, the Middle East, and the Mediterranean region. A rain gauge-based precipitation product was used to validate the error variance of the satellite precipitation products. The regression approach yielded good performance skill with high correlation between simulated and observed error variances. The correlation ranged from 0.46 to 0.98 during the independent validation period. In most cases (~85% of the scenarios), the correlation was higher than 0.72. The error variance models also captured the spatial distribution of observed error variance adequately for all study regions while producing unbiased residual error. The approach is promising for regions where missed precipitation is not a common occurrence in satellite precipitation estimation. Our study attests that transferability of model estimators (which help to estimate the error variance) from one region to another is practically possible by leveraging the similarity in geophysical features. Therefore, the quantitative picture of satellite precipitation error over ungauged regions can be discerned even in the absence of ground truth data.
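
    As a toy version of the regression step, the sketch below fits a power-law error-variance model to satellite rain rate; in the study, separate fits would be made for each topography/climate/season class, and the functional form and all values here are assumptions.

        import numpy as np
        from scipy.optimize import curve_fit

        def ev_model(rain_rate, a, b):
            # error variance as a power law of the satellite rain rate
            return a * rain_rate**b

        rng = np.random.default_rng(2)
        R = rng.uniform(0.5, 30.0, 300)  # synthetic satellite rain rates (mm/h)
        sigma2 = ev_model(R, 0.8, 1.4) * rng.lognormal(0.0, 0.3, R.size)
        (a_hat, b_hat), _ = curve_fit(ev_model, R, sigma2, p0=(1.0, 1.0))
        print(a_hat, b_hat)  # recovered model coefficients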

  4. Multiclass Bayes error estimation by a feature space sampling technique

    NASA Technical Reports Server (NTRS)

    Mobasseri, B. G.; Mcgillem, C. D.

    1979-01-01

    A general Gaussian M-class N-feature classification problem is defined. An algorithm is developed that requires the class statistics as its only input and computes the minimum probability of error through use of a combined analytical and numerical integration over a sequence of simplifying transformations of the feature space. The results are compared with previously reported results obtained by conventional techniques applied to a 2-class 4-feature discrimination problem, and with 4-class 4-feature multispectral scanner Landsat data classified by training and testing of the available data.
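
    The paper's quantity of interest can also be estimated by plain Monte Carlo, which makes a useful cross-check: draw from the Gaussian mixture and average one minus the maximum posterior. This is a simpler sampling stand-in for the combined analytical/numerical integration described above, not the paper's algorithm; all parameter values are made up.

        import numpy as np
        from scipy.stats import multivariate_normal

        def bayes_error_mc(means, covs, priors, n=100_000, seed=0):
            """Monte Carlo estimate of the minimum probability of error
            for Gaussian classes with known statistics."""
            rng = np.random.default_rng(seed)
            counts = rng.multinomial(n, np.asarray(priors, dtype=float))
            samples = np.vstack([rng.multivariate_normal(mu, cov, size=k)
                                 for mu, cov, k in zip(means, covs, counts)])
            # class-conditional densities times priors, then normalize
            dens = np.column_stack([p * multivariate_normal(mu, cov).pdf(samples)
                                    for p, mu, cov in zip(priors, means, covs)])
            post = dens / dens.sum(axis=1, keepdims=True)
            # Bayes risk = expected probability that the MAP decision is wrong
            return float(np.mean(1.0 - post.max(axis=1)))

        err = bayes_error_mc([np.zeros(2), np.array([2.0, 0.0])],
                             [np.eye(2), np.eye(2)], [0.5, 0.5])
        print(f"Estimated minimum probability of error: {err:.3f}")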

  5. A Generalizability Theory Approach to Standard Error Estimates for Bookmark Standard Settings

    ERIC Educational Resources Information Center

    Lee, Guemin; Lewis, Daniel M.

    2008-01-01

    The bookmark standard-setting procedure is an item response theory-based method that is widely implemented in state testing programs. This study estimates standard errors for cut scores resulting from bookmark standard settings under a generalizability theory model and investigates the effects of different universes of generalization and error

  6. A Posteriori Error Estimates with Post-Processing for Nonconforming Finite Elements

    E-print Network

    Schieweck, Friedhelm

    A Posteriori Error Estimates with Post-Processing for Nonconforming Finite Elements. F. Schieweck, Magdeburg, Germany. Abstract: For a nonconforming finite element approximation of an elliptic model problem ... "post-processing error" between the original nonconforming finite element solution and an easily computable conforming

  7. Goal-oriented local a posteriori error estimator for H(div)

    E-print Network

    2011-12-15

    Dec 15, 2011 ... Key words. finite element methods, a posteriori error estimates, ... practical problems, phenomenon of interest is much smaller than the typical problem .... tensor A is straightforward, where the matrix A is symmetric, .... used to isolate a local region of interest, the pollution error from outside the region is.

  8. Effects of structural error on the estimates of parameters of dynamical systems

    NASA Technical Reports Server (NTRS)

    Hadaegh, F. Y.; Bekey, G. A.

    1986-01-01

    In this paper, the notion of 'near-equivalence in probability' is introduced for identifying a system in the presence of several error sources. Following some basic definitions, necessary and sufficient conditions for the identifiability of parameters are given. The effects of structural error on the parameter estimates for both the deterministic and stochastic cases are considered.

  9. Estimation of Error Components in Cohort Studies: A Cross-Cohort Analysis of Dutch Mathematics Achievement

    ERIC Educational Resources Information Center

    Keuning, Jos; Hemker, Bas

    2014-01-01

    The data collection of a cohort study requires making many decisions. Each decision may introduce error in the statistical analyses conducted later on. In the present study, a procedure was developed for estimation of the error made due to the composition of the sample, the item selection procedure, and the test equating process. The math results…

  10. Effect of geocoding errors on traffic-related air pollutant exposure and concentration estimates

    EPA Science Inventory

    Exposure to traffic-related air pollutants is highest very near roads, and thus exposure estimates are sensitive to positional errors. This study evaluates positional and PM2.5 concentration errors that result from the use of automated geocoding methods and from linearized approx...

  11. The Use of Neural Networks in Identifying Error Sources in Satellite-Derived Tropical SST Estimates

    PubMed Central

    Lee, Yung-Hsiang; Ho, Chung-Ru; Su, Feng-Chun; Kuo, Nan-Jung; Cheng, Yu-Hsin

    2011-01-01

    A neural network model of data mining is used to identify error sources in satellite-derived tropical sea surface temperature (SST) estimates from thermal infrared sensors onboard the Geostationary Operational Environmental Satellite (GOES). By using the Back Propagation Network (BPN) algorithm, it is found that air temperature, relative humidity, and wind speed variation are the major factors causing the errors of GOES SST products in the tropical Pacific. The accuracy of SST estimates is also improved by the model. The root mean square error (RMSE) for the daily SST estimate is reduced from 0.58 K to 0.38 K and the mean absolute percentage error (MAPE) is 1.03%. For the hourly mean SST estimate, the RMSE is also reduced from 0.66 K to 0.44 K and the MAPE is 1.3%. PMID:22164030

  12. Efficient Small Area Estimation in the Presence of Measurement Error in Covariates 

    E-print Network

    Singh, Trijya

    2012-10-19

    areas. There is a variety of methods available for bias correction of estimates in the presence of measurement error. We applied the simulation extrapolation (SIMEX), ordinary corrected scores and Monte Carlo corrected scores methods of bias correction...

  13. Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.

    ERIC Educational Resources Information Center

    Olejnik, Stephen F.; Algina, James

    1987-01-01

    Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegal-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

  14. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    E-print Network

    Locatelli, R.

    A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model ...

  15. A Genetic Algorithm for Generating Radar Transmit Codes to Minimize the Target Profile Estimation Error

    E-print Network

    Smith-Matrinez, Brien; Agah, Arvin; Stiles, James M.

    2013-01-01

    This article presents the design and development of a genetic algorithm (GA) to generate long-range transmit codes with low autocorrelation side lobes for radar to minimize target profile estimation error. The GA described in this work has a...

  16. Improved Error Estimation of Dynamic Finite Element Methods for Second Order Parabolic Equations

    E-print Network

    Yang, Daoqi

    Improved Error Estimation of Dynamic Finite Element Methods for Second Order Parabolic Equations ... problems. Standard, characteristic, and mixed finite element methods with dynamic function spaces ... the finite element method requires capabilities for efficient, dynamic, and self-adaptive local grid

  17. Estimated errors in retrievals of ocean parameters from SSMIS

    NASA Astrophysics Data System (ADS)

    Mears, Carl A.; Smith, Deborah K.; Wentz, Frank J.

    2015-06-01

    Measurements made by microwave imaging radiometers can be used to retrieve several environmental parameters over the world's oceans. In this work, we calculate the uncertainty in retrievals obtained from the Special Sensor Microwave Imager Sounder (SSMIS) instrument caused by uncertainty in the input parameters to the retrieval algorithm. This work applies to the version 7 retrievals of surface wind speed, total column water vapor, total column cloud liquid water, and rain rate produced by Remote Sensing Systems. Our numerical approach allows us to calculate an estimated input-induced uncertainty for every valid retrieval during the SSMIS mission. Our uncertainty estimates are consistent with the differences observed between SSMIS wind speed and vapor measurements made by SSMIS on the F16 and F17 satellites, supporting their accuracy. The estimates do not explain the larger differences between the SSMIS measurements of wind speed and vapor and other sources of these data, consistent with the influence of more sources of uncertainty.
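
    A generic numerical sketch of this kind of input-induced uncertainty calculation is shown below: perturb each input, form a finite-difference derivative, and sum the contributions in quadrature. The toy retrieval function and all values are assumptions, not the Remote Sensing Systems algorithm.

        import numpy as np

        def input_induced_sigma(retrieval, x0, sigmas, eps=1e-3):
            """sigma_out^2 = sum_i (df/dx_i * sigma_i)^2 via finite differences."""
            x0 = np.asarray(x0, dtype=float)
            f0 = retrieval(x0)
            var = 0.0
            for i, s in enumerate(sigmas):
                dx = np.zeros_like(x0)
                dx[i] = eps * max(abs(x0[i]), 1.0)
                var += ((retrieval(x0 + dx) - f0) / dx[i] * s) ** 2
            return np.sqrt(var)

        wind = lambda tb: 0.3 * tb[0] - 0.1 * tb[1]  # made-up wind-speed retrieval
        print(input_induced_sigma(wind, x0=[180.0, 150.0], sigmas=[0.5, 0.5]))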

  18. Solution-verified reliability analysis and design of bistable MEMS using error estimation and adaptivity.

    SciTech Connect

    Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.

    2006-10-01

    This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.

  19. Improved estimates of coordinate error for molecular replacement

    SciTech Connect

    Oeffner, Robert D.; Bunkóczi, Gábor; McCoy, Airlie J.; Read, Randy J.

    2013-11-01

    A function for estimating the effective root-mean-square deviation in coordinates between two proteins has been developed that depends on both the sequence identity and the size of the protein and is optimized for use with molecular replacement in Phaser. A top peak translation-function Z-score of over 8 is found to be a reliable metric of when molecular replacement has succeeded. The estimate of the root-mean-square deviation (r.m.s.d.) in coordinates between the model and the target is an essential parameter for calibrating likelihood functions for molecular replacement (MR). Good estimates of the r.m.s.d. lead to good estimates of the variance term in the likelihood functions, which increases signal to noise and hence success rates in the MR search. Phaser has hitherto used an estimate of the r.m.s.d. that only depends on the sequence identity between the model and target and which was not optimized for the MR likelihood functions. Variance-refinement functionality was added to Phaser to enable determination of the effective r.m.s.d. that optimized the log-likelihood gain (LLG) for a correct MR solution. Variance refinement was subsequently performed on a database of over 21 000 MR problems that sampled a range of sequence identities, protein sizes and protein fold classes. Success was monitored using the translation-function Z-score (TFZ), where a TFZ of 8 or over for the top peak was found to be a reliable indicator that MR had succeeded for these cases with one molecule in the asymmetric unit. Good estimates of the r.m.s.d. are correlated with the sequence identity and the protein size. A new estimate of the r.m.s.d. that uses these two parameters in a function optimized to fit the mean of the refined variance is implemented in Phaser and improves MR outcomes. Perturbing the initial estimate of the r.m.s.d. from the mean of the distribution in steps of standard deviations of the distribution further increases MR success rates.

  20. Auxiliary results for "Nonparametric kernel estimation of the probability density function of regression errors using estimated residuals"

    E-print Network

    Samb, Rawane

    2012-01-01

    This manuscript is a supplemental document providing the omitted material for our paper entitled "Nonparametric kernel estimation of the probability density function of regression errors using estimated residuals" [arXiv:1010.0439]. The paper is submitted to Journal of Nonparametric Statistics.

  1. Procedures for dealing with certain types of noise and systematic errors common to many Hadamard transform optical systems

    NASA Technical Reports Server (NTRS)

    Harwit, M.

    1977-01-01

    Sources of noise and error-correcting procedures characteristic of Hadamard transform optical systems were investigated. Reduction of spectral noise due to noise spikes in the data, the effect of random errors, the relative performance of Fourier and Hadamard transform spectrometers operated under identical detector-noise-limited conditions, and systematic means for dealing with mask defects are among the topics discussed. The distortion in Hadamard transform optical instruments caused by moving masks, incorrect mask alignment, missing measurements, and diffraction is analyzed, and techniques for reducing or eliminating this distortion are described.

  2. Error estimations and their biases in Monte Carlo eigenvalue calculations

    SciTech Connect

    Ueki, Taro; Mori, Takamasa; Nakagawa, Masayuki

    1997-01-01

    In the Monte Carlo eigenvalue calculation of neutron transport, the eigenvalue is calculated as the average of multiplication factors from cycles, which are called the cycle k_eff's. Biases in the estimators of the variance and intercycle covariances in Monte Carlo eigenvalue calculations are analyzed. The relations among the real and apparent values of variances and intercycle covariances are derived, where real refers to a true value that is calculated from independently repeated Monte Carlo runs and apparent refers to the expected value of estimates from a single Monte Carlo run. Next, iterative methods based on the foregoing relations are proposed to estimate the standard deviation of the eigenvalue. The methods work well for the cases in which the ratios of the real to apparent values of variances are between 1.4 and 3.1. Even in the case where the foregoing ratio is >5, more than 70% of the standard deviation estimates fall within 40% of the true value.
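
    The flavor of the apparent-versus-real distinction can be seen in a short sketch: the apparent standard error treats the cycle k_eff values as independent, while a corrected estimate adds the intercycle lag covariances. This illustrates the underlying relations only, not the iterative scheme proposed in the paper; the lag truncation is an arbitrary choice.

        import numpy as np

        def keff_standard_error(keff_cycles, max_lag=20):
            """Apparent vs covariance-corrected standard error of the mean k_eff."""
            k = np.asarray(keff_cycles, dtype=float)
            n = k.size
            dev = k - k.mean()
            var0 = np.sum(dev**2) / (n - 1)
            # add twice the summed intercycle covariances (truncated at max_lag)
            cov_sum = sum(np.sum(dev[:-lag] * dev[lag:]) / (n - lag)
                          for lag in range(1, max_lag + 1))
            se_apparent = np.sqrt(var0 / n)
            se_corrected = np.sqrt(max(var0 + 2.0 * cov_sum, 0.0) / n)
            return se_apparent, se_corrected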

  3. EIA Corrects Errors in Its Drilling Activity Estimates Series

    EIA Publications

    1998-01-01

    The Energy Information Administration (EIA) has published monthly and annual estimates of oil and gas drilling activity since 1978. These data are key information for many industry analysts, serving as a leading indicator of trends in the industry and a barometer of general industry status.

  4. Gap filling strategies and error in estimating annual soil respiration

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Soil respiration (Rsoil) is one of the largest CO2 fluxes in the global carbon (C) cycle. Estimation of annual Rsoil requires extrapolation of survey measurements or gap-filling of automated records to produce a complete time series. While many gap-filling methodologies have been employed, there is ...

  5. Errors of level in spinal surgery: an evidence-based systematic review.

    PubMed

    Longo, U G; Loppini, M; Romeo, G; Maffulli, N; Denaro, V

    2012-11-01

    Wrong-level surgery is a unique pitfall in spinal surgery and is part of the wider field of wrong-site surgery. Wrong-site surgery affects both patients and surgeons and has received much media attention. We performed this systematic review to determine the incidence and prevalence of wrong-level procedures in spinal surgery and to identify effective prevention strategies. We retrieved 12 studies reporting the incidence or prevalence of wrong-site surgery and that provided information about prevention strategies. Of these, ten studies were performed on patients undergoing lumbar spine surgery and two on patients undergoing lumbar, thoracic or cervical spine procedures. A higher frequency of wrong-level surgery in lumbar procedures than in cervical procedures was found. Only one study assessed preventative strategies for wrong-site surgery, demonstrating that current site-verification protocols did not prevent about one-third of the cases. The current literature does not provide a definitive estimate of the occurrence of wrong-site spinal surgery, and there is no published evidence to support the effectiveness of site-verification protocols. Further prevention strategies need to be developed to reduce the risk of wrong-site surgery. PMID:23109637

  6. Research on Parameter Estimation Methods for Alpha Stable Noise in a Laser Gyroscope's Random Error.

    PubMed

    Wang, Xueyun; Li, Kui; Gao, Pengyu; Meng, Suxia

    2015-01-01

    Alpha stable noise, determined by four parameters, has been found in the random error of a laser gyroscope. Accurate estimation of the four parameters is the key step in analyzing the properties of alpha stable noise. Three widely used estimation methods (quantile, empirical characteristic function (ECF), and logarithmic moment) are analyzed and compared against Monte Carlo simulation in this paper. The estimation accuracy and the application conditions of all methods, as well as the causes of poor estimation accuracy, are illustrated. Finally, the highest-precision method, ECF, is applied to 27 groups of experimental data to estimate the parameters of alpha stable noise in a laser gyroscope's random error. The cumulative probability density curve of the experimental data is fitted better by an alpha stable distribution than by a Gaussian distribution, which verifies the existence of alpha stable noise in a laser gyroscope's random error. PMID:26230698
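
    As an illustration of the ECF idea, the sketch below estimates the stability index alpha and the scale gamma from the modulus of the empirical characteristic function, using |phi(t)| = exp(-(gamma*|t|)^alpha). The full method in the paper also recovers the skewness and location parameters; the grid of t values here is an arbitrary assumption.

        import numpy as np

        def ecf_alpha_gamma(x, t=np.linspace(0.1, 1.0, 10)):
            """log(-log|phi(t)|) = alpha*log|t| + alpha*log(gamma): a linear fit."""
            phi = np.array([np.abs(np.mean(np.exp(1j * tk * x))) for tk in t])
            y = np.log(-np.log(phi))
            alpha, a_log_gamma = np.polyfit(np.log(t), y, 1)
            return alpha, np.exp(a_log_gamma / alpha)

        # sanity check on Gaussian data: alpha ~ 2, gamma ~ sigma/sqrt(2)
        rng = np.random.default_rng(1)
        print(ecf_alpha_gamma(rng.normal(0.0, 1.0, 50_000)))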

  7. Soil Moisture Background Error Covariance Estimation in a Land-Atmosphere Coupled Model

    NASA Astrophysics Data System (ADS)

    Lin, L. F.; Ebtehaj, M.; Flores, A. N.; Wang, J.; Bras, R. L.

    2014-12-01

    The objective of this study is to estimate the space-time dynamics of the soil moisture background error in a coupled land-atmosphere model, for better understanding of land-atmosphere interactions and soil moisture dynamics through data assimilation. To this end, we conducted forecast experiments for eight calendar years, from 2006 to 2013, using the Weather Research and Forecasting (WRF) model coupled with the Noah land surface model, and estimated the background error statistics based on the National Meteorological Center (NMC) methodology. All the WRF-Noah simulations were initialized with the National Centers for Environmental Prediction (NCEP) FNL operational global analysis dataset. In our study domain, covering the contiguous United States, the results show that the soil moisture background error exhibits strong seasonal and regional patterns, with the highest magnitude occurring during the summer at the top soil layer over most regions of the Great Plains. It is also revealed that the soil moisture background errors are strongly biased in some regions, especially the Southeastern United States, and that the impact of bias on the error magnitude increases from the top to the bottom soil layer. Moreover, we found that the estimated background error is not sensitive to the selection of WRF physics schemes for microphysics, cumulus parameterization, and the land surface model. Overall, this study enhances our understanding of the space-time variability of the soil moisture background error and promises more accurate land-surface state estimates via variational data assimilation.

  8. Estimating effective model parameters for heterogeneous unsaturated flow using error models for bias correction

    NASA Astrophysics Data System (ADS)

    Erdal, D.; Neuweiler, I.; Huisman, J. A.

    2012-06-01

    Estimates of effective parameters for unsaturated flow models are typically based on observations taken on length scales smaller than the modeling scale. This complicates parameter estimation for heterogeneous soil structures. In this paper we attempt to account for soil structure not present in the flow model by using so-called external error models, which correct for bias in the likelihood function of a parameter estimation algorithm. The performance of external error models is investigated using data from three virtual reality experiments and one real-world experiment. All experiments are multistep outflow and inflow experiments in columns packed with two sand types with different structures. First, effective parameters for equivalent homogeneous models for the different columns were estimated using soil moisture measurements taken at a few locations. This resulted in parameters that had low predictive power for the averaged states of the soil moisture if the measurements did not adequately capture a representative elementary volume of the heterogeneous soil column. Second, parameter estimation was performed using error models that attempted to correct for bias introduced by soil structure not taken into account in the first estimation. Three different error models that required different amounts of prior knowledge about the heterogeneous structure were considered. The results showed that the introduction of an error model can help to obtain effective parameters with more predictive power with respect to the average soil water content in the system. This was especially true when the dynamic behavior of the flow process was analyzed.

  9. Improved Margin of Error Estimates for Proportions in Business: An Educational Example

    ERIC Educational Resources Information Center

    Arzumanyan, George; Halcoussis, Dennis; Phillips, G. Michael

    2015-01-01

    This paper presents the Agresti & Coull "Adjusted Wald" method for computing confidence intervals and margins of error for common proportion estimates. The presented method is easily implementable by business students and practitioners and provides more accurate estimates of proportions particularly in extreme samples and small…

  10. Eccentricity Error Correction for Automated Estimation of Polyethylene Wear after Total Hip Arthroplasty

    E-print Network

    Ulidowski, Irek

    Eccentricity Error Correction for Automated Estimation of Polyethylene Wear after Total Hip Arthroplasty ... Abstract: Acetabular wear of total hip replacements can be estimated from radiographs ... is not the projection of the centre of the rim, so its use as a reference point to measure wear can be problematic

  11. A posteriori error estimates for the effective Hamiltonian of dislocation dynamics

    E-print Network

    Monneau, Régis

    A posteriori error estimates for the effective Hamiltonian of dislocation dynamics. S. Cacace, A. ... Hamilton-Jacobi equation modelling dislocation dynamics. For the evolution problem, we prove an a posteriori estimate ... AMS Classification: 35B10, 35B27, 35F20, 35F25, 35Q72, 49L25, 65M06, 65M15. Keywords: dislocation

  12. Measurement Error in Nonparametric Item Response Curve Estimation. Research Report. ETS RR-11-28

    ERIC Educational Resources Information Center

    Guo, Hongwen; Sinharay, Sandip

    2011-01-01

    Nonparametric, or kernel, estimation of item response curve (IRC) is a concern theoretically and operationally. Accuracy of this estimation, often used in item analysis in testing programs, is biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. In this study, we investigate…

  13. Compensation technique for the intrinsic error in ultrasound motion estimation using a speckle tracking method

    E-print Network

    Sato, Toru

    Compensation technique for the intrinsic error in ultrasound motion estimation using a speckle tracking method ... imaging of the heart wall. Speckle tracking has been one of the best motion estimators; however, conventional speckle-tracking methods neglect the effect of out-of-plane motion and deformation. Our proposed

  14. UNIFIED A POSTERIORI ERROR ESTIMATOR FOR FINITE ELEMENT METHODS FOR THE STOKES EQUATIONS

    E-print Network

    Wang, Yanqiu

    Unified A Posteriori Error Estimator for Finite Element Methods for the Stokes Equations. Junping ... error estimators for finite element methods for the Stokes equations. In particular, the authors established ... Keywords: finite element methods, Stokes equations. AMS subject classifications: Primary 65N15, 65N30, 76D07

  15. Non-LTE Equivalent Widths for NII with Error Estimates

    E-print Network

    Ahmed, A

    2015-01-01

    Non-LTE calculations are performed for NII in stellar atmospheric models appropriate to main-sequence B-stars to produce new grids of equivalent widths for the strongest NII lines commonly used for abundance analysis. There is reasonable agreement between our calculations and previous results, although we find weaker non-LTE effects in the strongest optical NII transition. We also present a detailed estimation of the uncertainties in the equivalent widths due to inaccuracies in the atomic data via Monte Carlo simulation and investigate the completeness of our model atom in terms of included energy levels. Uncertainties in the basic NII atomic data limit the accuracy of abundance determinations to approximately ±0.10 dex at the peak of the NII optical spectrum near Teff ~ 24,000 K.

  16. Systematic Review and Harmonization of Life Cycle GHG Emission Estimates for Electricity Generation Technologies (Presentation)

    SciTech Connect

    Heath, G.

    2012-06-01

    This PowerPoint presentation, to be presented at the World Renewable Energy Forum on May 14, 2012, in Denver, CO, discusses systematic review and harmonization of life cycle GHG emission estimates for electricity generation technologies.

  17. Effect of geocoding errors on traffic-related air pollutant exposure and concentration estimates.

    PubMed

    Ganguly, Rajiv; Batterman, Stuart; Isakov, Vlad; Snyder, Michelle; Breen, Michael; Brakefield-Caldwell, Wilma

    2015-09-01

    Exposure to traffic-related air pollutants is highest very near roads, and thus exposure estimates are sensitive to positional errors. This study evaluates positional and PM2.5 concentration errors that result from the use of automated geocoding methods and from linearized approximations of roads in link-based emission inventories. Two automated geocoders (Bing Map and ArcGIS) along with handheld GPS instruments were used to geocode 160 home locations of children enrolled in an air pollution study investigating effects of traffic-related pollutants in Detroit, Michigan. The average and maximum positional errors using the automated geocoders were 35 and 196 m, respectively. Comparing road edge and road centerline, differences in house-to-highway distances averaged 23 m and reached 82 m. These differences were attributable to road curvature, road width and the presence of ramps, factors that should be considered in proximity measures used either directly as an exposure metric or as inputs to dispersion or other models. Effects of positional errors for the 160 homes on PM2.5 concentrations resulting from traffic-related emissions were predicted using a detailed road network and the RLINE dispersion model. Concentration errors averaged only 9%, but maximum errors reached 54% for annual averages and 87% for maximum 24-h averages. Whereas most geocoding errors appear modest in magnitude, 5% to 20% of residences are expected to have positional errors exceeding 100 m. Such errors can substantially alter exposure estimates near roads because of the dramatic spatial gradients of traffic-related pollutant concentrations. To ensure the accuracy of exposure estimates for traffic-related air pollutants, especially near roads, confirmation of geocoordinates is recommended. PMID:25670023
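
    Positional errors like those reported above are simple great-circle distances between geocoded and GPS coordinates; a minimal sketch with hypothetical coordinates follows.

        import math

        def haversine_m(lat1, lon1, lat2, lon2):
            """Great-circle distance in metres between two WGS84 points."""
            r = 6371000.0  # mean Earth radius (m)
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dl = math.radians(lon2 - lon1)
            a = (math.sin((p2 - p1) / 2) ** 2
                 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
            return 2 * r * math.asin(math.sqrt(a))

        # automated geocode vs handheld GPS for one (made-up) home location
        print(f"Positional error: {haversine_m(42.3601, -83.0700, 42.3604, -83.0696):.1f} m")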

  19. An a posteriori error estimator for shape optimization: application to EIT

    NASA Astrophysics Data System (ADS)

    Giacomini, M.; Pantz, O.; Trabelsi, K.

    2015-11-01

    In this paper we account for the numerical error introduced by the Finite Element approximation of the shape gradient to construct a guaranteed shape optimization method. We present a goal-oriented strategy inspired by the complementary energy principle to construct a constant-free, fully-computable a posteriori error estimator and to derive a certified upper bound of the error in the shape gradient. The resulting Adaptive Boundary Variation Algorithm (ABVA) is able to identify a genuine descent direction at each iteration and features a reliable stopping criterion for the optimization loop. Some preliminary numerical results for the inverse identification problem of Electrical Impedance Tomography are presented.

  20. Influence of correlated errors on the estimation of the relaxation time spectrum in dynamic light scattering.

    PubMed

    Maier, D; Marth, M; Honerkamp, J; Weese, J

    1999-07-20

    An important step in analyzing data from dynamic light scattering is estimating the relaxation time spectrum from the correlation time function. This estimation is frequently done by regularization methods. To obtain good results with this step, the statistical errors of the correlation time function must be taken into account [J. Phys. A 6, 1897 (1973)]. So far, error models assuming independent statistical errors have been used in the estimation. We show that results for the relaxation time spectrum are better if correlation between statistical errors is taken into account. There are two possible ways to obtain the error sizes and their correlations. On the one hand, they can be calculated from the correlation time function by use of a model derived by Schätzel. On the other hand, they can be computed directly from the time series of the scattered light. Simulations demonstrate that the best results are obtained with the latter method. This method requires, however, storing the time series of the scattered light during the experiment. Therefore a modified experimental setup is needed. Nevertheless, the simulations also show improvement in the resulting relaxation time spectra if the error model of Schätzel is used. This improvement is confirmed when the analysis is applied to experimental data from a latex with a bimodal sphere size distribution. PMID:18323954
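
    The role of the error covariance can be made explicit with a generic regularized inversion: weight the data misfit by the full covariance (including correlations) rather than by independent variances. The sketch below is a generalized-least-squares illustration of that estimation step, not the authors' exact solver; the identity regularizer is an assumption.

        import numpy as np

        def regularized_spectrum(K, g, cov, lam):
            """Solve min (g - K h)' C^-1 (g - K h) + lam * ||h||^2 for h."""
            W = np.linalg.inv(cov)  # error weighting, including correlations
            lhs = K.T @ W @ K + lam * np.eye(K.shape[1])
            return np.linalg.solve(lhs, K.T @ W @ g)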

  1. Use of an OSSE to Evaluate Background Error Covariances Estimated by the 'NMC Method'

    NASA Technical Reports Server (NTRS)

    Errico, Ronald M.; Prive, Nikki C.; Gu, Wei

    2014-01-01

    The NMC method has proven utility for prescribing approximate background-error covariances required by variational data assimilation systems. Here, untuned NMC method estimates are compared with explicitly determined error covariances produced within an OSSE context by exploiting the availability of the true simulated states. Such a comparison provides insights into what kind of rescaling is required to render the NMC method estimates usable. It is shown that rescaling of variances and directional correlation lengths depends greatly on both pressure and latitude. In particular, some scaling coefficients appropriate in the Tropics are the reciprocal of those in the Extratropics. Also, the degree of dynamic balance is grossly overestimated by the NMC method. These results agree with previous examinations of the NMC method which used ensembles as an alternative for estimating background-error statistics.
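
    In its simplest form the NMC method estimates the background-error covariance from differences between forecast pairs valid at the same time; a sketch follows, with the rescaling left as a single factor even though the comparison above shows it must in practice vary with pressure and latitude. The array names and the 48 h/24 h lead-time pairing are assumptions.

        import numpy as np

        def nmc_background_covariance(f48, f24, scale=1.0):
            """Covariance of (48 h - 24 h) forecast differences, rescaled.

            f48, f24 : (n_times, n_state) forecasts valid at the same times.
            """
            d = np.asarray(f48) - np.asarray(f24)
            d = d - d.mean(axis=0, keepdims=True)
            return scale * (d.T @ d) / (d.shape[0] - 1)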

  2. Error estimates of spaceborne passive microwave retrievals of cloud liquid water over land

    SciTech Connect

    Greenwald, T.J.; Combs, C.L.; Jones, A.S.; Randel, D.L.; Vonder Haar, T.H.

    1999-03-01

    Cloud liquid water path (LWP) retrievals from the Special Sensor Microwave/Imager (SSM/I) and surface microwave radiometers are compared over land to assess the errors in selected satellite methods. These techniques require surface emissivity composites created from SSM/I and infrared (IR) data. Two different physical methods are tested: a single-channel (SC) approach and a normalized polarization difference (NPD) approach. Comparisons were made at four sites in Oklahoma and Kansas over a 1-month period. The 85.5-GHz NPD method was the most accurate and robust under most conditions. An error analysis shows that the method's random errors are dominated by uncertainties in the surface emissivity and instrument noise. Since the SC method is more prone to systematic errors (such as surface emissivity errors caused by rain events), it initially compared poorly to the ground observations. After filtering for rain events, the comparisons improved. Overall, the root mean square (rms) errors ranged from 0.12 to 0.14 kg m^-2, suggesting these methods can provide, at best, three categories of cloud LWP. It is anticipated that the techniques and strategies developed in this study, and prior related studies, to analyze passive microwave data will be requisite for maximizing the information content of future instruments.

  3. Towards a systematic assessment of errors in diffusion Monte Carlo calculations of semiconductors: Case study of zinc selenide and zinc oxide.

    PubMed

    Yu, Jaehyung; Wagner, Lucas K; Ertekin, Elif

    2015-12-14

    The fixed node diffusion Monte Carlo (DMC) method has attracted interest in recent years as a way to calculate properties of solid materials with high accuracy. However, the framework for the calculation of properties such as total energies, atomization energies, and excited state energies is not yet fully established. Several outstanding questions remain as to the effect of pseudopotentials, the magnitude of the fixed node error, and the size of supercell finite size effects. Here, we consider in detail the semiconductors ZnSe and ZnO and carry out systematic studies to assess the magnitude of the energy differences arising from controlled and uncontrolled approximations in DMC. The former include time step errors and supercell finite size effects for ground and optically excited states, and the latter include pseudopotentials, the pseudopotential localization approximation, and the fixed node approximation. We find that for these compounds, the errors can be controlled to good precision using modern computational resources and that quantum Monte Carlo calculations using Dirac-Fock pseudopotentials can offer good estimates of both cohesive energy and the gap of these systems. We do however observe differences in calculated optical gaps that arise when different pseudopotentials are used. PMID:26671396

  4. Estimation of Separation Buffers for Wind-Prediction Error in an Airborne Separation Assistance System

    NASA Technical Reports Server (NTRS)

    Consiglio, Maria C.; Hoadley, Sherwood T.; Allen, B. Danette

    2009-01-01

    Wind prediction errors are known to affect the performance of automated air traffic management tools that rely on aircraft trajectory predictions. In particular, automated separation assurance tools, planned as part of the NextGen concept of operations, must be designed to account and compensate for the impact of wind prediction errors and other system uncertainties. In this paper we describe a high-fidelity batch simulation study designed to estimate the separation distance required to compensate for the effects of wind-prediction errors under increasing traffic density in an airborne separation assistance system. These experimental runs are part of the Safety Performance of Airborne Separation experiment suite that examines the safety implications of prediction errors and system uncertainties on airborne separation assurance systems. In this experiment, wind-prediction errors were varied between zero and forty knots while traffic density was increased to several times current traffic levels. In order to accurately measure the full unmitigated impact of wind-prediction errors, no uncertainty buffers were added to the separation minima. The goal of the study was to measure the impact of wind-prediction errors in order to estimate the additional separation buffers necessary to preserve separation and to provide a baseline for future analyses. Buffer estimations from this study will be used and verified in upcoming safety evaluation experiments under similar simulation conditions. Results suggest that the strategic airborne separation functions exercised in this experiment can sustain wind-prediction errors up to 40 kts at current-day air traffic density with no additional separation distance buffer, and at eight times the current day with no more than a 60% increase in separation distance buffer.

  5. Assumption-free estimation of the genetic contribution to refractive error across childhood

    PubMed Central

    St Pourcain, Beate; McMahon, George; Timpson, Nicholas J.; Evans, David M.; Williams, Cathy

    2015-01-01

    Purpose Studies in relatives have generally yielded high heritability estimates for refractive error: twins 75–90%, families 15–70%. However, because related individuals often share a common environment, these estimates are inflated (via misallocation of unique/common environment variance). We calculated a lower-bound heritability estimate for refractive error free from such bias. Methods Between the ages 7 and 15 years, participants in the Avon Longitudinal Study of Parents and Children (ALSPAC) underwent non-cycloplegic autorefraction at regular research clinics. At each age, an estimate of the variance in refractive error explained by single nucleotide polymorphism (SNP) genetic variants was calculated using genome-wide complex trait analysis (GCTA) using high-density genome-wide SNP genotype information (minimum N at each age=3,404). Results The variance in refractive error explained by the SNPs (“SNP heritability”) was stable over childhood: Across age 7–15 years, SNP heritability averaged 0.28 (SE=0.08, p<0.001). The genetic correlation for refractive error between visits varied from 0.77 to 1.00 (all p<0.001) demonstrating that a common set of SNPs was responsible for the genetic contribution to refractive error across this period of childhood. Simulations suggested lack of cycloplegia during autorefraction led to a small underestimation of SNP heritability (adjusted SNP heritability=0.35; SE=0.09). To put these results in context, the variance in refractive error explained (or predicted) by the time participants spent outdoors was <0.005 and by the time spent reading was <0.01, based on a parental questionnaire completed when the child was aged 8–9 years old. Conclusions Genetic variation captured by common SNPs explained approximately 35% of the variation in refractive error between unrelated subjects. This value sets an upper limit for predicting refractive error using existing SNP genotyping arrays, although higher-density genotyping in larger samples and inclusion of interaction effects is expected to raise this figure toward twin- and family-based heritability estimates. The same SNPs influenced refractive error across much of childhood. Notwithstanding the strong evidence of association between time outdoors and myopia, and time reading and myopia, less than 1% of the variance in myopia at age 15 was explained by crude measures of these two risk factors, indicating that their effects may be limited, at least when averaged over the whole population. PMID:26019481

  6. Analysis of systematic errors in lateral shearing interferometry for EUV optical testing

    SciTech Connect

    Miyakawa, Ryan; Naulleau, Patrick; Goldberg, Kenneth A.

    2009-02-24

    Lateral shearing interferometry (LSI) provides a simple means for characterizing the aberrations in optical systems at EUV wavelengths. In LSI, the test wavefront is incident on a low-frequency grating which causes the resulting diffracted orders to interfere on the CCD. Due to its simple experimental setup and high photon efficiency, LSI is an attractive alternative to point diffraction interferometry and other methods that require spatially filtering the wavefront through small pinholes which notoriously suffer from low contrast fringes and improper alignment. In order to demonstrate that LSI can be accurate and robust enough to meet industry standards, analytic models are presented to study the effects of unwanted grating and detector tilt on the system aberrations, and a method for identifying and correcting for these errors in alignment is proposed. The models are subsequently verified by numerical simulation. Finally, an analysis is performed of how errors in the identification and correction of grating and detector misalignment propagate to errors in fringe analysis.

  7. Error in Estimation of Rate and Time Inferred from the Early Amniote Fossil Record and Avian Molecular Clocks

    E-print Network

    Hadly, Elizabeth

    Error in Estimation of Rate and Time Inferred from the Early Amniote Fossil Record and Avian Molecular Clocks ... estimates and fossil data. Errors in molecular rate estimation typically are unaccounted for, and no attempts have been made to quantify this uncertainty comprehensively. Here, focus is primarily on fossil

  8. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    NASA Astrophysics Data System (ADS)

    Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. C.; Alden, C.; White, J. W. C.

    2014-10-01

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of C in the atmosphere, ocean, and land; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate error and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ error of the atmospheric growth rate has decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s, leading to a ~20% reduction in the overall uncertainty of net global C uptake by the biosphere. While fossil fuel emissions have increased by a factor of 4 over the last 5 decades, 2σ errors in fossil fuel emissions due to national reporting errors and differences in energy reporting practices have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s. At the same time land use emissions have declined slightly over the last 5 decades, but their relative errors remain high. Notably, errors associated with fossil fuel emissions have come to dominate uncertainty in the global C budget and are now comparable to the total emissions from land use, thus efforts to reduce errors in fossil fuel emissions are necessary. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that C uptake has increased and 97% confident that C uptake by the terrestrial biosphere has increased over the last 5 decades. Although the persistence of future C sinks remains unknown and some ecosystem services may be compromised by this continued C uptake (e.g. ocean acidification), it is clear that arguably the greatest ecosystem service currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere.
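
    The impact of temporally correlated random error on an averaged quantity can be illustrated with a short AR(1) simulation: correlated annual errors barely average out over a decade, whereas independent errors shrink roughly as 1/sqrt(10). The error size and autocorrelation below are hypothetical, not values from the budget.

        import numpy as np

        def decadal_mean_sigma(sigma_annual, rho, n_years=10, n_sim=20_000, seed=0):
            """Std. dev. of the decadal-mean error for AR(1) annual errors."""
            rng = np.random.default_rng(seed)
            e = np.zeros((n_sim, n_years))
            e[:, 0] = rng.normal(0.0, sigma_annual, n_sim)
            innov_sd = sigma_annual * np.sqrt(1.0 - rho**2)
            for t in range(1, n_years):
                e[:, t] = rho * e[:, t - 1] + rng.normal(0.0, innov_sd, n_sim)
            return e.mean(axis=1).std()

        print(decadal_mean_sigma(0.5, rho=0.95))  # correlated errors persist
        print(decadal_mean_sigma(0.5, rho=0.0))   # independent errors average out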

  9. Estimation of Smoothing Error in SBUV Profile and Total Ozone Retrieval

    NASA Technical Reports Server (NTRS)

    Kramarova, N. A.; Bhartia, P. K.; Frith, S. M.; Fisher, B. L.; McPeters, R. D.; Taylor, S.; Labow, G. J.

    2011-01-01

    Data from the Nimbus-4 and Nimbus-7 Solar Backscatter Ultraviolet (SBUV) instruments and seven of the NOAA series of SBUV/2 instruments, spanning 41 years, are being reprocessed using the V8.6 algorithm. The data are scheduled to be released by the end of August 2011. An important focus of the new algorithm is to estimate various sources of error in the SBUV profile and total ozone retrievals. We discuss here the smoothing errors, which describe the components of the profile variability that the SBUV observing system cannot measure. The SBUV(/2) instruments have a vertical resolution of 5 km in the middle stratosphere, decreasing to 8 to 10 km below the ozone peak and above 0.5 hPa. To estimate the smoothing effect of the SBUV algorithm, the actual statistics of the fine vertical structure of ozone profiles must be known. The covariance matrix of an ensemble of measured ozone profiles with high vertical resolution would be a formal representation of the actual ozone variability. We merged the MLS (version 3) and sonde ozone profiles to calculate the covariance matrix, which, in the general case of single-profile retrieval, may be a function of latitude and month. Using the averaging kernels of the SBUV(/2) measurements and the calculated total covariance matrix, one can estimate the smoothing errors for the SBUV ozone profiles. A method to estimate the smoothing effect of the SBUV algorithm is described, and the covariance matrices and averaging kernels are provided along with the SBUV(/2) ozone profiles. The magnitude of the smoothing error varies with altitude, latitude, season and solar zenith angle. The analysis of the smoothing errors, based on the SBUV(/2) monthly zonal mean time series, shows that the largest smoothing errors were detected in the troposphere, where they may be as large as 15-20%; they decrease rapidly with altitude. In the stratosphere above 40 hPa the smoothing errors are less than 5%, and between 10 and 1 hPa they are on the order of 1%. We validate our estimated smoothing errors by comparing the SBUV ozone profiles with those from other ozone profiling sensors.
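
    The smoothing-error formalism referenced here is commonly written, following Rodgers, as S_s = (A - I) S_true (A - I)^T, where A is the averaging-kernel matrix and S_true the covariance of the true profile variability (here derived from merged MLS and sonde data). A minimal sketch with synthetic inputs (the kernel and covariance below are illustrative, not the SBUV values):

      import numpy as np

      def smoothing_error_covariance(A, S_true):
          # Rodgers-style smoothing error: S_s = (A - I) S_true (A - I)^T,
          # with A the averaging-kernel matrix and S_true the covariance of
          # the true (high-resolution) profile variability.
          K = A - np.eye(A.shape[0])
          return K @ S_true @ K.T

      # Toy example: 10-layer profile with exponentially correlated variability
      n = 10
      z = np.arange(n, dtype=float)
      S_true = 0.04 * np.exp(-np.abs(z[:, None] - z[None, :]) / 2.0)
      A = 0.7 * np.eye(n) + 0.15 * (np.eye(n, k=1) + np.eye(n, k=-1))  # broad kernels

      S_s = smoothing_error_covariance(A, S_true)
      print("1-sigma smoothing error per layer:", np.sqrt(np.diag(S_s)).round(3))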

  10. Error estimates of triangular finite elements under a weak angle condition

    NASA Astrophysics Data System (ADS)

    Mao, Shipeng; Shi, Zhongci

    2009-08-01

    In this note, by analyzing the interpolation operator of Girault and Raviart given in [V. Girault, P.A. Raviart, Finite element methods for Navier-Stokes equations, Theory and algorithms, in: Springer Series in Computational Mathematics, Springer-Verlag, Berlin, 1986] over triangular meshes, we prove optimal interpolation error estimates for Lagrange triangular finite elements of arbitrary order under the maximal angle condition in a unified and simple way. The key estimate is only an application of the Bramble-Hilbert lemma.
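
    For orientation, optimal interpolation error estimates of this type are generically of the following form (stated schematically; the precise norms, hypotheses, and constants are those of the note itself):

      % Optimal interpolation estimate for Lagrange elements of order k on a
      % triangle K of diameter h_K, with C independent of the mesh under the
      % (maximal) angle condition assumed in the paper.
      \[
        \| u - I_h u \|_{L^2(K)} + h_K \, | u - I_h u |_{H^1(K)}
          \le C \, h_K^{k+1} \, | u |_{H^{k+1}(K)} .
      \]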

  11. Working with the Brain, Not against It: Correction of Systematic Errors in Subtraction.

    ERIC Educational Resources Information Center

    Baxter, Paul; Dole, Shelly

    1990-01-01

    An experimental study was conducted of 2 different approaches to the correction of consistent subtraction errors in 6 students aged 12-13. Tentative findings demonstrate the superiority of the old way/new way method compared to use of Multibase Arithmetic Blocks and place value charts. (JDD)

  12. Estimating the anomalous diffusion exponent for single particle tracking data with measurement errors - An alternative approach

    PubMed Central

    Burnecki, Krzysztof; Kepten, Eldad; Garini, Yuval; Sikora, Grzegorz; Weron, Aleksander

    2015-01-01

    Accurately characterizing the anomalous diffusion of a tracer particle has become a central issue in biophysics. However, measurement errors raise difficulty in the characterization of single trajectories, which is usually performed through the time-averaged mean square displacement (TAMSD). In this paper, we study a fractionally integrated moving average (FIMA) process as an appropriate model for anomalous diffusion data with measurement errors. We compare FIMA and traditional TAMSD estimators for the anomalous diffusion exponent. The ability of the FIMA framework to characterize dynamics in a wide range of anomalous exponents and noise levels is discussed through the simulation of a toy model (fractional Brownian motion disturbed by Gaussian white noise). Comparison to the TAMSD technique shows that FIMA estimation is superior in many scenarios. This is expected to enable new measurement regimes for single particle tracking (SPT) experiments even in the presence of high measurement errors. PMID:26065707
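
    A minimal sketch (Python; illustrative parameters throughout) of the plain TAMSD estimator that the paper compares against, showing how additive measurement noise biases the fitted anomalous exponent on a synthetic fractional Brownian motion trajectory:

      import numpy as np

      rng = np.random.default_rng(1)

      def fbm(n, hurst):
          # Fractional Brownian motion on t = 1..n via Cholesky of its covariance.
          t = np.arange(1, n + 1, dtype=float)
          cov = 0.5 * (t[:, None]**(2 * hurst) + t[None, :]**(2 * hurst)
                       - np.abs(t[:, None] - t[None, :])**(2 * hurst))
          L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))  # jitter for stability
          return L @ rng.standard_normal(n)

      def tamsd(x, lags):
          # Time-averaged mean square displacement at the given lags.
          return np.array([np.mean((x[lag:] - x[:-lag])**2) for lag in lags])

      alpha_true = 0.6                       # anomalous exponent (= 2 * Hurst)
      x = fbm(512, hurst=alpha_true / 2)
      x_noisy = x + rng.normal(0.0, 0.5, size=x.size)   # additive measurement error

      lags = np.arange(1, 33)
      for label, traj in (("clean", x), ("noisy", x_noisy)):
          slope = np.polyfit(np.log(lags), np.log(tamsd(traj, lags)), 1)[0]
          print(f"{label}: fitted alpha = {slope:.2f} (true value {alpha_true})")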

  13. Nuclear power plant fault-diagnosis using neural networks with error estimation

    SciTech Connect

    Kim, K.; Bartlett, E.B.

    1994-12-31

    The assurance of the diagnosis obtained from a nuclear power plant (NPP) fault-diagnostic advisor based on artificial neural networks (ANNs) is essential for the practical implementation of the advisor for fault detection and identification. The objectives of this study are to develop an error estimation technique (EET) for diagnosis validation and to apply it to the NPP fault-diagnostic advisor. Diagnosis validation is realized by estimating error bounds on the advisor's diagnoses. The 22 transients obtained from the Duane Arnold Energy Center (DAEC) training simulator are used for this research. The results show that the NPP fault-diagnostic advisor is effective at producing proper diagnoses on which errors are assessed for validation and verification purposes.

  14. Adaptive Green-Kubo estimates of transport coefficients from molecular dynamics based on robust error analysis

    NASA Astrophysics Data System (ADS)

    Jones, Reese E.; Mandadapu, Kranthi K.

    2012-04-01

    We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of (a) statistical stationarity of the relevant process and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semiconductors near their Debye temperatures, where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)], 10.1103/PhysRev.182.280 and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
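
    A minimal sketch of the Green-Kubo construction itself (Python; a synthetic Ornstein-Uhlenbeck "flux" stands in for an MD flux time series, and the replica-spread error bar below is a simple stand-in for the paper's Zwanzig-Ailawadi-based bounds):

      import numpy as np

      rng = np.random.default_rng(2)

      def ou_flux(n_steps, dt, tau):
          # Synthetic 'heat flux': an Ornstein-Uhlenbeck process with unit
          # variance and correlation time tau, standing in for a real MD flux.
          x = np.zeros(n_steps)
          a = np.exp(-dt / tau)
          s = np.sqrt(1.0 - a * a)
          for i in range(1, n_steps):
              x[i] = a * x[i - 1] + s * rng.standard_normal()
          return x

      def running_gk_integral(flux, dt, max_lag):
          # Flux autocorrelation and its running time integral (Green-Kubo).
          n = flux.size
          acf = np.array([np.mean(flux[: n - k] * flux[k:]) for k in range(max_lag)])
          return np.cumsum(acf) * dt

      dt, tau = 0.01, 0.5
      replicas = [ou_flux(50_000, dt, tau) for _ in range(8)]
      integrals = np.array([running_gk_integral(f, dt, 500) for f in replicas])

      mean = integrals.mean(axis=0)
      err = integrals.std(axis=0) / np.sqrt(len(replicas))  # replica-spread error
      k = 300  # lag index well past the ACF decay (time 3.0 = 6 * tau here)
      print(f"GK integral = {mean[k]:.3f} +/- {err[k]:.3f} (exact for OU: {tau})")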

  16. Estimation of flood warning runoff thresholds in ungauged basins with asymmetric error functions

    NASA Astrophysics Data System (ADS)

    Toth, E.

    2015-06-01

    In many real-world flood forecasting systems, the runoff thresholds for activating warnings or mitigation measures correspond to the flow peaks with a given return period (often the 2-year flood, which may be associated with the bankfull discharge). At locations where historical streamflow records are absent or very limited, the threshold can be estimated with regionally derived empirical relationships between catchment descriptors and the desired flood quantile. Whatever the functional form, such models are generally parameterised by minimising the mean square error, which assigns equal importance to overprediction and underprediction errors. Considering that the consequences of an overestimated warning threshold (leading to the risk of missed alarms) generally have a much lower level of acceptance than those of an underestimated threshold (leading to the issuance of false alarms), the present work proposes to parameterise the regression model through an asymmetric error function that penalises overpredictions more heavily. The estimates by models (feedforward neural networks) with increasing degrees of asymmetry are compared with those of a traditional, symmetrically trained network in a rigorous cross-validation experiment on a database of catchments covering Italy. The analysis shows that the use of the asymmetric error function can substantially reduce the number and extent of overestimation errors compared to the use of traditional square errors. Such a reduction comes, of course, at the expense of increased underestimation errors, but the overall accuracy remains acceptable, and the results illustrate the potential value of choosing an asymmetric error function when the consequences of missed alarms are more severe than those of false alarms.
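
    A minimal sketch (Python; synthetic data and a linear model rather than the paper's feedforward networks) of parameterising a regression with an asymmetric squared loss that penalises overpredictions more heavily:

      import numpy as np

      rng = np.random.default_rng(3)

      def fit_linear_asymmetric(X, y, over_weight, lr=0.05, n_iter=5000):
          # Gradient descent for a linear model under an asymmetric squared
          # loss: residuals r = y - X @ beta with r < 0 (an overpredicted
          # warning threshold) are weighted by over_weight > 1.
          beta = np.zeros(X.shape[1])
          for _ in range(n_iter):
              r = y - X @ beta
              w = np.where(r < 0.0, over_weight, 1.0)
              beta += lr * (X.T @ (w * r)) / len(y)  # -gradient of mean 0.5*w*r^2
          return beta

      # Toy 'regionalisation': one catchment descriptor plus an intercept
      X = np.column_stack([np.ones(200), rng.uniform(0.0, 1.0, 200)])
      y = 2.0 + 3.0 * X[:, 1] + rng.normal(0.0, 0.5, 200)  # synthetic flood quantiles

      for w in (1.0, 4.0):
          beta = fit_linear_asymmetric(X, y, over_weight=w)
          print(f"over_weight={w}: overprediction rate = {np.mean(X @ beta > y):.2f}")

    Increasing the overprediction weight shifts the fitted thresholds downward, trading a higher rate of (more acceptable) underestimation for fewer missed alarms.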

  17. Standard Error Estimation of 3PL IRT True Score Equating with an MCMC Method

    ERIC Educational Resources Information Center

    Liu, Yuming; Schulz, E. Matthew; Yu, Lei

    2008-01-01

    A Markov chain Monte Carlo (MCMC) method and a bootstrap method were compared in the estimation of standard errors of item response theory (IRT) true score equating. Three test form relationships were examined: parallel, tau-equivalent, and congeneric. Data were simulated based on Reading Comprehension and Vocabulary tests of the Iowa Tests of…

  18. Bias and error in estimates of equilibrium free-energy differences from nonequilibrium measurements

    E-print Network

    Gore, Jeff

    The Jarzynski equality allows one to compute the equilibrium free-energy difference ΔF between two states from the probability distribution of the nonequilibrium work W performed in switching between them, via ⟨exp(-W/kT)⟩ = exp(-ΔF/kT) [Jarzynski, C. (1997) Phys. Rev. Lett. 78, 2690-2693]. The Jarzynski equality provides a powerful free-energy …
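
    A minimal numerical illustration (Python; Gaussian work values chosen to satisfy the equality exactly, all parameters hypothetical) of the finite-sample bias in the Jarzynski free-energy estimator that motivates such bias-and-error analyses:

      import numpy as np

      rng = np.random.default_rng(4)
      kT = 1.0

      # Gaussian work distribution consistent with the Jarzynski equality:
      # for Gaussian W, Delta_F = <W> - var(W) / (2 kT), so the mean work
      # exceeds Delta_F by the mean dissipated work.
      dF_true, sigma_W = 0.0, 2.0
      mu_W = dF_true + sigma_W**2 / (2.0 * kT)

      def jarzynski_estimate(n_pulls):
          # Delta_F estimate from a finite sample of work values.
          W = rng.normal(mu_W, sigma_W, size=n_pulls)
          return -kT * np.log(np.mean(np.exp(-W / kT)))

      for n in (10, 100, 10_000):
          est = np.mean([jarzynski_estimate(n) for _ in range(2000)])
          print(f"N = {n:6d}: mean estimate = {est:+.3f} (true Delta_F = {dF_true})")

    The exponential average is dominated by rare low-work trajectories, so small samples systematically overestimate ΔF; the bias shrinks only slowly as the number of pulls grows.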

  19. An agent-based modelling approach to estimate error in gyrodactylid population growth

    E-print Network

    Wilensky, Uri

    Keywords: Gyrodactylus salaris; demographic stochasticity; gyrodactylid performance; infection dynamics; population growth. … gave unrealistically low population growth rates when used as parameters in the model …

  20. Error estimations for source inversions in seismology and geodesy

    E-print Network

    Duputel, Zacharie

    … a widely used tool. It comes in very diverse flavors depending on the nature of the data (e.g. seismological …

  1. Speech enhancement using a minimum mean-square error short-time spectral modulation magnitude estimator

    E-print Network

    In this paper we investigate the enhancement of speech by applying MMSE short-time spectral magnitude estimation … values that maximise the subjective quality of stimuli enhanced using the MMSE modulation magnitude …

  2. Errors in Estimating Raindrop Size Distribution Parameters Employing Disdrometer and Simulated Raindrop Spectra

    E-print Network

    Zhang, Guifu

    … evaluated parameters. This study also shows that the discrepancy between the radar and disdrometer … where Λ (mm-1) is the slope parameter, and D (mm) is the equivalent volume diameter. Disdrometers are usually used to measure …

  3. Fast and Accurate Packet Delivery Estimation based on DSSS Chip Errors

    E-print Network

    Lenders, Vincent

    …, or (iii) a combination of both. RSSI-based models tend to be fast, but researchers have found … accurate representation of the channel conditions and have been shown to be more accurate than the RSSI-based …

  4. Error Estimation Techniques to Refine Overlapping Aerial Image Mosaic Processes via Detected Parameters

    ERIC Educational Resources Information Center

    Bond, William Glenn

    2012-01-01

    In this paper, I propose to demonstrate a means of error estimation preprocessing in the assembly of overlapping aerial image mosaics. The mosaic program automatically assembles several hundred aerial images from a data set by aligning them, via image registration using a pattern search method, onto a GIS grid. The method presented first locates…

  5. Estimating MEMS Gyroscope G-Sensitivity Errors in Foot Mounted Navigation

    E-print Network

    Calgary, University of

    G-sensitivity errors are often overlooked in foot mounted navigation systems. Accelerations of foot mounted IMUs can reach 5 g … Keywords: g-sensitivity; linear acceleration effect on gyros; pedestrian navigation; foot mounted sensors; GPS denied navigation.

  6. A Posteriori Error Estimate for Front-Tracking for Nonlinear Systems of Conservation Laws

    E-print Network

    … front-tracking approximate solutions to hyperbolic systems of nonlinear conservation laws. Extending the L1-stability … front-tracking approximations for nonlinear conservation laws, u_t + f(u)_x = 0 …

  7. An error estimate for a finite volume scheme for a diffusion convection problem

    E-print Network

    Herbin, Raphaèle

    … here a finite volume scheme for a diffusion-convection equation on an open bounded set Ω of ℝ² … On this triangular mesh, a four point finite volume scheme will be defined in Section 2 for the numerical solution …

  8. Discrete Sobolev Inequalities and Lp Error Estimates for Approximate Finite Volume Solutions of Convection Diffusion Equations

    E-print Network

    Gallouët, Thierry

    … diffusion equations by finite volume schemes. The aim of this work is to study finite volume schemes … has only recently been undertaken. Error estimates were first obtained … we present the continuous problem. Section 3 is devoted to the finite volume scheme on admissible …

  9. Error estimation and anisotropic mesh refinement for 3d laminar aerodynamic flow simulations

    E-print Network

    Hartmann, Ralf

    … three-dimensional laminar aerodynamic flow simulations. The optimal order symmetric interior penalty discontinuous Galerkin …

  10. A posteriori pointwise error estimation for compressible fluid flows using adjoint parameters and Lagrange remainder

    E-print Network

    At present, Richardson extrapolation [1, 2, 3] is the most popular method for the estimation of discretization error in CFD. Unfortunately, a correct use of Richardson …
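
    For reference, a minimal sketch (Python) of the classic two-grid Richardson error estimate for a method of known order p, applied here to a toy finite-difference derivative:

      import numpy as np

      def richardson_error_estimate(f_h, f_h2, order):
          # Classic Richardson extrapolation between solutions on grid
          # spacings h and h/2 for a method of the given order: returns the
          # estimated discretization error of f_h2 and the extrapolated value.
          err_h2 = (f_h2 - f_h) / (2**order - 1)
          return err_h2, f_h2 + err_h2

      # Toy check: second-order central difference of sin'(x) at x = 1
      f = np.sin
      def deriv(h):
          return (f(1 + h) - f(1 - h)) / (2 * h)

      f_h, f_h2 = deriv(0.1), deriv(0.05)
      est_err, extrap = richardson_error_estimate(f_h, f_h2, order=2)
      print("estimated error:", est_err, " true error:", np.cos(1) - f_h2)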

  11. A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.

    2011-01-01

    Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…

  12. MESH QUALITY CONTROL FOR INDUSTRIAL NAVIER-STOKES PROBLEMS VIA A POSTERIORI ERROR ESTIMATES

    E-print Network

    Bank, Randolph E.

    … criterion. Numerical simulations of viscous flows of industrial interest around aerodynamic shapes …

  13. Estimating functionals of the error distribution in parametric and nonparametric regression

    E-print Network

    Wefelmeyer, Wolfgang

    … for the regression function. In the nonparametric regression model, the function r is unspecified (up to smoothness) … the regression model is just the full nonparametric bivariate model, with no structural constraint …

  14. Interval Estimation for True Raw and Scale Scores under the Binomial Error Model

    ERIC Educational Resources Information Center

    Lee, Won-Chan; Brennan, Robert L.; Kolen, Michael J.

    2006-01-01

    Assuming errors of measurement are distributed binomially, this article reviews various procedures for constructing an interval for an individual's true number-correct score; presents two general interval estimation procedures for an individual's true scale score (i.e., normal approximation and endpoints conversion methods); compares various…

  15. Mapping the Origins of Time: Scalar Errors in Infant Time Estimation

    ERIC Educational Resources Information Center

    Addyman, Caspar; Rocha, Sinead; Mareschal, Denis

    2014-01-01

    Time is central to any understanding of the world. In adults, estimation errors grow linearly with the length of the interval, much faster than would be expected of a clock-like mechanism. Here we present the first direct demonstration that this is also true in human infants. Using an eye-tracking paradigm, we examined 4-, 6-, 10-, and…

  16. ERROR ESTIMATES FOR THE FINITE VOLUME ELEMENT METHOD FOR PARABOLIC EQUATIONS IN CONVEX POLYGONAL DOMAINS

    E-print Network

    Lazarov, Raytcho

    … piecewise linear finite volume element method for parabolic equations in a convex polygonal domain … as in the corresponding finite element method, and almost optimal away from the corners. We also briefly consider …

  17. Error estimation of bathymetric grid models derived from historic and contemporary datasets

    E-print Network

    New Hampshire, University of

    … and rapidly collecting dense bathymetric datasets. Sextants were replaced by radio navigation, then transit … to digitized contours; the test dataset shows examples of all of these types. From this database, we assign …

  18. Pollution error in the h-version of the finite-element method and the local quality of a-posteriori error estimators 

    E-print Network

    Mathur, Anuj

    1994-01-01

    In this work we study the pollution-error in the h-version of the finite element method and its effect on the local quality of a-posteriori error estimators. We show that the pollution-effect in an interior subdomain depends on the relationship...

  19. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    NASA Astrophysics Data System (ADS)

    Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; Alden, C.; White, J. W. C.

    2015-04-01

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high, and thus their contribution to uncertainty in global C uptake is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net global C uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere, although there are certain environmental costs associated with this service, such as the acidification of ocean waters.

  20. Galaxy Cluster Shapes and Systematic Errors in the Hubble Constant as Determined by the Sunyaev-Zel'dovich Effect

    NASA Technical Reports Server (NTRS)

    Sulkanen, Martin E.; Joy, M. K.; Patel, S. K.

    1998-01-01

    Imaging of the Sunyaev-Zel'dovich (S-Z) effect in galaxy clusters combined with the cluster plasma x-ray diagnostics can measure the cosmic distance scale to high accuracy. However, projecting the inverse-Compton scattering and x-ray emission along the cluster line-of-sight will introduce systematic errors in the Hubble constant, H_0, because the true shape of the cluster is not known. This effect remains present for clusters that are otherwise chosen to avoid complications for the S-Z and x-ray analysis, such as plasma temperature variations, cluster substructure, or cluster dynamical evolution. In this paper we present a study of the systematic errors in the value of H_0, as determined by the x-ray and S-Z properties of a theoretical sample of triaxial isothermal 'beta-model' clusters, caused by projection effects and observer orientation relative to the model clusters' principal axes. The model clusters are not generated as ellipsoids of rotation, but have three independent 'core radii', as well as a random orientation to the plane of the sky.

  1. Stacked Weak Lensing Mass Calibration: Estimators, Systematics, and Impact on Cosmological Parameter Constraints

    SciTech Connect

    Rozo, Eduardo; Wu, Hao-Yi; Schmidt, Fabian

    2011-11-04

    When extracting the weak lensing shear signal, one may employ either locally normalized or globally normalized shear estimators. The former is the standard approach when estimating cluster masses, while the latter is the more common method among peak finding efforts. While both approaches have identical signal-to-noise in the weak lensing limit, it is possible that higher order corrections or systematic considerations make one estimator preferable over the other. In this paper, we consider the efficacy of both estimators within the context of stacked weak lensing mass estimation in the Dark Energy Survey (DES). We find that the two estimators have nearly identical statistical precision, even after including higher order corrections, but that these corrections must be incorporated into the analysis to avoid observationally relevant biases in the recovered masses. We also demonstrate that finite bin-width effects may be significant if not properly accounted for, and that the two estimators exhibit different systematics, particularly with respect to contamination of the source catalog by foreground galaxies. Thus, the two estimators may be employed as a systematic cross-check of each other. Stacked weak lensing in the DES should allow for the mean mass of galaxy clusters to be calibrated to ~2% precision (statistical only), which can improve the figure of merit of the DES cluster abundance experiment by a factor of ~3 relative to the self-calibration expectation. A companion paper investigates how the two types of estimators considered here impact weak lensing peak finding efforts.

  2. An a-posteriori error estimator for linear elastic fracture mechanics using the stable generalized/extended finite element method

    NASA Astrophysics Data System (ADS)

    Lins, R. M.; Ferreira, M. D. C.; Proença, S. P. B.; Duarte, C. A.

    2015-10-01

    In this study, a recovery-based a-posteriori error estimator originally proposed for the Corrected XFEM is investigated in the framework of the stable generalized FEM (SGFEM). Both Heaviside and branch functions are adopted to enrich the approximations in the SGFEM. Some necessary adjustments to adapt the expressions defining the enhanced stresses in the original error estimator are discussed in the SGFEM framework. Relevant aspects such as effectivity indexes, error distribution, convergence rates and accuracy of the recovered stresses are used in order to highlight the main findings and the effectiveness of the error estimator. Two benchmark problems of 2-D fracture mechanics are selected to assess the robustness of the error estimator investigated here. The main findings of this investigation are that the SGFEM shows higher accuracy than the G/XFEM and a reduced sensitivity to blending element issues, and that the error estimator can accurately capture these features of both methods.

  3. Estimating pole/zero errors in GSN-IRIS/USGS network calibration metadata

    USGS Publications Warehouse

    Ringler, A.T.; Hutt, C.R.; Aster, R.; Bolton, H.; Gee, L.S.; Storm, T.

    2012-01-01

    Mapping the digital record of a seismograph into true ground motion requires the correction of the data by some description of the instrument's response. For the Global Seismographic Network (Butler et al., 2004), as well as many other networks, this instrument response is represented as a Laplace domain pole–zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. This Laplace representation assumes that the seismometer behaves as a linear system, with any abrupt changes described adequately via multiple time-invariant epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We present an iterative three-step method to estimate the instrument response parameters (poles and zeros) and their associated errors using random calibration signals. First, we solve a coarse nonlinear inverse problem using a least-squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records and enhances algorithm convergence. Second, we iteratively solve a nonlinear parameter estimation problem to obtain the least-squares best-fit Laplace pole–zero–gain model. Third, by applying the central limit theorem, we estimate the errors in this pole–zero model by solving the inverse problem at each frequency in a two-thirds octave band centered at each best-fit pole–zero frequency. This procedure yields error estimates of the 99% confidence interval. We demonstrate the method by applying it to a number of recent Incorporated Research Institutions in Seismology/United States Geological Survey (IRIS/USGS) network calibrations (network code IU).

  4. Wrinkles in the rare biosphere: Pyrosequencing errors can lead to artificial inflation of diversity estimates

    SciTech Connect

    Kunin, Victor; Engelbrektson, Anna; Ochman, Howard; Hugenholtz, Philip

    2009-08-01

    Massively parallel pyrosequencing of the small subunit (16S) ribosomal RNA gene has revealed that the extent of rare microbial populations in several environments, the 'rare biosphere', is orders of magnitude higher than previously thought. One important caveat with this method is that sequencing error could artificially inflate diversity estimates. Although the per-base error of 16S rDNA amplicon pyrosequencing has been shown to be as low as or lower than that of Sanger sequencing, no direct assessments of the effect of pyrosequencing errors on diversity estimates have been reported. Using only Escherichia coli MG1655 as a reference template, we find that 16S rDNA diversity is grossly overestimated unless relatively stringent read quality filtering and low clustering thresholds are applied. In particular, the common practice of removing reads with unresolved bases and anomalous read lengths is insufficient to ensure accurate estimates of microbial diversity. Furthermore, common and reproducible homopolymer length errors can result in relatively abundant spurious phylotypes, further confounding data interpretation. We suggest that stringent quality-based trimming of 16S pyrotags and clustering thresholds no greater than 97% identity should be used to avoid overestimates of the rare biosphere.

  5. A Lag-1 Smoother Approach to System Error Estimation: Sequential Method

    NASA Technical Reports Server (NTRS)

    Todling, Ricardo

    2014-01-01

    Starting from sequential data assimilation arguments, the present work shows how to use residual statistics from filtering and lag-1 (6-hour) smoothing to infer components of the system (model) error covariance matrix that project onto a dense observing network. The residual relationships involving the system error covariance matrix are similar to those available for deriving background, observation, and analysis error covariance information from filter residual statistics. An illustration of the approach is given for two low-dimensional dynamical systems: a linear damped harmonic oscillator and the nonlinear Lorenz (1995) model. The application examples consider the important case of evaluating the ability to estimate the model error covariance from residual time series obtained from suboptimal filters and smoothers that assume the model to be perfect. The examples show the residuals to contain the necessary information to allow for such estimation. The examples also illustrate the consequences of estimating covariances through time series of residuals (available in practice) instead of multiple realizations from Monte Carlo sampling. A recasting of the sequential approach in variational language appears in a companion article.

  6. Entropy-Based TOA Estimation and SVM-Based Ranging Error Mitigation in UWB Ranging Systems

    PubMed Central

    Yin, Zhendong; Cui, Kai; Wu, Zhilu; Yin, Liang

    2015-01-01

    The major challenges for Ultra-wide Band (UWB) indoor ranging systems are the dense multipath and non-line-of-sight (NLOS) problems of the indoor environment. To precisely estimate the time of arrival (TOA) of the first path (FP) in such a poor environment, a novel approach of entropy-based TOA estimation and support vector machine (SVM) regression-based ranging error mitigation is proposed in this paper. The proposed method can estimate the TOA precisely by measuring the randomness of the received signals and can mitigate the ranging error without recognition of the channel conditions. The entropy is used to measure the randomness of the received signals, and the FP can be determined as the sample that is followed by a sharp entropy decrease. SVM regression is employed to perform the ranging-error mitigation by modeling the regressor between the characteristics of received signals and the ranging error. The presented numerical simulation results show that the proposed approach achieves significant performance improvements in the CM1 to CM4 channels of the IEEE 802.15.4a standard, as compared to conventional approaches. PMID:26007726
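
    A loose toy illustration (Python) of the windowed-entropy idea: the histogram entropy of received samples changes sharply once a deterministic first path appears on top of the noise. This is only a sketch of the general principle, not the paper's estimator; the pulse shape, window length, and threshold below are arbitrary:

      import numpy as np

      rng = np.random.default_rng(5)

      def window_entropy(x, bins):
          # Shannon entropy (bits) of the histogram of the samples in x.
          counts, _ = np.histogram(x, bins=bins)
          p = counts[counts > 0] / counts.sum()
          return -np.sum(p * np.log2(p))

      # Toy received frame: Gaussian noise, then a decaying first path at t0
      n, t0 = 1000, 400
      r = rng.normal(0.0, 1.0, n)
      r[t0:t0 + 60] += 6.0 * np.exp(-np.arange(60) / 15.0)

      bins = np.linspace(r.min(), r.max(), 32)
      win = 50
      H = np.array([window_entropy(r[i:i + win], bins) for i in range(n - win)])

      # Flag the first window whose entropy departs sharply from the noise-only
      # baseline (threshold is arbitrary; argmax returns 0 if nothing is flagged).
      baseline, spread = H[:200].mean(), H[:200].std()
      toa = int(np.argmax(np.abs(H - baseline) > 4.0 * spread))
      print(f"first flagged window starts at sample {toa} (true first path: {t0})")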

  8. Sensitivity of Satellite Rainfall Estimates Using a Multidimensional Error Stochastic Model

    NASA Astrophysics Data System (ADS)

    Falck, A. S.; Vila, D. A.; Tomasella, J.

    2011-12-01

    Error propagation models of satellite precipitation fields are a key element in the response and performance of hydrological models, which depend on the reliability and availability of rainfall data. However, most of these models treat the error as a unidimensional measurement, with no consideration of the type of process involved. The limitations of unidimensional error propagation models were overcome by multidimensional error propagation stochastic models. In this study, SREM2D (A Two-Dimensional Satellite Rainfall Error Model) was used to simulate satellite precipitation fields by inverse calibration of its parameters against reference data, in this case, gauge rainfall data. The sensitivity of satellite rainfall estimates from different satellite-based algorithms was investigated for use in hydrologic simulation over the Tocantins basin, a transition area between the Amazon basin and the relatively drier northeast region, using the SREM2D error propagation model. Preliminary results show that SREM2D has the potential to generate realistic ensembles of satellite rain fields to feed hydrologic models. Ongoing research is focused on the impact of rainfall ensembles simulated by SREM2D on hydrologic modeling using the Model of Large Basin of the National Institute For Space Research (MGB-INPE) developed for Brazilian basins.

  9. Estimated Cost Savings from Reducing Errors in the Preparation of Sterile Doses of Medications

    PubMed Central

    Schneider, Philip J.

    2014-01-01

    Abstract Background: Preventing intravenous (IV) preparation errors will improve patient safety and reduce costs by an unknown amount. Objective: To estimate the financial benefit of robotic preparation of sterile medication doses compared to traditional manual preparation techniques. Methods: A probability pathway model based on published rates of errors in the preparation of sterile doses of medications was developed. Literature reports of adverse events were used to project the array of medical outcomes that might result from these errors. These parameters were used as inputs to a customized simulation model that generated a distribution of possible outcomes, their probability, and associated costs. Results: By varying the important parameters across ranges found in published studies, the simulation model produced a range of outcomes for all likely possibilities. Thus it provided a reliable projection of the errors avoided and the cost savings of an automated sterile preparation technology. The average of 1,000 simulations resulted in the prevention of 5,420 medication errors and associated savings of $288,350 per year. The simulation results can be narrowed to specific scenarios by fixing model parameters that are known and allowing the unknown parameters to range across values found in previously published studies. Conclusions: The use of a robotic device can reduce health care costs by preventing errors that can cause adverse drug events. PMID:25477598
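
    A minimal sketch (Python) of a probability-pathway Monte Carlo of this kind; every rate and cost below is a hypothetical stand-in, not a value from the study:

      import numpy as np

      rng = np.random.default_rng(6)

      # All parameter values below are hypothetical, not the paper's inputs.
      N_DOSES = 100_000           # IV doses prepared per year
      ERROR_RATE = 0.01           # preparation errors per manual dose
      P_HARM = 0.03               # probability an error leads to an adverse event
      MEAN_EVENT_COST = 8_000.0   # mean cost of one adverse drug event ($)

      def simulate_annual_savings(n_sims=1000):
          # Monte Carlo over the probability pathway: errors -> adverse
          # events -> costs avoided if the errors are prevented.
          savings = np.empty(n_sims)
          for i in range(n_sims):
              errors = rng.binomial(N_DOSES, ERROR_RATE)
              events = rng.binomial(errors, P_HARM)
              # heavy-tailed per-event costs (gamma with the chosen mean)
              savings[i] = rng.gamma(2.0, MEAN_EVENT_COST / 2.0, size=events).sum()
          return savings

      s = simulate_annual_savings()
      print(f"mean annual savings: ${s.mean():,.0f}  "
            f"(5th-95th pct: ${np.quantile(s, 0.05):,.0f} - ${np.quantile(s, 0.95):,.0f})")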

  10. Estimating the Standard Error of the Maximum Likelihood Ability Estimator in Adaptive Testing Using the Posterior-Weighted Test Information Function

    ERIC Educational Resources Information Center

    Penfield, Randall D.

    2007-01-01

    The standard error of the maximum likelihood ability estimator is commonly estimated by evaluating the test information function at an examinee's current maximum likelihood estimate (a point estimate) of ability. Because the test information function evaluated at the point estimate may differ from the test information function evaluated at an…
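
    For context, the conventional approach discussed here computes SE(θ̂) = 1/√I(θ̂) from the test information at the point estimate; the sketch below (Python; the item parameters and the posterior-weighting scheme are illustrative assumptions, not the article's exact formulation) contrasts it with a posterior-weighted average of the information:

      import numpy as np

      def p_3pl(theta, a, b, c):
          # 3PL item response function.
          return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

      def item_info_3pl(theta, a, b, c):
          # Fisher information of a 3PL item at ability theta (Lord's formula).
          p = p_3pl(theta, a, b, c)
          return a**2 * ((p - c) / (1.0 - c))**2 * (1.0 - p) / p

      # Hypothetical 10-item test
      a = np.full(10, 1.2)
      b = np.linspace(-2.0, 2.0, 10)
      c = np.full(10, 0.2)

      theta_hat = 0.5   # current maximum likelihood ability estimate
      se_point = 1.0 / np.sqrt(item_info_3pl(theta_hat, a, b, c).sum())
      print(f"SE from information at the point estimate: {se_point:.3f}")

      # Posterior-weighted variant (a sketch only, not necessarily the
      # article's exact definition): average the test information over an
      # ability grid weighted by an approximate posterior, then invert.
      grid = np.linspace(-4.0, 4.0, 161)
      post = np.exp(-0.5 * ((grid - theta_hat) / 0.4)**2)
      post /= post.sum()
      info_pw = sum(w * item_info_3pl(t, a, b, c).sum() for w, t in zip(post, grid))
      print(f"SE from posterior-weighted information: {1.0 / np.sqrt(info_pw):.3f}")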

  11. Contribution to the Nonparametric Estimation of the Density of the Regression Errors (Doctoral Thesis)

    E-print Network

    LSTA, Rawane Samb

    2010-01-01

    This thesis deals with the nonparametric estimation of the density f of the regression error term E of the model Y=m(X)+E, assuming its independence from the covariate X. The difficulty linked to this study is the fact that the regression error E is not observed. In such a setup, it would be unwise, for estimating f, to use a conditional approach based upon the probability distribution function of Y given X. Indeed, this approach is affected by the curse of dimensionality, so that the resulting estimator of the residual term E would have a considerably slow rate of convergence if the dimension of X is very high. Two approaches are proposed in this thesis to avoid the curse of dimensionality. The first approach uses the estimated residuals, while the second integrates a nonparametric conditional density estimator of Y given X. While proceeding in this way can circumvent the curse of dimensionality, a challenging issue is to evaluate the impact of the estimated residuals on the final estimator of the density f. We will also at...

  12. A variational method for finite element stress recovery and error estimation

    NASA Technical Reports Server (NTRS)

    Tessler, A.; Riggs, H. R.; Macy, S. C.

    1993-01-01

    A variational method for obtaining smoothed stresses from a finite element derived nonsmooth stress field is presented. The method is based on minimizing a functional involving discrete least-squares error plus a penalty constraint that ensures smoothness of the stress field. An equivalent accuracy criterion is developed for the smoothing analysis, which results in a C¹-continuous smoothed stress field possessing the same order of accuracy as that found at the superconvergent optimal stress points of the original finite element analysis. Application of the smoothing analysis to residual error estimation is also demonstrated.

  13. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    SciTech Connect

    Jakeman, J.D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  14. Mass load estimation errors utilizing grab sampling strategies in a karst watershed

    USGS Publications Warehouse

    Fogle, A.W.; Taraba, J.L.; Dinger, J.S.

    2003-01-01

    Developing a mass load estimation method appropriate for a given stream and constituent is difficult due to inconsistencies in hydrologic and constituent characteristics. The difficulty may be increased in flashy flow conditions such as karst. Many projects undertaken are constrained by budget and manpower and do not have the luxury of sophisticated sampling strategies. The objectives of this study were to: (1) examine two grab sampling strategies with varying sampling intervals and determine the error in mass load estimates, and (2) determine the error that can be expected when a grab sample is collected at a time of day when the diurnal variation is most divergent from the daily mean. Results show grab sampling with continuous flow to be a viable data collection method for estimating mass load in the study watershed. Comparing weekly, biweekly, and monthly grab sampling, monthly sampling produces the best results with this method. However, the time of day the sample is collected is important. Failure to account for diurnal variability when collecting a grab sample may produce unacceptable error in mass load estimates. The best time to collect a sample is when the diurnal cycle is nearest the daily mean.

  15. The Curious Anomaly of Skewed Judgment Distributions and Systematic Error in the Wisdom of Crowds

    PubMed Central

    Nash, Ulrik W.

    2014-01-01

    Judgment distributions are often skewed and we know little about why. This paper explains the phenomenon of skewed judgment distributions by introducing the augmented quincunx (AQ) model of sequential and probabilistic cue categorization by neurons of judges. In the process of developing inferences about true values, when neurons categorize cues better than chance, and when the particular true value is extreme compared to what is typical and anchored upon, then populations of judges form skewed judgment distributions with high probability. Moreover, the collective error made by these people can be inferred from how skewed their judgment distributions are, and in what direction they tilt. This implies not just that judgment distributions are shaped by cues, but that judgment distributions are cues themselves for the wisdom of crowds. The AQ model also predicts that judgment variance correlates positively with collective error, thereby challenging what is commonly believed about how diversity and collective intelligence relate. Data from 3053 judgment surveys about US macroeconomic variables obtained from the Federal Reserve Bank of Philadelphia and the Wall Street Journal provide strong support, and implications are discussed with reference to three central ideas on collective intelligence, these being Galton's conjecture on the distribution of judgments, Muth's rational expectations hypothesis, and Page's diversity prediction theorem. PMID:25406078

  17. Estimates of ocean forecast error covariance derived from Hessian Singular Vectors

    NASA Astrophysics Data System (ADS)

    Smith, Kevin D.; Moore, Andrew M.; Arango, Hernan G.

    2015-05-01

    Experience in numerical weather prediction suggests that singular value decomposition (SVD) of a forecast can yield useful a priori information about the growth of forecast errors. It has been shown formally that SVD using the inverse of the expected analysis error covariance matrix to define the norm at initial time yields the Empirical Orthogonal Functions (EOFs) of the forecast error covariance matrix at the final time. Because of their connection to the 2nd derivative of the cost function in 4-dimensional variational (4D-Var) data assimilation, the initial time singular vectors defined in this way are often referred to as the Hessian Singular Vectors (HSVs). In the present study, estimates of ocean forecast errors and forecast error covariance were computed using SVD applied to a baroclinically unstable temperature front in a re-entrant channel using the Regional Ocean Modeling System (ROMS). An identical twin approach was used in which a truth run of the model was sampled to generate synthetic hydrographic observations that were then assimilated into the same model started from an incorrect initial condition using 4D-Var. The 4D-Var system was run sequentially, and forecasts were initialized from each ocean analysis. SVD was performed on the resulting forecasts to compute the HSVs and corresponding EOFs of the expected forecast error covariance matrix. In this study, a reduced rank approximation of the inverse expected analysis error covariance matrix was used to compute the HSVs and EOFs based on the Lanczos vectors computed during the 4D-Var minimization of the cost function. This has the advantage that the entire spectrum of HSVs and EOFs in the reduced space can be computed. The associated singular value spectrum is found to yield consistent and reliable estimates of forecast error variance in the space spanned by the EOFs. In addition, at long forecast lead times the resulting HSVs and companion EOFs are able to capture many features of the actual realized forecast error at the largest scales. Forecast error growth via the HSVs was found to be significantly influenced by the non-normal character of the underlying forecast circulation, and is accompanied by a forward energy cascade, suggesting that forecast errors could be effectively controlled by reducing the error at the largest scales in the forecast initial conditions. A predictive relation for the amplitude of the basin integrated forecast error in terms of the mean aspect ratio of the forecast error hyperellipse (quantified in terms of the mean eccentricity) was also identified which could prove useful for predicting the level of forecast error a priori. All of these findings were found to be insensitive to the configuration of the 4D-Var data assimilation system and the resolution of the observing network.

  18. Estimation of chromatic errors from broadband images for high contrast imaging: sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Sirbu, Dan; Belikov, Ruslan

    2016-01-01

    Many concepts have been proposed to enable direct imaging of planets around nearby stars, which would enable spectroscopic observations of their atmospheres and the potential discovery of biomarkers. The main technical challenge associated with direct imaging of exoplanets is to effectively control both the diffraction and the scattered light from the star so that the dim planetary companion can be seen. Use of an internal coronagraph with an adaptive optical system for wavefront correction is one of the most mature methods and is being developed as an instrument addition to the WFIRST-AFTA space mission. In addition, instruments such as GPI and SPHERE are already being used on the ground and are yielding spectra of giant planets. For the deformable mirror (DM) to recover a dark hole region with sufficiently high contrast in the image plane, mid-spatial frequency wavefront errors must be estimated. To date, most broadband lab demonstrations use narrowband filters to obtain an estimate of the chromaticity of the wavefront error, and this can consume a large percentage of the total integration time. Previously, we have proposed a method to estimate the chromaticity of wavefront errors using only broadband images; we have demonstrated that under idealized conditions wavefront errors can be estimated from images composed of discrete wavelengths. This is achieved by using DM probes with sufficient spatially-localized chromatic diversity. Here we report on the results of a study of the performance of this method with respect to realistic broadband images including noise. Additionally, we study optimal probe patterns that enable a reduction of the number of probes used, and we compare the integration time with narrowband and IFS estimation methods.

  19. Forest canopy height estimation using ICESat/GLAS data and error factor analysis in Hokkaido, Japan

    NASA Astrophysics Data System (ADS)

    Hayashi, Masato; Saigusa, Nobuko; Oguma, Hiroyuki; Yamagata, Yoshiki

    2013-07-01

    Spaceborne light detection and ranging (LiDAR) enables us to obtain information about vertical forest structure directly, and it has often been used to measure forest canopy height or above-ground biomass. However, little attention has been given to comparisons of the accuracy of the different estimation methods of canopy height or to the evaluation of the error factors in canopy height estimation. In this study, we tested three methods of estimating canopy height using the Geoscience Laser Altimeter System (GLAS) onboard NASA's Ice, Cloud, and land Elevation Satellite (ICESat), and evaluated several factors that affected accuracy. Our study areas were Tomakomai and Kushiro, two forested areas on Hokkaido in Japan. The accuracy of the canopy height estimates was verified by ground-based measurements. We also conducted a multivariate analysis using quantification theory type I (multiple-regression analysis of qualitative data) and identified the observation conditions that had a large influence on estimation accuracy. The method using the digital elevation model was the most accurate, with a root-mean-square error (RMSE) of 3.2 m. However, GLAS data with a low signal-to-noise ratio (≤10.0) and data taken from September to October 2009 had to be excluded from the analysis because the estimation accuracy of canopy height was remarkably low. After these data were excluded, the multivariate analysis showed that surface slope had the greatest effect on estimation accuracy, and the accuracy dropped the most in steeply sloped areas. We developed a second model with two equations to estimate canopy height depending on the surface slope, which improved estimation accuracy (RMSE = 2.8 m). These results should prove useful and provide practical suggestions for estimating forest canopy height using spaceborne LiDAR.

  20. SU-E-T-405: Robustness of Volumetric-Modulated Arc Therapy (VMAT) Plans to Systematic MLC Positional Errors

    SciTech Connect

    Qi, P; Xia, P

    2014-06-01

    Purpose: To evaluate the dosimetric impact of systematic MLC positional errors (PEs) on the quality of volumetric-modulated arc therapy (VMAT) plans. Methods: Five patients with head-and-neck cancer (HN) and five patients with prostate cancer were randomly chosen for this study. The clinically approved VMAT plans were designed with 2–4 coplanar arc beams with non-zero collimator angles in the Pinnacle planning system. Systematic MLC PEs of 0.5, 1.0, and 2.0 mm on both MLC banks were introduced into the original VMAT plans using an in-house program, and the plans were recalculated with the same planned Monitor Units in the Pinnacle system. For each patient, the original VMAT plans and the plans with MLC PEs were evaluated according to the dose-volume histogram information and Gamma index analysis. Results: For one primary target, the ratio of V100 in the plans with 0.5, 1.0, and 2.0 mm MLC PEs to that in the clinical plans was 98.8 ± 2.2%, 97.9 ± 2.1%, and 90.1 ± 9.0% for HN cases and 99.5 ± 3.2%, 98.9 ± 1.0%, and 97.0 ± 2.5% for prostate cases. For all OARs, the relative difference of Dmean in all plans was less than 1.5%. With 2mm/2% criteria for Gamma analysis, the passing rates were 99.0 ± 1.5% for HN cases and 99.7 ± 0.3% for prostate cases between the planar doses from the original plans and the plans with 1.0 mm MLC errors. The corresponding Gamma passing rates dropped to 88.9 ± 5.3% for HN cases and 83.4 ± 3.2% for prostate cases when comparing planar doses from the original plans and the plans with 2.0 mm MLC errors. Conclusion: For VMAT plans, systematic MLC PEs up to 1.0 mm did not affect the plan quality in terms of target coverage, OAR sparing, and Gamma analysis with 2mm/2% criteria.

  1. Error estimation for moment analysis in heavy-ion collision experiment

    NASA Astrophysics Data System (ADS)

    Luo, Xiaofeng

    2012-02-01

    Higher moments of conserved quantities are predicted to be sensitive to the correlation length and connected to the thermodynamic susceptibility. Thus, higher moments of net-baryon, net-charge and net-strangeness have been extensively studied theoretically and experimentally to explore the phase structure and bulk properties of QCD matter created in heavy-ion collision experiments. As higher moment analysis is a statistics-hungry study, error estimation is crucial for extracting physics information from the limited experimental data. In this paper, we derive the limiting distributions and error formulae, based on the delta theorem in statistics, for the various order moments used in experimental data analysis. Monte Carlo simulation is also applied to test the error formulae.
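
    As a concrete example of a delta-theorem error formula of this kind, the sketch below computes the large-n standard error of the sample variance, SE(m2) ≈ sqrt((m4 − m2²)/n) with central moments m2 and m4, and checks it against Monte Carlo replicas; the sample sizes and Gaussian toy data are invented.

      import numpy as np

      rng = np.random.default_rng(0)

      def variance_se_delta(sample):
          # Delta-theorem (large-n) standard error of the sample variance:
          # Var(m2) ~ (m4 - m2^2) / n, with central moments from the data.
          n = len(sample)
          c = sample - sample.mean()
          m2 = np.mean(c ** 2)
          m4 = np.mean(c ** 4)
          return np.sqrt((m4 - m2 ** 2) / n)

      n = 5000
      print("delta-theorem SE: %.4f" % variance_se_delta(rng.normal(size=n)))

      # Compare with the spread of the sample variance over many replicas.
      reps = np.array([np.var(rng.normal(size=n)) for _ in range(400)])
      print("Monte Carlo SE:   %.4f" % reps.std(ddof=1))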

  2. Error estimation for localized signal properties: application to atmospheric mixing height retrievals

    NASA Astrophysics Data System (ADS)

    Biavati, G.; Feist, D. G.; Gerbig, C.; Kretschmer, R.

    2015-10-01

    The mixing height is a key parameter for many applications that relate surface-atmosphere exchange fluxes to atmospheric mixing ratios, e.g., in atmospheric transport modeling of pollutants. The mixing height can be estimated with various methods: profile measurements from radiosondes as well as remote sensing (e.g., optical backscatter measurements). For quantitative applications, it is important to estimate not only the mixing height itself but also the uncertainty associated with this estimate. However, classical error propagation typically fails on mixing height estimates that use thresholds in vertical profiles of some measured or measurement-derived quantity. Therefore, we propose a method to estimate the uncertainty of a mixing height estimate. The uncertainty we calculate is related not to the physics of the boundary layer (e.g., entrainment zone thickness) but to the quality of the analyzed signals. The method relies on the concept of statistical confidence and on knowledge of the measurement errors. It can also be applied to problems outside atmospheric mixing height retrievals where properties have to be assigned to a specific position, e.g., the location of a local extreme.

  3. Sampling Errors of SSM/I and TRMM Rainfall Averages: Comparison with Error Estimates from Surface Data and a Sample Model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.

  4. Estimation of Local Energy Norms of Modeling Error in Multi-Scale Modeling of Linearly Elastic Heterogeneous Solids

    E-print Network

    Carter, Jason Aaron

    2011-12-31

    -scale failure mechanisms. Eventually, however, they could lead to structural failure, i.e., on the macro-scale. This thesis adds to the modeling error estimation field by introducing an estimate of the modeling error in terms of a nonlinear quantity of interest...

  5. Equilibrating errors: reliable estimation of information transmission rates in biological systems with spectral analysis-based methods.

    PubMed

    Ignatova, Irina; French, Andrew S; Immonen, Esa-Ville; Frolov, Roman; Weckström, Matti

    2014-06-01

    Shannon's seminal approach to estimating information capacity is widely used to quantify information processing by biological systems. However, Shannon information estimation based on power spectra necessarily contains two sources of error: time delay bias error and random error. These errors are particularly important for systems with relatively large time delay values and for responses of limited duration, as is often the case in experimental work. The window function type and size chosen, as well as the values of inherent delays, cause changes in both the delay bias and random errors, with possibly strong effects on the estimates of system properties. Here, we investigated the properties of these errors using white-noise simulations and analysis of experimental photoreceptor responses to naturalistic and white-noise light contrasts. Photoreceptors from several insect species were used, each characterized by different visual performance, behavior, and ecology. We show that the effect of random error on the spectral estimates of photoreceptor performance (gain, coherence, signal-to-noise ratio, Shannon information rate) is opposite to that of the time delay bias error: the former overestimates the information rate, while the latter underestimates it. We propose a new algorithm for reducing the impact of time delay bias error and random error, based on discovering, and then using, the window size at which the absolute values of these errors are equal and opposite, thus cancelling each other and allowing minimally biased measurement of neural coding. PMID:24692025

  6. A graphical analysis of the systematic error of classical binned methods in constructing luminosity functions

    NASA Astrophysics Data System (ADS)

    Yuan, Zunli; Wang, Jiancheng

    2013-06-01

    The classical 1/V_a and PC methods of constructing binned luminosity functions (LFs) are revisited and compared by graphical analysis. Using both theoretical analysis and illustration with an example, we show why the two methods give different results for the bins which are crossed by the flux-limit curves L = L_lim(z). Based on a combined sample simulated by a Monte Carlo method, the estimates φ of the two methods are compared with the input model LFs. The two methods give identical and ideal estimates for the high-luminosity points of each redshift interval. However, for the low-luminosity bins of all the redshift intervals, both methods give smaller estimates than the input model. We conclude that once the LF is evolving with redshift, the classical binned methods are unlikely to give an ideal estimate over the total luminosity range. Page & Carrera (Mon. Not. R. Astron. Soc. 311:433, 2000) noticed that for objects close to the flux limit, φ_{1/Va} is nearly always too small. We believe this is due to the arbitrary choice of redshift and luminosity intervals, because φ_{1/Va} is more sensitive to how the bins are chosen than φ_PC. We suggest a new binning method, which improves the LFs produced by the 1/V_a method significantly, and also improves the LFs produced by the PC method. Our simulations show that after adopting this new binning, both the 1/V_a and PC methods give comparable results.

  7. Weiss-Weinstein Family of Error Bounds for Quantum Parameter Estimation

    E-print Network

    Lu, Xiao-Ming

    2015-01-01

    To approach the fundamental limits on the estimation precision for random parameters in quantum systems, we propose a quantum version of the Weiss-Weinstein family of lower bounds on estimation errors. The quantum Weiss-Weinstein bounds (QWWB) include the popular quantum Cramér-Rao bound (QCRB) as a special case, and do not require the differentiability of prior distributions and conditional quantum states that the QCRB does; thus, the QWWB is a superior alternative to the QCRB. We show that the QWWB captures well the insurmountable error caused by the ambiguity of the phase in quantum states, which cannot be revealed by the QCRB. Furthermore, we use the QWWB to expose the possible shortcomings of the QCRB when the number of independent and identically distributed systems is not sufficiently large.

  8. Error estimation and adaptive order nodal method for solving multidimensional transport problems

    SciTech Connect

    Zamonsky, O.M.; Gho, C.J.; Azmy, Y.Y.

    1998-01-01

    The authors propose a modification of the Arbitrarily High Order Transport Nodal method whereby each node and each direction is solved using a different expansion order. With this feature and a previously proposed a posteriori error estimator, they develop an adaptive order scheme to automatically improve the accuracy of the solution of the transport equation. They implemented the modified nodal method, the error estimator, and the adaptive order scheme in a discrete-ordinates code for solving monoenergetic, fixed source, isotropic scattering problems in two-dimensional Cartesian geometry. They solve two test problems with large homogeneous regions to test the adaptive order scheme. The results show that the adaptive process reduces storage requirements while preserving the accuracy of the results.

  9. Estimation of random errors for lidar based on noise scale factor

    NASA Astrophysics Data System (ADS)

    Wang, Huan-Xue; Liu, Jian-Guo; Zhang, Tian-Shu

    2015-08-01

    Estimation of the random errors that are due to shot noise of photomultiplier tube (PMT) or avalanche photodiode (APD) detectors is essential in lidar observation. Because the incident photoelectrons follow a Poisson distribution, the standard deviation of the signal remains proportional to the square root of its mean value. Based on this relationship, the noise scale factor (NSF) is introduced into the estimation, which requires only a single data sample. The method avoids the interference of atmospheric fluctuations in the calculation of random errors. The results show that this method is feasible and reliable. Project supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB05040300) and the National Natural Science Foundation of China (Grant No. 41205119).

  10. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek obtained from the iterative two-stage method also improved the predictive performance of the individual models and of model averaging in both the synthetic and experimental studies.
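
    For reference, the sketch below shows how model-averaging weights are typically computed from information-criterion values (generic w_k ∝ exp(−Δ_k/2) weights with invented IC numbers, not this study's code); the second call reproduces the pathology described above, where one model takes essentially all the weight once IC differences grow large.

      import numpy as np

      def averaging_weights(ic_values):
          # Weights from information-criterion values (AIC/AICc/BIC/KIC):
          # w_k proportional to exp(-0.5 * (IC_k - IC_min)).
          ic = np.asarray(ic_values, dtype=float)
          w = np.exp(-0.5 * (ic - ic.min()))
          return w / w.sum()

      print(averaging_weights([100.0, 101.5, 103.0]))  # weights spread out
      print(averaging_weights([100.0, 130.0, 150.0]))  # best model ~100%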

  11. Analytic Study of Performance of Error Estimators for Linear Discriminant Analysis with Applications in Genomics 

    E-print Network

    Zollanvari, Amin

    2012-02-14

    Dissertation front matter and table-of-contents fragments (degree awarded December 2010; Major Subject: Electrical Engineering). Recoverable content includes a table of the minimum sample size n (with n0 = n1 = n) required to reach a desired error-estimation accuracy in the univariate case, and a table of genes selected using the validity-goodness model selection criterion.

  12. Filtering Error Estimates and Order of Accuracy via the Peano Kernel Theorem

    SciTech Connect

    Jerome Blair

    2011-02-01

    The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise. The concept of the order of accuracy of a filter is introduced and used as an organizing principle to compare the accuracy of different filters.

  13. Analysis and Optimization of Classifier Error Estimator Performance within a Bayesian Modeling Framework 

    E-print Network

    Dalton, Lori Anne

    2012-07-16

    of classifier error estimation, exploiting both the assumed model and observed data. Important applications include, but are not limited to, cancer diagnosis and any small-sample classification problem. Table-of-contents fragments indicate chapters on robustness to false circular Gaussian and general Gaussian modeling assumptions, and on performance with real breast cancer data.

  14. Avoiding Systematic Errors in Isometric Squat-Related Studies without Pre-Familiarization by Using Sufficient Numbers of Trials

    PubMed Central

    Pekünlü, Ekim; Özsu, ?lbilge

    2014-01-01

    There is no scientific evidence in the literature indicating that maximal isometric strength measures can be assessed within 3 trials. We questioned whether the results of isometric squat-related studies in which maximal isometric squat strength (MISS) testing was performed using limited numbers of trials without pre-familiarization might have included systematic errors, especially those resulting from acute learning effects. Forty resistance-trained male participants performed 8 isometric squat trials without pre-familiarization. The highest measures in the first "n" trials (3 ≤ n ≤ 8) of these 8 squats were regarded as MISS obtained using 6 different MISS test methods featuring different numbers of trials (The Best of n Trials Method [BnT]). When B3T and B8T were paired with other methods, high reliability was found between the paired methods in terms of intraclass correlation coefficients (0.93–0.98) and coefficients of variation (3.4–7.0%). The Wilcoxon's signed rank test indicated that MISS obtained using B3T and B8T were lower (p < 0.001) and higher (p < 0.001), respectively, than those obtained using other methods. The Bland-Altman method revealed a lack of agreement between any of the paired methods. Simulation studies illustrated that increasing the number of trials to 9–10 using a relatively large sample size (i.e., ≥ 24) could be an effective means of obtaining the actual MISS values of the participants. The common use of a limited number of trials in MISS tests without pre-familiarization appears to have no solid scientific base. Our findings suggest that the number of trials should be increased in commonly used MISS tests to avoid learning effect-related systematic errors. PMID:25414753
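
    A quick Monte Carlo sketch of the learning-effect bias at issue: with an invented learning curve and noise level (not the authors' protocol or data), the mean best-of-n estimate creeps upward as more trials are allowed.

      import numpy as np

      rng = np.random.default_rng(1)
      n_subj, n_trials, true_miss = 40, 10, 2000.0   # invented strength, N

      # Toy model: performance rises toward the true maximum over the first
      # trials (acute learning effect) with trial-to-trial noise.
      trial = np.arange(n_trials)
      learning = true_miss * (1.0 - 0.08 * np.exp(-trial / 2.0))
      scores = learning + rng.normal(0.0, 40.0, size=(n_subj, n_trials))

      for n in (3, 5, 8, 10):
          best_of_n = scores[:, :n].max(axis=1).mean()
          print("best of %2d trials: %.0f N (true max %.0f N)"
                % (n, best_of_n, true_miss))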

  15. Estimation of Aperture Errors with Direct Interferometer-Output Feedback for Spacecraft Formation Control

    NASA Technical Reports Server (NTRS)

    Lu, Hui-Ling; Cheng, Victor H. L.; Leitner, Jesse A.; Carpenter, Kenneth G.

    2004-01-01

    Long-baseline space interferometers involving formation flying of multiple spacecraft hold great promise as future space missions for high-resolution imagery. The major challenge of obtaining high-quality interferometric synthesized images from long-baseline space interferometers is to control these spacecraft and their optics payloads in the specified configuration accurately. In this paper, we describe our effort toward fine control of long-baseline space interferometers without resorting to additional sensing equipment. We present an estimation procedure that effectively extracts relative x/y translational exit pupil aperture deviations from the raw interferometric image with small estimation errors.

  16. Estimate of precession and polar motion errors from planetary encounter station location solutions

    NASA Technical Reports Server (NTRS)

    Pease, G. E.

    1978-01-01

    Jet Propulsion Laboratory Deep Space Station (DSS) location solutions based on two JPL planetary ephemerides, DE 84 and DE 96, at eight planetary encounters were used to obtain weighted least squares estimates of precession and polar motion errors. The solution for precession error in right ascension yields a value of 0.3 × 10^-5 ± 0.8 × 10^-6 deg/year. This maps to a right ascension error of 1.3 × 10^-5 ± 0.4 × 10^-5 deg at the first Voyager 1979 Jupiter encounter if the current JPL DSS location set is used. Solutions for precession and polar motion using station locations based on DE 84 agree well with the solution using station locations referenced to DE 96. The precession solution removes the apparent drift in station longitude and spin axis distance estimates, while the encounter polar motion solutions consistently decrease the scatter in station spin axis distance estimates.

  17. Error Estimates of the Ares I Computed Turbulent Ascent Longitudinal Aerodynamic Analysis

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Ghaffari, Farhad

    2012-01-01

    Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimates derived from an iterative convergence grid refinement, are presented. Computational results are based on an unstructured grid, Reynolds-averaged Navier-Stokes analysis. The validity of the approach to compute the associated error estimates, derived from a base grid to an extrapolated infinite-size grid, was first demonstrated on a sub-scaled wind tunnel model at representative ascent flow conditions for which experimental data existed. Such analysis at the transonic flow conditions revealed a maximum deviation of about 23% between the computed longitudinal aerodynamic coefficients with the base grid and the measured data across the entire range of roll angles. This maximum deviation from the wind tunnel data was associated with the computed normal force coefficient at the transonic flow condition and was reduced to approximately 16% based on the infinite-size grid. However, all the computed aerodynamic coefficients with the base grid at the supersonic flow conditions showed a maximum deviation of only about 8%, with that level improving to approximately 5% for the infinite-size grid. The results and the error estimates based on the established procedure are also presented for the flight flow conditions.

  18. Density functionals for surface science: Exchange-correlation model development with Bayesian error estimation

    NASA Astrophysics Data System (ADS)

    Wellendorff, Jess; Lundgaard, Keld T.; Møgelhøj, Andreas; Petzold, Vivien; Landis, David D.; Nørskov, Jens K.; Bligaard, Thomas; Jacobsen, Karsten W.

    2012-06-01

    A methodology for semiempirical density functional optimization, using regularization and cross-validation methods from machine learning, is developed. We demonstrate that such methods enable well-behaved exchange-correlation approximations in very flexible model spaces, thus avoiding the overfitting found when standard least-squares methods are applied to high-order polynomial expansions. A general-purpose density functional for surface science and catalysis studies should accurately describe bond breaking and formation in chemistry, solid state physics, and surface chemistry, and should preferably also include van der Waals dispersion interactions. Such a functional necessarily compromises between describing fundamentally different types of interactions, making transferability of the density functional approximation a key issue. We investigate this trade-off between describing the energetics of intramolecular and intermolecular, bulk solid, and surface chemical bonding, and the developed optimization method explicitly handles making the compromise based on the directions in model space favored by different materials properties. The approach is applied to designing the Bayesian error estimation functional with van der Waals correlation (BEEF-vdW), a semilocal approximation with an additional nonlocal correlation term. Furthermore, an ensemble of functionals around BEEF-vdW comes out naturally, offering an estimate of the computational error. An extensive assessment on a range of data sets validates the applicability of BEEF-vdW to studies in chemistry and condensed matter physics. Applications of the approximation and its Bayesian ensemble error estimate to two intricate surface science problems support this.
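
    The ensemble error estimate reduces to a simple operation: evaluate the quantity of interest with every member of the functional ensemble and report the spread. A sketch with random numbers standing in for actual ensemble DFT results (the energies and spread are invented):

      import numpy as np

      # Hypothetical adsorption energies (eV) for one system, one value per
      # member of a stand-in functional ensemble around the best fit.
      ens = np.random.default_rng(2).normal(-0.52, 0.07, size=2000)

      # Ensemble mean as the estimate, ensemble spread as the error bar.
      print("E_ads = %.2f +/- %.2f eV" % (ens.mean(), ens.std()))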

  19. COMPARISON OF VARIANCE ESTIMATORS OF THE HORVITZ-THOMPSON ESTIMATOR FOR RANDOMIZED VARIABLE PROBABILITY SYSTEMATIC SAMPLING

    EPA Science Inventory

    Two large-scale environmental surveys, the National Stream Survey (NSS) and the Environmental Protection Agency's proposed Environmental Monitoring and Assessment Program (EMAP), motivated investigation of estimators of the variance of the Horvitz-Thompson estimator under variabl...

  20. Volumetric apparatus for hydrogen adsorption and diffusion measurements: Sources of systematic error and impact of their experimental resolutions

    SciTech Connect

    Policicchio, Alfonso; Maccallini, Enrico; Kalantzopoulos, Georgios N.; Cataldi, Ugo; Abate, Salvatore; Desiderio, Giovanni

    2013-10-15

    The development of a volumetric apparatus (also known as a Sieverts' apparatus) for accurate and reliable hydrogen adsorption measurements is described. The instrument minimizes the sources of systematic error, which are mainly due to inner volume calibration, stability and uniformity of the temperatures, precise evaluation of the skeletal volume of the measured samples, and the thermodynamic properties of the gas species. A series of hardware and software solutions were designed and introduced in the apparatus, which we denote f-PcT, in order to deal with these aspects. The results are presented in terms of an accurate evaluation of the equilibrium and dynamical characteristics of molecular hydrogen adsorption on two well-known porous media. The contribution of each experimental solution to the error propagation of the adsorbed moles is assessed. The developed volumetric apparatus for gas storage capacity measurements allows an accurate evaluation over a four-order-of-magnitude pressure range (from 1 kPa to 8 MPa) and at temperatures between 77 K and 470 K. The acquired results are in good agreement with the values reported in the literature.

  1. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    SciTech Connect

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  3. Height Estimation and Error Assessment of Inland Water Level Time Series calculated by a Kalman Filter Approach using Multi-Mission Satellite Altimetry

    NASA Astrophysics Data System (ADS)

    Schwatke, Christian; Dettmering, Denise; Boergens, Eva

    2015-04-01

    Originally designed for open ocean applications, satellite radar altimetry can also contribute promising results over inland waters. Its measurements help us understand the water cycle of the Earth system, which makes altimetry a very useful instrument for hydrology. In this paper, we present our methodology for estimating water level time series over lakes, rivers, reservoirs, and wetlands. Furthermore, the error estimation of the resulting water level time series is demonstrated. Multi-mission satellite altimetry data are used to compute the water level time series. The estimation is based on altimeter data from Topex, Jason-1, Jason-2, Geosat, IceSAT, GFO, ERS-2, Envisat, Cryosat, HY-2A, and Saral/Altika, depending on the location of the water body. Depending on the extent of the investigated water body, 1 Hz, high-frequency, or retracked altimeter measurements can be used. Classification methods such as Support Vector Machine (SVM) and Support Vector Regression (SVR) are applied for the classification of altimeter waveforms and for rejecting outliers. For estimating the water levels, we use a Kalman filter approach applied to the grid nodes of a hexagonal grid covering the water body of interest. After applying an error limit to the resulting water level heights of each grid node, a weighted average water level per point of time is derived, referring to one reference location. For the estimation of water level height accuracies, formal errors are first computed by applying full error propagation within the Kalman filtering. Here, the precision of the input measurements is introduced using the standard deviation of the water level height along the altimeter track. In addition to the resulting formal errors of the water level heights, uncertainties of the applied geophysical corrections (e.g., wet troposphere, ionosphere) and systematic error effects are taken into account to achieve more realistic error estimates. For validation of the time series, we compare our results with gauges and external inland altimeter databases (e.g., Hydroweb). We find very high correlations between absolute water level height time series from altimetry and gauges. Moreover, the comparisons of water level heights are also used to validate the error assessment. More than 200 water level time series have already been computed and made publicly available via the "Database for Hydrological Time Series of Inland Waters" (DAHITI) at http://dahiti.dgfi.tum.de.
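
    A scalar sketch of the Kalman filtering with formal error propagation described above, for a single grid node modeled as a random-walk water level; the process noise, epochs, heights, and per-pass standard deviations are all invented.

      import numpy as np

      def kalman_water_level(times, heights, sigmas, q=0.05):
          # Scalar random-walk Kalman filter; q is the assumed process
          # noise (m^2/day), sigmas the per-pass measurement std devs (m).
          level, var = heights[0], sigmas[0] ** 2
          levels, formal = [level], [np.sqrt(var)]
          for k in range(1, len(times)):
              var += q * (times[k] - times[k - 1])     # predict
              gain = var / (var + sigmas[k] ** 2)      # update
              level += gain * (heights[k] - level)
              var *= 1.0 - gain
              levels.append(level)
              formal.append(np.sqrt(var))              # formal error
          return np.array(levels), np.array(formal)

      t = np.array([0.0, 10.0, 35.0, 60.0])            # days
      h = np.array([102.3, 102.6, 101.9, 102.1])       # heights (m)
      s = np.array([0.15, 0.40, 0.20, 0.25])           # per-pass sigmas (m)
      levels, errors = kalman_water_level(t, h, s)
      print(levels.round(2), errors.round(2))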

  4. Compensation technique for the intrinsic error in ultrasound motion estimation using a speckle tracking method

    NASA Astrophysics Data System (ADS)

    Taki, Hirofumi; Yamakawa, Makoto; Shiina, Tsuyoshi; Sato, Toru

    2015-07-01

    High-accuracy ultrasound motion estimation has become an essential technique in blood flow imaging, elastography, and motion imaging of the heart wall. Speckle tracking has been one of the best motion estimators; however, conventional speckle-tracking methods neglect the effect of out-of-plane motion and deformation. Our proposed method assumes that the cross-correlation between a reference signal and a comparison signal depends on the spatio-temporal distance between the two signals. The proposed method uses the decrease in the cross-correlation value in a reference frame to compensate for the intrinsic error caused by out-of-plane motion and deformation without a priori information. The root-mean-square error of the estimated lateral tissue motion velocity calculated by the proposed method ranged from 6.4 to 34% of that using a conventional speckle-tracking method. This study demonstrates the high potential of the proposed method for improving the estimation of tissue motion using an ultrasound speckle-tracking method in medical diagnosis.

  5. Kinematic GPS solutions for aircraft trajectories: Identifying and minimizing systematic height errors associated with atmospheric propagation delays

    USGS Publications Warehouse

    Shan, S.; Bevis, M.; Kendrick, E.; Mader, G.L.; Raleigh, D.; Hudnut, K.; Sartori, M.; Phillips, D.

    2007-01-01

    When kinematic GPS processing software is used to estimate the trajectory of an aircraft, unless the delays imposed on the GPS signals by the atmosphere are either estimated or calibrated via external observations, then vertical height errors of decimeters can occur. This problem is clearly manifested when the aircraft is positioned against multiple base stations in areas of pronounced topography because the aircraft height solutions obtained using different base stations will tend to be mutually offset, or biased, in proportion to the elevation differences between the base stations. When performing kinematic surveys in areas with significant topography it should be standard procedure to use multiple base stations, and to separate them vertically to the maximum extent possible, since it will then be much easier to detect mis-modeling of the atmosphere. Copyright 2007 by the American Geophysical Union.

  6. Mean square displacements with error estimates from non-equidistant time-step kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Leetmaa, Mikael; Skorodumova, Natalia V.

    2015-06-01

    We present a method to calculate mean square displacements (MSD) with error estimates from kinetic Monte Carlo (KMC) simulations of diffusion processes with non-equidistant time-steps. An analytical solution for estimating the errors is presented for the special case of one moving particle at a fixed rate constant. The method is generalized to an efficient computational algorithm that can handle any number of moving particles or different rates in the simulated system. We show with examples that the proposed method gives the correct statistical error when the MSD curve describes pure Brownian motion and can otherwise be used as an upper bound for the true error.
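
    A minimal MSD-with-error-bars sketch for ordinary lattice random walks (equidistant steps, so it only illustrates the estimator itself, not the paper's non-equidistant KMC time handling); the particle counts and lags are invented.

      import numpy as np

      rng = np.random.default_rng(3)
      n_particles, n_steps = 200, 500
      steps = rng.choice([-1.0, 1.0], size=(n_particles, n_steps))
      traj = np.cumsum(steps, axis=1)            # 1D lattice random walks

      for lag in (1, 2, 5, 10, 20, 50):
          disp2 = (traj[:, lag:] - traj[:, :-lag]) ** 2
          msd = disp2.mean()
          # Error bar from independent particles (windows within one
          # trajectory overlap and are correlated, so they are not used).
          sem = disp2.mean(axis=1).std(ddof=1) / np.sqrt(n_particles)
          print("lag %3d: MSD = %6.1f +/- %4.1f (expected %d)"
                % (lag, msd, sem, lag))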

  7. Optimum data weighting and error calibration for estimation of gravitational parameters

    NASA Technical Reports Server (NTRS)

    Lerch, F. J.

    1989-01-01

    A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in GEM-T1 (Goddard Earth Model, 36x36 spherical harmonic field) were employed toward application of this technique for gravity field parameters. Also, GEM-T2 (31 satellites) was recently computed as a direct application of the method and is summarized here. The method employs subset solutions of the data associated with the complete solution and uses an algorithm to adjust the data weights by requiring the differences of parameters between solutions to agree with their error estimates. With the adjusted weights the process provides for an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting as compared to the nominal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than those of the gravity model.

  8. Estimating Root Mean Square Errors in Remotely Sensed Soil Moisture over Continental Scale Domains

    NASA Technical Reports Server (NTRS)

    Draper, Clara S.; Reichle, Rolf; de Jeu, Richard; Naeimi, Vahid; Parinussa, Robert; Wagner, Wolfgang

    2013-01-01

    Root Mean Square Errors (RMSE) in the soil moisture anomaly time series obtained from the Advanced Scatterometer (ASCAT) and the Advanced Microwave Scanning Radiometer (AMSR-E; using the Land Parameter Retrieval Model) are estimated over a continental scale domain centered on North America, using two methods: triple colocation (RMSE_TC) and error propagation through the soil moisture retrieval models (RMSE_EP). In the absence of an established consensus for the climatology of soil moisture over large domains, presenting a RMSE in soil moisture units requires that it be specified relative to a selected reference data set. To avoid the complications that arise from the use of a reference, the RMSE is presented as a fraction of the time series standard deviation (fRMSE). For both sensors, the fRMSE_TC and fRMSE_EP show similar spatial patterns of relatively high/low errors, and the mean fRMSE for each land cover class is consistent with expectations. Triple colocation is also shown to be surprisingly robust to representativity differences between the soil moisture data sets used, and it is believed to accurately estimate the fRMSE in the remotely sensed soil moisture anomaly time series. Comparing the ASCAT and AMSR-E fRMSE_TC shows that both data sets have very similar accuracy across a range of land cover classes, although the AMSR-E accuracy is more directly related to vegetation cover. In general, both data sets have good skill up to moderate vegetation conditions.
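
    A compact sketch of covariance-based triple collocation under its standard assumptions (independent, zero-mean errors and a common signal seen by all three products); the synthetic anomaly series and error levels are invented.

      import numpy as np

      rng = np.random.default_rng(4)
      truth = rng.normal(0.0, 1.0, size=5000)            # anomaly signal
      x = truth + rng.normal(0.0, 0.30, size=truth.size) # product 1
      y = truth + rng.normal(0.0, 0.45, size=truth.size) # product 2
      z = truth + rng.normal(0.0, 0.25, size=truth.size) # product 3

      def frmse_tc(a, b, c):
          # Error std of a from triple collocation, expressed as a
          # fraction of the time series standard deviation of a.
          q = np.cov(np.vstack([a, b, c]))
          err_var = q[0, 0] - q[0, 1] * q[0, 2] / q[1, 2]
          return np.sqrt(err_var) / a.std()

      for name, (a, b, c) in {"x": (x, y, z), "y": (y, x, z),
                              "z": (z, x, y)}.items():
          print("fRMSE(%s) = %.3f" % (name, frmse_tc(a, b, c)))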

  9. Adjoint-based error estimation and mesh adaptation for the correction procedure via reconstruction method

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Wang, Z. J.

    2015-08-01

    Adjoint-based mesh adaptive methods are capable of distributing computational resources to areas which are important for predicting an engineering output. In this paper, we develop an adjoint-based h-adaptation approach based on the high-order correction procedure via reconstruction formulation (CPR) to minimize the output or functional error. A dual-consistent CPR formulation of hyperbolic conservation laws is developed and its dual consistency is analyzed. Super-convergent functional and error estimates for the output are obtained with the CPR method. Factors affecting the dual consistency, such as the solution point distribution, correction functions, boundary conditions and the discretization approach for the non-linear flux divergence term, are studied. The presented method is then used to perform simulations for the 2D Euler and Navier-Stokes equations with mesh adaptation driven by the adjoint-based error estimate. Several numerical examples demonstrate the ability of the presented method to dramatically reduce the computational cost compared with uniform grid refinement.

  10. Estimating regression coefficients from clustered samples: Sampling errors and optimum sample allocation

    NASA Technical Reports Server (NTRS)

    Kalton, G.

    1983-01-01

    A number of surveys have been conducted to study the relationship between the level of aircraft or traffic noise exposure experienced by people living in a particular area and their annoyance with it. These surveys generally employ a clustered sample design, which affects the precision of the survey estimates. Regression analysis of annoyance on noise measures and other variables is often an important component of the survey analysis. Formulae are presented for estimating the standard errors of regression coefficients and ratios of regression coefficients that are applicable with a two- or three-stage clustered sample design. Using a simple cost function, the optimum allocation of the sample across the stages of the sample design is also determined for the estimation of a regression coefficient.

  11. Moving Taylor Bayesian Regression for nonparametric multidimensional function estimation with possibly correlated errors

    E-print Network

    Heitzig, Jobst

    2012-01-01

    We present a nonparametric method for estimating the value and several derivatives of an unknown, sufficiently smooth real-valued function of real-valued arguments from a finite sample of points, where both the function arguments and the corresponding values are known only up to measurement errors having some assumed distribution and correlation structure. The method, Moving Taylor Bayesian Regression (MOTABAR), uses Bayesian updating to find the posterior mean of the coefficients of a Taylor polynomial of the function at a moving position of interest. When measurement errors are neglected, MOTABAR becomes a multivariate interpolation method. It contains several well-known regression and interpolation methods as special or limit cases. We demonstrate the performance of MOTABAR using the reconstruction of the Lorenz attractor from noisy observations as an example.

  12. A parametric multiclass Bayes error estimator for the multispectral scanner spatial model performance evaluation

    NASA Technical Reports Server (NTRS)

    Mobasseri, B. G.; Mcgillem, C. D.; Anuta, P. E. (principal investigators)

    1978-01-01

    The author has identified the following significant results. The probability of correct classification of various populations in the data was defined as the primary performance index. The multispectral data, being multiclass in nature, required a Bayes error estimation procedure that depends on a set of class statistics alone. The classification error was expressed in terms of an N-dimensional integral, where N was the dimensionality of the feature space. The multispectral scanner spatial model was represented by a linear, shift-invariant, multiple-port system in which the N spectral bands comprised the input processes. The scanner characteristic function, the relationship governing the transformation of the input spatial (and hence spectral) correlation matrices through the system, was developed.

  13. Analysis of open-loop conical scan pointing error and variance estimators

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1993-01-01

    General pointing error and variance estimators for an open-loop conical scan (conscan) system are derived and analyzed. The conscan algorithm is modeled as a weighted least-squares estimator whose inputs are samples of receiver carrier power and its associated measurement uncertainty. When the assumptions of constant measurement noise and zero pointing error estimation are applied, the variance equation is then strictly a function of the carrier power to uncertainty ratio and the operator selectable radius and period input to the algorithm. The performance equation is applied to a 34-m mirror-based beam-waveguide conscan system interfaced with the Block V Receiver Subsystem tracking a Ka-band (32-GHz) downlink. It is shown that for a carrier-to-noise power ratio greater than or equal to 30 dB-Hz, the conscan period for Ka-band operation may be chosen well below the current DSN minimum of 32 sec. The analysis presented forms the basis of future conscan work in both research and development as well as for the upcoming DSN antenna controller upgrade for the new DSS-24 34-m beam-waveguide antenna.

  14. Estimation of cortical magnification from positional error in normally sighted and amblyopic subjects

    PubMed Central

    Hussain, Zahra; Svensson, Carl-Magnus; Besle, Julien; Webb, Ben S.; Barrett, Brendan T.; McGraw, Paul V.

    2015-01-01

    We describe a method for deriving the linear cortical magnification factor from positional error across the visual field. We compared magnification obtained from this method between normally sighted individuals and amblyopic individuals, who receive atypical visual input during development. The cortical magnification factor was derived for each subject from positional error at 32 locations in the visual field, using an established model of conformal mapping between retinal and cortical coordinates. Magnification of the normally sighted group matched estimates from previous physiological and neuroimaging studies in humans, confirming the validity of the approach. The estimate of magnification for the amblyopic group was significantly lower than the normal group: by 4.4 mm deg^-1 at 1° eccentricity, assuming a constant scaling factor for both groups. These estimates, if correct, suggest a role for early visual experience in establishing retinotopic mapping in cortex. We discuss the implications of altered cortical magnification for cortical size, and consider other neural changes that may account for the amblyopic results. PMID:25761341

  15. Estimating the acute health effects of coarse particulate matter accounting for exposure measurement error

    PubMed Central

    Chang, Howard H.; Peng, Roger D.; Dominici, Francesca

    2011-01-01

    In air pollution epidemiology, there is a growing interest in estimating the health effects of coarse particulate matter (PM) with aerodynamic diameter between 2.5 and 10 ?m. Coarse PM concentrations can exhibit considerable spatial heterogeneity because the particles travel shorter distances and do not remain suspended in the atmosphere for an extended period of time. In this paper, we develop a modeling approach for estimating the short-term effects of air pollution in time series analysis when the ambient concentrations vary spatially within the study region. Specifically, our approach quantifies the error in the exposure variable by characterizing, on any given day, the disagreement in ambient concentrations measured across monitoring stations. This is accomplished by viewing monitor-level measurements as error-prone repeated measurements of the unobserved population average exposure. Inference is carried out in a Bayesian framework to fully account for uncertainty in the estimation of model parameters. Finally, by using different exposure indicators, we investigate the sensitivity of the association between coarse PM and daily hospital admissions based on a recent national multisite time series analysis. Among Medicare enrollees from 59 US counties between the period 1999 and 2005, we find a consistent positive association between coarse PM and same-day admission for cardiovascular diseases. PMID:21297159
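
    In the spirit of treating monitor-level values as error-prone replicates of the unobserved population-average exposure, the toy sketch below quantifies daily exposure error from the disagreement across monitors; all concentrations are invented and this is not the paper's Bayesian model.

      import numpy as np

      rng = np.random.default_rng(5)
      n_days, n_monitors = 365, 6

      # Toy daily coarse-PM field: a shared regional level plus strong
      # local spatial heterogeneity at each monitor (ug/m^3).
      regional = 10.0 + 3.0 * rng.standard_normal(n_days)
      obs = regional[:, None] + 4.0 * rng.standard_normal((n_days, n_monitors))

      daily_mean = obs.mean(axis=1)                  # exposure proxy
      daily_se = obs.std(axis=1, ddof=1) / np.sqrt(n_monitors)
      print("mean daily exposure-error SE: %.2f ug/m^3" % daily_se.mean())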

  17. Adaptive flux-based nodeless variable finite element formulation with error estimation for thermal-structural analysis

    NASA Astrophysics Data System (ADS)

    Traivivatana, S.; Phongthanapanich, S.; Dechaumphai, P.

    2015-09-01

    A posteriori error estimation for the nodeless variable finite element method is presented. A nodeless variable finite element method using flux-based formulation is developed and combined with an adaptive meshing technique to analyze two-dimensional thermal-structural problems. The continuous flux and stresses are determined by using the flux-based formulation while the standard linear element interpolation functions are used to determine the discontinuous flux and stresses. To measure the global error, the L2 norm error is selected to find the root-mean-square error over the entire domain. The finite element formulation and its detailed finite element matrices are presented. Accuracy of the estimated error is measured by the percentage relative error. An adaptive meshing technique, that can generate meshes corresponding to solution behaviors automatically, is implemented to further improve the solution accuracy. Several examples are presented to evaluate the performance and accuracy of the combined method.

  18. On the Estimation of Errors in Sparse Bathymetric Geophysical Data Sets

    NASA Astrophysics Data System (ADS)

    Jakobsson, M.; Calder, B.; Mayer, L.; Armstrong, A.

    2001-05-01

    There is a growing demand in the geophysical community for better regional representations of the world ocean's bathymetry. However, given the vastness of the oceans and the relative limited coverage of even the most modern mapping systems, it is likely that many of the older data sets will remain part of our cumulative database for several more decades. Therefore, regional bathymetrical compilations that are based on a mixture of historic and contemporary data sets will have to remain the standard. This raises the problem of assembling bathymetric compilations and utilizing data sets not only with a heterogeneous cover but also with a wide range of accuracies. In combining these data to regularly spaced grids of bathymetric values, which the majority of numerical procedures in earth sciences require, we are often forced to use a complex interpolation scheme due to the sparseness and irregularity of the input data points. Consequently, we are faced with the difficult task of assessing the confidence that we can assign to the final grid product, a task that is not usually addressed in most bathymetric compilations. We approach the problem of assessing the confidence via a direct-simulation Monte Carlo method. We start with a small subset of data from the International Bathymetric Chart of the Arctic Ocean (IBCAO) grid model [Jakobsson et al., 2000]. This grid is compiled from a mixture of data sources ranging from single beam soundings with available metadata to spot soundings with no available metadata, to digitized contours; the test dataset shows examples of all of these types. From this database, we assign a priori error variances based on available metadata, and when this is not available, based on a worst-case scenario in an essentially heuristic manner. We then generate a number of synthetic datasets by randomly perturbing the base data using normally distributed random variates, scaled according to the predicted error model. These datasets are then re-gridded using the same methodology as the original product, generating a set of plausible grid models of the regional bathymetry that we can use for standard error estimates. Finally, we repeat the entire random estimation process and analyze each run's standard error grids in order to examine sampling bias and variance in the predictions. The final product of the estimation is a collection of standard error grids, which we combine with the source data density in order to create a grid that contains information about the bathymetry model's reliability. Jakobsson, M., Cherkis, N., Woodward, J., Coakley, B., and Macnab, R., 2000, A new grid of Arctic bathymetry: A significant resource for scientists and mapmakers, EOS Transactions, American Geophysical Union, v. 81, no. 9, p. 89, 93, 96.
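
    A small-scale sketch of the direct-simulation Monte Carlo idea: perturb the soundings within their assigned a priori errors, re-grid each realization, and take the pointwise spread as a standard error surface. The positions, depths, and three-level error model are invented, and simple linear interpolation stands in for the production gridding scheme.

      import numpy as np
      from scipy.interpolate import griddata

      rng = np.random.default_rng(6)

      # Sparse soundings: positions, depths (m), and a priori 1-sigma
      # errors assigned from hypothetical metadata classes.
      pts = rng.uniform(0.0, 100.0, size=(60, 2))
      depth = 200.0 + 0.5 * pts[:, 0] + 2.0 * rng.standard_normal(60)
      sigma = rng.choice([1.0, 5.0, 15.0], size=60)

      gx, gy = np.meshgrid(np.linspace(5, 95, 50), np.linspace(5, 95, 50))

      # Perturb, re-grid, and accumulate realizations of the grid model.
      grids = []
      for _ in range(100):
          perturbed = depth + sigma * rng.standard_normal(60)
          grids.append(griddata(pts, perturbed, (gx, gy), method="linear"))
      stderr_grid = np.nanstd(np.array(grids), axis=0)
      print("median grid standard error: %.2f m" % np.nanmedian(stderr_grid))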

  19. Estimating Random Errors Due to Shot Noise in Backscatter Lidar Observations

    NASA Technical Reports Server (NTRS)

    Liu, Zhaoyan; Hunt, William; Vaughan, Mark A.; Hostetler, Chris A.; McGill, Matthew J.; Powell, Kathy; Winker, David M.; Hu, Yongxiang

    2006-01-01

    In this paper, we discuss the estimation of random errors due to shot noise in backscatter lidar observations that use either photomultiplier tube (PMT) or avalanche photodiode (APD) detectors. The statistical characteristics of photodetection are reviewed, and photon count distributions of solar background signals and laser backscatter signals are examined using airborne lidar observations at 532 nm using a photon-counting mode APD. Both distributions appear to be Poisson, indicating that the arrival at the photodetector of photons for these signals is a Poisson stochastic process. For Poisson-distributed signals, a proportional, one-to-one relationship is known to exist between the mean of a distribution and its variance. Although the multiplied photocurrent no longer follows a strict Poisson distribution in analog-mode APD and PMT detectors, the proportionality still exists between the mean and the variance of the multiplied photocurrent. We make use of this relationship by introducing the noise scale factor (NSF), which quantifies the constant of proportionality that exists between the root-mean-square of the random noise in a measurement and the square root of the mean signal. Using the NSF to estimate random errors in lidar measurements due to shot noise provides a significant advantage over the conventional error estimation techniques, in that with the NSF uncertainties can be reliably calculated from/for a single data sample. Methods for evaluating the NSF are presented. Algorithms to compute the NSF are developed for the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) lidar and tested using data from the Lidar In-space Technology Experiment (LITE).
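
    A sketch of the NSF idea on a simulated analog-mode profile: because the multiplied photocounts keep their variance proportional to the mean, a factor estimated in a quasi-constant background region predicts the shot-noise RMS anywhere in the same profile. The decay constants and the gain of 4 are invented, and this is not the CALIPSO algorithm.

      import numpy as np

      rng = np.random.default_rng(7)

      # Simulated analog-mode signal: multiplied Poisson photocounts, so
      # the variance stays proportional to the mean (assumed gain of 4).
      gain = 4.0
      mean_profile = 50.0 * np.exp(-np.arange(500) / 150.0) + 5.0
      profile = gain * rng.poisson(mean_profile)

      # Estimate the NSF in a quasi-constant background region, then
      # predict the shot-noise RMS from this single profile.
      bg = profile[400:].astype(float)
      nsf = bg.std(ddof=1) / np.sqrt(bg.mean())
      noise_rms = nsf * np.sqrt(profile.astype(float))
      print("NSF = %.2f (sqrt(gain) = 2 for ideal scaled Poisson)" % nsf)
      print("estimated RMS error in bin 0: %.1f counts" % noise_rms[0])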

  20. Laboratory measurement error in external dose estimates and its effects on dose-response analyses of Hanford worker mortality data

    SciTech Connect

    Gilbert, E.S.; Fix, J.J.

    1996-08-01

    This report addresses laboratory measurement error in estimates of external doses obtained from personnel dosimeters, and investigates the effects of these errors on linear dose-response analyses of data from epidemiologic studies of nuclear workers. These errors have the distinguishing feature that they are independent across time and across workers. Although the calculations made for this report were based on Hanford data, the overall conclusions are likely to be relevant for other epidemiologic studies of workers exposed to external radiation.

  1. Power-spectrum analysis of Super-Kamiokande solar neutrino data, taking into account asymmetry in the error estimates

    E-print Network

    Sturrock, P A

    2006-01-01

    The purpose of this article is to carry out a power-spectrum analysis (based on likelihood methods) of the Super-Kamiokande 5-day dataset that takes account of the asymmetry in the error estimates. Whereas the likelihood analysis involves a linear optimization procedure for symmetrical error estimates, it involves a nonlinear optimization procedure for asymmetrical error estimates. We find that for most frequencies there is little difference between the power spectra derived from analyses of symmetrized error estimates and from asymmetrical error estimates. However, this proves not to be the case for the principal peak in the power spectra, which is found at 9.43 yr^-1. A likelihood analysis which allows for a "floating offset" and takes account of the start time and end time of each bin and of the flux estimate and the symmetrized error estimate leads to a power of 11.24 for this peak. A Monte Carlo analysis shows that there is a chance of only 1% of finding a peak this big or bigger in the frequency band 1 -...
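
    For the symmetrized-error case, the floating-offset likelihood power reduces to weighted least squares: at each frequency, fit an offset plus sine and cosine terms and take S = (χ²_const − χ²_fit)/2. A sketch on synthetic 5-day binned data (the signal amplitude, error levels, and per-bin times are invented; real bins would also carry start and end times):

      import numpy as np

      def likelihood_power(t, flux, sigma, freqs):
          # Floating-offset likelihood power: at each frequency, fit
          # offset + A cos + B sin by weighted least squares, and report
          # S = (chi2_const - chi2_fit) / 2.
          w = 1.0 / sigma ** 2
          mean0 = np.sum(w * flux) / np.sum(w)
          chi2_0 = np.sum(w * (flux - mean0) ** 2)
          power = []
          for f in freqs:
              ph = 2.0 * np.pi * f * t
              X = np.column_stack([np.ones_like(t), np.cos(ph), np.sin(ph)])
              Xw = X * w[:, None]
              beta = np.linalg.solve(X.T @ Xw, Xw.T @ flux)
              power.append(0.5 * (chi2_0 - np.sum(w * (flux - X @ beta) ** 2)))
          return np.array(power)

      rng = np.random.default_rng(8)
      t = np.arange(0.0, 10.0, 5.0 / 365.25)       # ~5-day bins, in years
      sig = rng.uniform(0.02, 0.05, t.size)        # symmetrized errors
      flux = (1.0 + 0.01 * np.cos(2 * np.pi * 9.43 * t)
              + sig * rng.standard_normal(t.size))
      freqs = np.linspace(1.0, 30.0, 400)          # cycles per year
      p = likelihood_power(t, flux, sig, freqs)
      print("peak at %.2f / yr, power %.1f" % (freqs[np.argmax(p)], p.max()))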

  2. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g., numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are also of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data; sparse tensorization methods [2] utilizing node-nested hierarchies; and sampling methods [4] for high-dimensional random variable spaces.

  3. Edge-based a posteriori error estimators for generation of d-dimensional quasi-optimal meshes

    SciTech Connect

    Lipnikov, Konstantin; Agouzal, Abdellatif; Vassilevski, Yuri

    2009-01-01

    We present a new method of metric recovery for minimization of L_p-norms of the interpolation error or its gradient. The method uses edge-based a posteriori error estimates. The method is analyzed for conformal simplicial meshes in spaces of arbitrary dimension d.

  4. A Comparison of Item Parameter Standard Error Estimation Procedures for Unidimensional and Multidimensional Item Response Theory Modeling

    ERIC Educational Resources Information Center

    Paek, Insu; Cai, Li

    2014-01-01

    The present study was motivated by the recognition that standard errors (SEs) of item response theory (IRT) model parameters are often of immediate interest to practitioners and that there is currently a lack of comparative research on different SE (or error variance-covariance matrix) estimation procedures. The present study investigated item…

  5. Error in estimation of rate and time inferred from the early amniote fossil record and avian molecular clocks.

    PubMed

    van Tuinen, Marcel; Hadly, Elizabeth A

    2004-08-01

    The best reconstructions of the history of life will use both molecular time estimates and fossil data. Errors in molecular rate estimation typically are unaccounted for and no attempts have been made to quantify this uncertainty comprehensively. Here, focus is primarily on fossil calibration error because this error is least well understood and nearly universally disregarded. Our quantification of errors in the synapsid-diapsid calibration illustrates that although some error can derive from geological dating of sedimentary rocks, the absence of good stem fossils makes phylogenetic error the most critical. We therefore propose the use of calibration ages that are based on the first undisputed synapsid and diapsid. This approach yields minimum age estimates and standard errors of 306.1 +/- 8.5 MYR for the divergence leading to birds and mammals. Because this upper bound overlaps with the recent use of 310 MYR, we do not support the notion that several metazoan divergence times are significantly overestimated because of serious miscalibration (sensu Lee 1999). However, the propagation of relevant errors reduces the statistical significance of the pre-K-T boundary diversification of many bird lineages despite retaining similar point time estimates. Our results demand renewed investigation into suitable loci and fossil calibrations for constructing evolutionary timescales. PMID:15486700

  6. Standard error estimation using the EM algorithm for the joint modeling of survival and longitudinal data.

    PubMed

    Xu, Cong; Baines, Paul D; Wang, Jane-Ling

    2014-10-01

    Joint modeling of survival and longitudinal data has been studied extensively in the recent literature. The likelihood approach is one of the most popular estimation methods employed within the joint modeling framework. Typically, the parameters are estimated using maximum likelihood, with computation performed by the expectation maximization (EM) algorithm. However, one drawback of this approach is that standard error (SE) estimates are not automatically produced when using the EM algorithm. Many different procedures have been proposed to obtain the asymptotic covariance matrix of the parameter estimates when the number of parameters is small. In the joint modeling context, however, there may be an infinite-dimensional parameter, the baseline hazard function, which greatly complicates the problem, so that the existing methods cannot be readily applied. The profile likelihood and the bootstrap methods overcome the difficulty to some extent; however, they can be computationally intensive. In this paper, we propose two new methods for SE estimation using the EM algorithm that allow for more efficient computation of the SE of a subset of parametric components in a semiparametric or high-dimensional parametric model. The precision and computation time are evaluated through a thorough simulation study. We conclude with an application of our SE estimation method to analyze an HIV clinical trial dataset. PMID:24771699

  7. Prediction and standard error estimation for a finite universe total when a stratum is not sampled

    SciTech Connect

    Wright, T.

    1994-01-01

    In the context of a universe of trucks operating in the United States in 1990, this paper presents statistical methodology for estimating a finite universe total on a second occasion when a part of the universe is sampled and the remainder of the universe is not sampled. Prediction is used to compensate for the lack of data from the unsampled portion of the universe. The sample is assumed to be a subsample of an earlier sample where stratification is used on both occasions before sample selection. Accounting for births and deaths in the universe between the two points in time, the detailed sampling plan, estimator, standard error, and optimal sample allocation are presented with a focus on the second occasion. If prior auxiliary information is available, the methodology is also applicable to a first occasion.

  8. Power-spectrum analysis of Super-Kamiokande solar neutrino data, taking into account asymmetry in the error estimates

    E-print Network

    P. A. Sturrock; J. D. Scargle

    2006-06-20

    The purpose of this article is to carry out a power-spectrum analysis (based on likelihood methods) of the Super-Kamiokande 5-day dataset that takes account of the asymmetry in the error estimates. Whereas the likelihood analysis involves a linear optimization procedure for symmetrical error estimates, it involves a nonlinear optimization procedure for asymmetrical error estimates. We find that for most frequencies there is little difference between the power spectra derived from analyses of symmetrized error estimates and from asymmetrical error estimates. However, this proves not to be the case for the principal peak in the power spectra, which is found at 9.43 yr^-1. A likelihood analysis which allows for a "floating offset" and takes account of the start time and end time of each bin and of the flux estimate and the symmetrized error estimate leads to a power of 11.24 for this peak. A Monte Carlo analysis shows that there is a chance of only 1% of finding a peak this big or bigger in the frequency band 1 - 36 yr^-1 (the widest band that avoids artificial peaks). On the other hand, an analysis that takes account of the error asymmetry leads to a peak with power 13.24 at that frequency. A Monte Carlo analysis shows that there is a chance of only 0.1% of finding a peak this big or bigger in that frequency band 1 - 36 yr^-1. From this perspective, power spectrum analysis that takes account of asymmetry of the error estimates gives evidence for variability that is significant at the 99.9% level. We comment briefly on an apparent discrepancy between power spectrum analyses of the Super-Kamiokande and SNO solar neutrino experiments.
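
    A minimal sketch of the central computation may help: the likelihood "power" at one trial frequency, with side-dependent Gaussian widths standing in for the asymmetric error estimates. The synthetic data, the error widths, and the three-parameter (floating offset plus sinusoid) model below are illustrative assumptions, not the authors' code; the injected period of 38.7 days corresponds to roughly 9.43 yr^-1.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        t = np.arange(0, 500, 5.0)                  # 5-day bin midpoints [days]
        sig_lo, sig_hi = 0.08, 0.12                 # asymmetric error estimates (assumed)
        y = 1.0 + 0.05 * np.cos(2 * np.pi * t / 38.7) + rng.normal(0, 0.1, t.size)

        def neg_log_like(params, freq):
            a, b, c = params                        # floating offset + sinusoid amplitudes
            model = a + b * np.cos(2 * np.pi * freq * t) + c * np.sin(2 * np.pi * freq * t)
            r = y - model
            sigma = np.where(r > 0, sig_hi, sig_lo) # side-dependent width: the asymmetry
            return 0.5 * np.sum((r / sigma) ** 2)

        def power(freq):
            # Power = drop in negative log-likelihood relative to the offset-only fit.
            l0 = minimize(lambda p: neg_log_like([p[0], 0, 0], freq), [1.0]).fun
            l1 = minimize(neg_log_like, [1.0, 0.0, 0.0], args=(freq,)).fun
            return l0 - l1

        print("power near injected frequency:", power(1 / 38.7))

    Scanning power(freq) over a frequency grid would give the full power spectrum; the symmetrized-error variant simply replaces the side-dependent sigma with a single constant.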

  9. Trends and Correlation Estimation in Climate Sciences: Effects of Timescale Errors

    NASA Astrophysics Data System (ADS)

    Mudelsee, M.; Bermejo, M. A.; Bickert, T.; Chirila, D.; Fohlmeister, J.; Köhler, P.; Lohmann, G.; Olafsdottir, K.; Scholz, D.

    2012-12-01

    Trend describes time-dependence in the first moment of a stochastic process, and correlation measures the linear relation between two random variables. Accurately estimating the trend and correlation, including uncertainties, from climate time series data in the uni- and bivariate domain, respectively, allows first-order insights into the geophysical process that generated the data. Timescale errors, ubiquitous in paleoclimatology, where archives are sampled for proxy measurements and dated, pose a problem for the estimation. Statistical science and the various applied research fields, including geophysics, have almost completely ignored this problem because it is theoretically almost intractable. However, computational adaptations or replacements of traditional error formulas have become technically feasible. This contribution gives a short overview of such an adaptation package, bootstrap resampling combined with parametric timescale simulation. We study linear regression, parametric change-point models and nonparametric smoothing for trend estimation. We introduce pairwise-moving block bootstrap resampling for correlation estimation. Both methods share robustness against autocorrelation and non-Gaussian distributional shape. We briefly touch on computing-intensive calibration of bootstrap confidence intervals and consider options to parallelize the related computer code. The following examples serve not only to illustrate the methods but also to tell their own climate stories: (1) the search for climate drivers of the Agulhas Current on recent timescales, (2) the comparison of three stalagmite-based proxy series of regional, western German climate over the later part of the Holocene, and (3) trends and transitions in benthic oxygen isotope time series from the Cenozoic. Financial support by Deutsche Forschungsgemeinschaft (FOR 668, FOR 1070, MU 1595/4-1) and the European Commission (MC ITN 238512, MC ITN 289447) is acknowledged.
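
    A toy construction of the pairwise-moving block idea (ours, not the authors' package): blocks of (x, y) pairs are resampled together, so both the autocorrelation and the cross-relation survive the resampling. All series and the block length are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        n, block = 300, 25                    # block length set by the autocorrelation scale
        x = np.zeros(n); y = np.zeros(n)
        for i in range(1, n):                 # AR(1) pair with a common forcing term
            common = rng.normal()
            x[i] = 0.7 * x[i - 1] + common + 0.5 * rng.normal()
            y[i] = 0.7 * y[i - 1] + common + 0.5 * rng.normal()

        def corr(a, b):
            return np.corrcoef(a, b)[0, 1]

        reps = []
        n_blocks = int(np.ceil(n / block))
        for _ in range(2000):
            starts = rng.integers(0, n - block + 1, n_blocks)
            idx = np.concatenate([np.arange(s, s + block) for s in starts])[:n]
            reps.append(corr(x[idx], y[idx]))   # blocks keep (x, y) pairs together

        lo, hi = np.percentile(reps, [2.5, 97.5])
        print(f"r = {corr(x, y):.2f}, 95% block-bootstrap CI = [{lo:.2f}, {hi:.2f}]")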

  10. An online model correction method based on an inverse problem: Part I—Model error estimation by iteration

    NASA Astrophysics Data System (ADS)

    Xue, Haile; Shen, Xueshun; Chou, Jifan

    2015-10-01

    Errors inevitably exist in numerical weather prediction (NWP) due to imperfect numerics and physical parameterizations. To eliminate these errors, by considering NWP as an inverse problem, an unknown term in the prediction equations can be estimated inversely by using the past data, which are presumed to represent the imperfection of the NWP model (model error, denoted as ME). In this first paper of a two-part series, an iteration method for obtaining the MEs in past intervals is presented, and the results from testing its convergence in idealized experiments are reported. Moreover, two batches of iteration tests were applied in the global forecast system of the Global and Regional Assimilation and Prediction System (GRAPES-GFS) for July-August 2009 and January-February 2010. The datasets associated with the initial conditions and sea surface temperature (SST) were both based on NCEP (National Centers for Environmental Prediction) FNL (final) data. The results showed that 6-h forecast errors were reduced to 10% of their original value after a 20-step iteration. Then, off-line forecast error corrections were estimated linearly based on the 2-month mean MEs and compared with forecast errors. The estimated error corrections agreed well with the forecast errors, but the linear growth rate of the estimation was steeper than that of the forecast error. The advantage of this iteration method is that the MEs can provide the foundation for online correction. A larger proportion of the forecast errors can be expected to be canceled out by properly introducing the model error correction into GRAPES-GFS.
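
    The iteration itself can be sketched in a few lines. Below, a scalar toy system stands in for GRAPES-GFS: the "true" model carries a source term that the forecast model lacks, and the model error (ME) is recovered by repeatedly folding the forecast error back into the estimate. Everything here is an invented stand-in for the paper's setup.

        import numpy as np

        s_true, lam, dt, T = 0.8, 0.5, 0.01, 1.0   # unknown ME, decay rate, step, window
        steps = int(T / dt)

        def integrate(x0, source):
            x = x0
            for _ in range(steps):                 # forward Euler for dx/dt = -lam*x + source
                x += dt * (-lam * x + source)
            return x

        x0 = 2.0
        x_obs = integrate(x0, s_true)              # "past data" from the true system

        s_est = 0.0
        for k in range(20):                        # iterate: fold forecast error back into ME
            x_fcst = integrate(x0, s_est)
            s_est += (x_obs - x_fcst) / T          # crude update; contracts for this toy
            print(f"iter {k:2d}: ME estimate = {s_est:.4f}")

    For this linear toy the forecast error is proportional to (s_true - s_est), so the update is a contraction and the estimate converges geometrically, mirroring the rapid convergence reported above.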

  11. INCIDENCE OF MULTIDRUG-RESISTANT TUBERCULOSIS DISEASE IN CHILDREN: SYSTEMATIC REVIEW AND GLOBAL ESTIMATES

    PubMed Central

    Jenkins, Helen E.; Tolman, Arielle W.; Yuen, Courtney M.; Parr, Jonathan B.; Keshavjee, Salmaan; Pérez-Vélez, Carlos M.; Pagano, Marcello; Becerra, Mercedes C.; Cohen, Ted

    2014-01-01

    Background Multidrug-resistant tuberculosis (MDR-TB) threatens to reverse recent reductions in global tuberculosis (TB) incidence. Although children under 15 years of age constitute >25% of the worldwide population, the global incidence of MDR-TB disease in children has never been quantified. Methods Our approach for estimating regional and global annual incidence of MDR-TB in children required development of two models: one to estimate the setting-specific risk of MDR-TB among child TB cases, and a second to estimate the setting-specific incidence of TB disease in children. The model for MDR-TB risk among children with TB required a systematic literature review. We multiplied the setting-specific estimates of MDR-TB risk and TB incidence to estimate regional and global incidence of MDR-TB disease in children in 2010. Findings We identified 3,403 papers, of which 97 studies met inclusion criteria for the systematic review of MDR-TB risk. Thirty-one studies reported the risk of MDR-TB among both children and treatment-naïve adults with TB and were used for evaluating the linear association between MDR-TB risk in these two patient groups. We found that the setting-specific risk of MDR-TB was nearly identical in children and treatment-naïve adults with TB, consistent with the assertion that MDR-TB in both groups reflects the local risk of transmitted MDR-TB. Applying these calculated risks, we estimated that around 1,000,000 (95% Confidence Interval: 938,000 – 1,055,000) children developed TB disease in 2010, among whom 32,000 (95% Confidence Interval: 26,000 – 39,000) had MDR-TB. Interpretation Our estimates highlight a massive detection gap for children with TB and MDR-TB disease. Future estimates can be refined as more and better TB data and new diagnostic tools become available. PMID:24671080

  12. Practical error estimates for Reynolds' lubrication approximation and its higher order corrections

    SciTech Connect

    Wilkening, Jon

    2008-12-10

    Reynolds lubrication approximation is used extensively to study flows between moving machine parts, in narrow channels, and in thin films. The solution of Reynolds equation may be thought of as the zeroth order term in an expansion of the solution of the Stokes equations in powers of the aspect ratio ε of the domain. In this paper, we show how to compute the terms in this expansion to arbitrary order on a two-dimensional, x-periodic domain and derive rigorous, a-priori error bounds for the difference between the exact solution and the truncated expansion solution. Unlike previous studies of this sort, the constants in our error bounds are either independent of the function h(x) describing the geometry, or depend on h and its derivatives in an explicit, intuitive way. Specifically, if the expansion is truncated at order 2k, the error is O(ε^(2k+2)) and h enters into the error bound only through its first and third inverse moments ∫_0^1 h(x)^(-m) dx, m = 1, 3, and via the max norms ‖(1/ℓ!) h^(ℓ-1) ∂_x^ℓ h‖_∞, 1 ≤ ℓ ≤ 2k + 2. We validate our estimates by comparing with finite element solutions and present numerical evidence that suggests that even when h is real analytic and periodic, the expansion solution forms an asymptotic series rather than a convergent series.
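
    In display form (writing u for the exact Stokes solution and u_{2k} for the expansion truncated at order 2k; the constant's argument list is our paraphrase of the sentence above, not a quotation):

        \[
          \| u - u_{2k} \| \;\le\; C\,\varepsilon^{2k+2},
          \qquad
          C = C\!\left(
            \int_0^1 h(x)^{-m}\,dx \ (m = 1, 3),\;
            \max_{1 \le \ell \le 2k+2}
              \Bigl\| \tfrac{1}{\ell!}\, h^{\ell-1}\, \partial_x^{\ell} h \Bigr\|_{\infty}
          \right).
        \]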

  13. Estimation of cloud height using ground-based stereophotography: methods, error analysis, and validation

    NASA Astrophysics Data System (ADS)

    Andreev, Maksim S.; Chulichkov, Alexey I.; Emilenko, Aleksander S.; Medvedev, Andrey P.; Postylyakov, Oleg V.

    2014-11-01

    Retrieval errors in atmospheric composition obtained with optical methods (DOAS and others) depend strongly on the cloudiness during the measurements. If information about the clouds is available, the optical model of the atmosphere used to interpret the measurements can be adjusted and the retrieval errors reduced. To reconstruct cloud parameters, a method was developed based on taking pictures of the sky with a pair of digital cameras and processing the resulting sequence of stereo frames by morphological image analysis. Since the directions of the optical axes of the cameras are not exactly known, the cameras' directions of sight were first calibrated using photographs of stars in the night sky. In a second stage, the relative shift of the image of a cloud fragment in the second frame of each pair was calculated. Stereo pairs obtained by simultaneous photography allowed us to estimate the cloud height. The report describes a mathematical model of the measurement, poses and solves the camera sight-calibration problem, describes the morphological method for matching image fragments, and formulates and solves the problem of estimating cloud height and speed of movement. Example estimates from real photographs are analyzed and validated by comparison with laser rangefinder measurements.
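
    The final height estimate rests on standard stereo triangulation, which can be sketched compactly; the baseline, focal length, disparity, and elevation below are invented numbers, and the actual pipeline adds the star-based calibration and morphological matching described above.

        import numpy as np

        baseline = 60.0                # camera separation [m] (assumed)
        f_pix = 2800.0                 # focal length in pixel units, from calibration (assumed)
        disparity = 70.0               # matched cloud-fragment shift between frames [pixels]
        elevation = np.deg2rad(55.0)   # viewing elevation of the fragment

        slant_range = baseline * f_pix / disparity   # pinhole stereo range
        height = slant_range * np.sin(elevation)     # vertical height of the cloud fragment
        print(f"range = {slant_range:.0f} m, cloud height = {height:.0f} m")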

  14. Application of parameter estimation to aircraft stability and control: The output-error approach

    NASA Technical Reports Server (NTRS)

    Maine, Richard E.; Iliff, Kenneth W.

    1986-01-01

    The practical application of parameter estimation methodology to the problem of estimating aircraft stability and control derivatives from flight test data is examined. The primary purpose of the document is to present a comprehensive and unified picture of the entire parameter estimation process and its integration into a flight test program. The document concentrates on the output-error method to provide a focus for detailed examination and to allow us to give specific examples of situations that have arisen. The document first derives the aircraft equations of motion in a form suitable for application to estimation of stability and control derivatives. It then discusses the issues that arise in adapting the equations to the limitations of analysis programs, using a specific program for an example. The roles and issues relating to mass distribution data, preflight predictions, maneuver design, flight scheduling, instrumentation sensors, data acquisition systems, and data processing are then addressed. Finally, the document discusses evaluation and the use of the analysis results.

  15. Research on Parameter Estimation Methods for Alpha Stable Noise in a Laser Gyroscope’s Random Error

    PubMed Central

    Wang, Xueyun; Li, Kui; Gao, Pengyu; Meng, Suxia

    2015-01-01

    Alpha stable noise, determined by four parameters, has been found in the random error of a laser gyroscope. Accurate estimation of the four parameters is the key step in analyzing the properties of alpha stable noise. Three widely used estimation methods, namely the quantile, empirical characteristic function (ECF), and logarithmic moment methods, are analyzed and compared by Monte Carlo simulation in this paper. The estimation accuracy and the application conditions of all methods, as well as the causes of poor estimation accuracy, are illustrated. Finally, the highest-precision method, ECF, is applied to 27 groups of experimental data to estimate the parameters of alpha stable noise in a laser gyroscope's random error. The cumulative probability density curve of the experimental data is fitted better by an alpha stable distribution than by a Gaussian distribution, which verifies the existence of alpha stable noise in a laser gyroscope's random error. PMID:26230698
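
    The ECF method's core trick is worth a sketch: for symmetric alpha-stable noise, log(-log|φ(t)|) is linear in log|t| with slope alpha. The snippet below assumes beta = 0 and zero location, a deliberate simplification of the four-parameter problem treated in the paper.

        import numpy as np
        from scipy.stats import levy_stable

        rng = np.random.default_rng(7)
        alpha_true, gamma_true = 1.6, 2.0
        x = levy_stable.rvs(alpha_true, 0.0, loc=0.0, scale=gamma_true,
                            size=20000, random_state=rng)

        t = np.linspace(0.05, 0.5, 20)                 # small |t| grid
        phi = np.array([np.mean(np.exp(1j * ti * x)) for ti in t])
        # For symmetric stable noise, |phi(t)| = exp(-(gamma*|t|)**alpha), hence
        # log(-log|phi|) = alpha*log|t| + alpha*log(gamma): a straight line.
        yy = np.log(-np.log(np.abs(phi)))
        slope, intercept = np.polyfit(np.log(t), yy, 1)
        alpha_hat = slope
        gamma_hat = np.exp(intercept / alpha_hat)
        print(f"alpha ~ {alpha_hat:.2f} (true {alpha_true}), "
              f"gamma ~ {gamma_hat:.2f} (true {gamma_true})")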

  16. Measuring Coverage in MNCH: Total Survey Error and the Interpretation of Intervention Coverage Estimates from Household Surveys

    PubMed Central

    Eisele, Thomas P.; Rhoda, Dale A.; Cutts, Felicity T.; Keating, Joseph; Ren, Ruilin; Barros, Aluisio J. D.; Arnold, Fred

    2013-01-01

    Nationally representative household surveys are increasingly relied upon to measure maternal, newborn, and child health (MNCH) intervention coverage at the population level in low- and middle-income countries. Surveys are the best tool we have for this purpose and are central to national and global decision making. However, all survey point estimates have a certain level of error (total survey error) comprising sampling and non-sampling error, both of which must be considered when interpreting survey results for decision making. In this review, we discuss the importance of considering these errors when interpreting MNCH intervention coverage estimates derived from household surveys, using relevant examples from national surveys to provide context. Sampling error is usually thought of as the precision of a point estimate and is represented by 95% confidence intervals, which are measurable. Confidence intervals can inform judgments about whether estimated parameters are likely to be different from the real value of a parameter. We recommend, therefore, that confidence intervals for key coverage indicators should always be provided in survey reports. By contrast, the direction and magnitude of non-sampling error is almost always unmeasurable, and therefore unknown. Information error and bias are the most common sources of non-sampling error in household survey estimates and we recommend that they should always be carefully considered when interpreting MNCH intervention coverage based on survey data. Overall, we recommend that future research on measuring MNCH intervention coverage should focus on refining and improving survey-based coverage estimates to develop a better understanding of how results should be interpreted and used. PMID:23667331

  17. Methods used to estimate the size of the owned cat and dog population: a systematic review

    PubMed Central

    2013-01-01

    Background There are a number of different methods that can be used when estimating the size of the owned cat and dog population in a region, leading to varying population estimates. The aim of this study was to conduct a systematic review to evaluate the methods that have been used for estimating the sizes of owned cat and dog populations and to assess the biases associated with those methods. A comprehensive, systematic search of seven electronic bibliographic databases and the Google search engine was carried out using a range of different search terms for cats, dogs and population. The inclusion criteria were that the studies had involved owned or pet domestic dogs and/or cats, provided an estimate of the size of the owned dog or cat population, collected raw data on dog and cat ownership, and analysed primary data. Data relating to study methodology were extracted and assessed for biases. Results Seven papers were included in the final analysis. Collection methods used to select participants in the included studies were: mailed surveys using a commercial list of contacts, door to door surveys, random digit dialled telephone surveys, and randomised telephone surveys using a commercial list of numbers. Analytical and statistical methods used to estimate the pet population size were: mean number of dogs/cats per household multiplied by the number of households in an area, human density multiplied by number of dogs per human, and calculations using predictors of pet ownership. Conclusion The main biases of the studies included selection bias, non-response bias, measurement bias and biases associated with length of sampling time. Careful design and planning of studies is a necessity before executing a study to estimate pet populations. PMID:23777563

  18. State Estimation for Force-Controlled Humanoid Balance using Simple Models in the Presence of Modeling Error

    E-print Network

    This work addresses the effects of modeling error and unknown forces on state estimation for dynamically balancing humanoid robots that are controlled using simple models. Such force-controlled robots can comply with disturbances but require reactive balance control.

  19. Towards estimating fiducial localization error of point-based registration in image-guided neurosurgery.

    PubMed

    Zhi, Deng

    2015-08-17

    Fiducial Localization Error (FLE) is one of the major sources of inaccuracy in point-based spatial registration for Image-Guided Neurosurgery Systems (IGNS), and minimizing FLE is the fundamental way to improve spatial registration accuracy. A reliable estimate of FLE is needed because it cannot be measured directly in real applications of IGNS. In this paper, we propose a method to estimate the FLE in point-based registration for IGNS. Test fiducial point sets were generated by simple random sampling in one coordinate system around the given fiducial point set and registered to the fiducial point set in the other coordinate system. The average position of the test fiducial point sets with small fiducial registration error (FRE) is calculated, and its displacement from the given fiducial point set is the parameter used to estimate the FLE of each fiducial point. The correlation between the displacement and the FLE of each fiducial point is greater than 0.75 when nine or more fiducial points are utilized, and it gradually increases up to 0.9 as the number of fiducial points increases. PMID:26406096
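
    The two building blocks, rigid point-based registration and the FRE it leaves behind, can be sketched as follows. The paper's estimator repeats such registrations over randomly sampled test point sets; this snippet, with invented geometry and noise levels, shows a single registration only.

        import numpy as np

        rng = np.random.default_rng(3)
        fid_img = rng.uniform(0, 100, (10, 3))          # fiducials in image space [mm]

        # Ground-truth pose plus per-point localization error (the unknown FLE).
        angle = np.deg2rad(20.0)
        R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                           [np.sin(angle),  np.cos(angle), 0],
                           [0, 0, 1]])
        fid_pat = fid_img @ R_true.T + np.array([5.0, -3.0, 12.0])
        fid_pat += rng.normal(0, 0.4, fid_pat.shape)    # FLE ~ 0.4 mm per axis (assumed)

        def register(a, b):
            """Least-squares rigid transform mapping a -> b (Kabsch/SVD)."""
            ca, cb = a.mean(0), b.mean(0)
            u, _, vt = np.linalg.svd((a - ca).T @ (b - cb))
            d = np.sign(np.linalg.det(vt.T @ u.T))
            r = vt.T @ np.diag([1, 1, d]) @ u.T
            return r, cb - r @ ca

        R, tvec = register(fid_img, fid_pat)
        resid = fid_pat - (fid_img @ R.T + tvec)
        fre = np.sqrt(np.mean(np.sum(resid ** 2, axis=1)))
        print(f"FRE = {fre:.2f} mm")                    # reflects, but underestimates, FLE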

  20. Statistical Analysis of 2D-Video Disdrometer Measurements and Errors of Polarimetric Rainfall Estimators

    NASA Astrophysics Data System (ADS)

    Kalogiros, J.; Anagnostou, M.; Anagnostou, E.; Papadopoulos, A.

    2009-04-01

    Rainfall characteristics in the area of Athens, Greece, for the rainy season of 2007-2008, measured with a 2-D video disdrometer, are presented. The system operates continuously in an open area in the suburbs of Athens close to the sea. In addition to the system's typical measurements, such as fall velocity, horizontal velocity, diameter, and oblateness of each particle, the canting angle is estimated from an analysis of the recorded particle shapes. Based on the drop size distribution, the recorded rain events are classified as rain from stratiform or convective clouds. A statistical analysis of rainfall characteristics for the two rain categories is presented. The disdrometer measurements, which provide a complete description of droplet size, shape, and orientation, were used to construct rainfall estimators for polarimetric radar reflectivity and specific differential phase measurements using the T-matrix simulation approach. The errors of the estimators for the two classes of rain events are related to the average rainfall characteristics of each class.

  1. Parameter Estimation for Differential Equation Models Using a Framework of Measurement Error in Regression Models.

    PubMed

    Liang, Hua; Wu, Hulin

    2008-12-01

    Differential equation (DE) models are widely used in many scientific fields that include engineering, physics and biomedical sciences. The so-called "forward problem", the problem of simulations and predictions of state variables for given parameter values in the DE models, has been extensively studied by mathematicians, physicists, engineers and other scientists. However, the "inverse problem", the problem of parameter estimation based on the measurements of output variables, has not been well explored using modern statistical methods, although some least squares-based approaches have been proposed and studied. In this paper, we propose parameter estimation methods for ordinary differential equation models (ODE) based on the local smoothing approach and a pseudo-least squares (PsLS) principle under a framework of measurement error in regression models. The asymptotic properties of the proposed PsLS estimator are established. We also compare the PsLS method to the corresponding SIMEX method and evaluate their finite sample performances via simulation studies. We illustrate the proposed approach using an application example from an HIV dynamic study. PMID:19956350
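
    A bare-bones version of the pseudo-least-squares idea for the simplest possible ODE, dx/dt = -theta*x: smooth the noisy state, differentiate the smoother, and regress the estimated derivative on the ODE right-hand side. The spline stand-in for local smoothing and all constants are our assumptions, not the paper's estimator.

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        rng = np.random.default_rng(12)
        theta_true = 0.7
        t = np.linspace(0, 5, 60)
        x = 3.0 * np.exp(-theta_true * t) + rng.normal(0, 0.05, t.size)

        spl = UnivariateSpline(t, x, s=len(t) * 0.05 ** 2)  # smoothing to the noise level
        x_hat = spl(t)
        dx_hat = spl.derivative()(t)

        # For dx/dt = -theta*x, least squares gives theta = -<dx, x> / <x, x>.
        theta_hat = -np.dot(dx_hat, x_hat) / np.dot(x_hat, x_hat)
        print(f"theta estimate: {theta_hat:.3f} (true {theta_true})")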

  2. Error estimates of elastic components in stress-dependent VTI media

    NASA Astrophysics Data System (ADS)

    Spikes, Kyle T.

    2014-09-01

    This work examines the ranges of physically acceptable elastic components for a vertical transversely isotropic (VTI) laboratory shale data set. A stochastic rock-physics approach combined with physically based acceptance and rejection criteria determined the ranges. The importance of this work is to demonstrate that multiple constrained models explain independently calculated measurement error bars. The data set consisted of pressure- and directional-dependent velocity measurements conducted on a low porosity, brine-saturated hard shale. Error bars were calculated for all five elastic stiffnesses and compliances as a function of pressure. The rock physics model is pressure dependent and represents simultaneously five elastic compliances for a VTI medium. A non-linear least squares fitting routine established a best-fit model to the five compliances at all pressures. Perturbations of the best-fit model provided the statistical parameter space. Twelve physical constraints or data-set-specific conditions comprised the acceptance/rejection criteria. These constraints and conditions included strain-energy requirements, inequalities among stiffnesses and anisotropy parameters, and rates of change of moduli with pressure. The largest number of rejected models resulted from violating a criterion relating a compressional and shear stiffness. Minimum misfits between the accepted models and the data illustrate that a fraction of the accepted models best explain the data. The misfits between these accepted models and data explain the error in the data and/or inhomogeneities at the measurement scale. The ranges of acceptable elastic component values and the corresponding uncertainty estimates could be incorporated into seismic-inversion, imaging, and velocity-modeling schemes.

  3. Estimation of immunization providers' activities cost, medication cost, and immunization dose errors cost in Iraq.

    PubMed

    Al-lela, Omer Qutaiba B; Bahari, Mohd Baidi; Al-abbassi, Mustafa G; Salih, Muhannad R M; Basher, Amena Y

    2012-06-01

    The immunization status of children is improved by interventions that increase community demand for compulsory and non-compulsory vaccines; among the most important of these are interventions related to immunization providers. The aim of this study is to evaluate the activities of immunization providers in terms of activity time and cost, to calculate the cost of immunization doses, and to determine the cost of immunization dose errors. A time-motion and cost analysis study design was used. Five public health clinics in Mosul, Iraq, participated in the study. Fifty (50) vaccine doses were required to estimate activity times and costs. A micro-costing method was used; time and cost data were collected for each immunization-related activity performed by the clinic staff. A stopwatch was used to measure the duration of activity interactions between the parents and clinic staff. The immunization service cost was calculated by multiplying the average salary/min by activity time per minute. A total of 528 immunization cards of Iraqi children were scanned to determine the number and cost of immunization dose errors (extra immunization doses and invalid doses). The average time for child registration was 6.7 min per immunization dose, and the physician spent more than 10 min per dose. Nurses needed more than 5 min to complete child vaccination. The total cost of immunization activities was 1.67 US$ per immunization dose. Measles vaccine (fifth dose) has a lower price (0.42 US$) than all other immunization doses. The cost of a total of 288 invalid doses was 744.55 US$ and the cost of a total of 195 extra immunization doses was 503.85 US$. The time spent on physicians' activities was longer than that spent on registrars' and nurses' activities. Physician total cost was higher than registrar cost and nurse cost. The total immunization cost will increase by about 13.3% owing to dose errors. PMID:22521848

  4. Population Estimation Methods for Free-Ranging Dogs: A Systematic Review

    PubMed Central

    Belo, Vinícius Silva; Werneck, Guilherme Loureiro; da Silva, Eduardo Sérgio; Barbosa, David Soeiro; Struchiner, Claudio José

    2015-01-01

    The understanding of the structure of free-roaming dog populations is of extreme importance for the planning and monitoring of population control strategies and animal welfare. The methods used to estimate the abundance of this group of dogs are more complex than the ones used with domiciled owned dogs. In this systematic review, we analyze the techniques and the results obtained in studies that seek to estimate the size of free-ranging dog populations. Twenty-six studies were reviewed regarding the quality of execution and their capacity to generate valid estimates. Seven of the eight publications that take a simple count of the animal population did not consider the different probabilities of animal detection; only one study used methods based on distances; twelve relied on capture-recapture models for closed populations without considering heterogeneities in capture probabilities; six studies applied their own methods with different potential and limitations. Potential sources of bias in the studies were related to the inadequate description or implementation of animal capturing or viewing procedures and to inadequacies in the identification and registration of dogs. Thus, there was a predominance of estimates with low validity. Abundance and density estimates showed high variability, and all studies identified a greater number of male dogs. We point to enhancements necessary for the implementation of future studies and to potential updates and revisions to the recommendations of the World Health Organization with respect to the estimation of free-ranging dog populations. PMID:26673165

  5. Age estimation in forensic anthropology: quantification of observer error in phase versus component-based methods.

    PubMed

    Shirley, Natalie R; Ramirez Montes, Paula Andrea

    2015-01-01

    The purpose of this study was to assess observer error in phase versus component-based scoring systems used to develop age estimation methods in forensic anthropology. A method preferred by forensic anthropologists in the AAFS was selected for this evaluation (the Suchey-Brooks method for the pubic symphysis). The Suchey-Brooks descriptions were used to develop a corresponding component-based scoring system for comparison. Several commonly used reliability statistics (kappa, weighted kappa, and the intraclass correlation coefficient) were calculated to assess observer agreement between two observers and to evaluate the efficacy of each of these statistics for this study. The linear weighted kappa was determined to be the most suitable measure of observer agreement. The results show that a component-based system offers the possibility for more objective scoring than a phase system as long as the coding possibilities for each trait do not exceed three states of expression, each with as little overlap as possible. PMID:25389078

  6. Thermal hydraulic simulations, error estimation and parameter sensitivity studies in Drekar::CFD

    SciTech Connect

    Smith, Thomas Michael; Shadid, John N.; Pawlowski, Roger P.; Cyr, Eric C.; Wildey, Timothy Michael

    2014-01-01

    This report describes work directed towards completion of the Thermal Hydraulics Methods (THM) CFD Level 3 Milestone THM.CFD.P7.05 for the Consortium for Advanced Simulation of Light Water Reactors (CASL) Nuclear Hub effort. The focus of this milestone was to demonstrate the thermal hydraulics and adjoint-based error estimation and parameter sensitivity capabilities in the CFD code called Drekar::CFD. This milestone builds upon the capabilities demonstrated in three earlier milestones: THM.CFD.P4.02 [12], completed March 31, 2012; THM.CFD.P5.01 [15], completed June 30, 2012; and THM.CFD.P5.01 [11], completed October 31, 2012.

  7. Improved Atmospheric Soundings and Error Estimates from Analysis of AIRS/AMSU Data

    NASA Technical Reports Server (NTRS)

    Susskind, Joel

    2007-01-01

    The AIRS Science Team Version 5.0 retrieval algorithm became operational at the Goddard DAAC in July 2007 generating near real-time products from analysis of AIRS/AMSU sounding data. This algorithm contains many significant theoretical advances over the AIRS Science Team Version 4.0 retrieval algorithm used previously. Three very significant developments of Version 5 are: 1) the development and implementation of an improved Radiative Transfer Algorithm (RTA) which allows for accurate treatment of non-Local Thermodynamic Equilibrium (non-LTE) effects on shortwave sounding channels; 2) the development of methodology to obtain very accurate case by case product error estimates which are in turn used for quality control; and 3) development of an accurate AIRS only cloud clearing and retrieval system. These theoretical improvements taken together enabled a new methodology to be developed which further improves soundings in partially cloudy conditions, without the need for microwave observations in the cloud clearing step as has been done previously. In this methodology, longwave CO2 channel observations in the spectral region 700 cm^-1 to 750 cm^-1 are used exclusively for cloud clearing purposes, while shortwave CO2 channels in the spectral region 2195 cm^-1 to 2395 cm^-1 are used for temperature sounding purposes. The new methodology for improved error estimates and their use in quality control is described briefly and results are shown indicative of their accuracy. Results are also shown of forecast impact experiments assimilating AIRS Version 5.0 retrieval products in the Goddard GEOS 5 Data Assimilation System using different quality control thresholds.

  8. Estimation of glucose kinetics in fetal-maternal studies: Potential errors, solutions, and limitations

    SciTech Connect

    Menon, R.K.; Bloch, C.A.; Sperling, M.A. )

    1990-06-01

    We investigated whether errors occur in the estimation of ovine maternal-fetal glucose (Glc) kinetics using the isotope dilution technique when the Glc pool is rapidly expanded by exogenous (protocol A) or endogenous (protocol C) Glc entry and sought possible solutions (protocol B). In protocol A (n = 8), after attaining steady-state Glc specific activity (SA) by (U-14C)glucose (period 1), infusion of Glc (period 2) predictably decreased Glc SA, whereas (U-14C)glucose concentration unexpectedly rose from 7,208 +/- 367 (mean +/- SE) in period 1 to 8,558 +/- 308 disintegrations/min (dpm) per ml in period 2 (P less than 0.01). Fetal endogenous Glc production (EGP) was negligible during period 1 (0.44 +/- 1.0), but yielded a physiologically impossible negative value of -2.1 +/- 0.72 mg.kg-1.min-1 during period 2. When the fall in Glc SA during Glc infusion was prevented by addition of (U-14C)glucose admixed with the exogenous Glc (protocol B; n = 7), EGP was no longer negative. In protocol C (n = 6), sequential infusions of four increasing doses of epinephrine serially decreased SA, whereas tracer Glc increased from 7,483 +/- 608 to 11,525 +/- 992 dpm/ml plasma (P less than 0.05), imposing an obligatory underestimation of EGP. Thus a tracer mixing problem leads to erroneous estimations of fetal Glc utilization and Glc production via the three-compartment model in sheep when the Glc pool is expanded exogenously or endogenously. These errors can be minimized by maintaining the Glc SA relatively constant.

  9. Impact of provision of cardiovascular disease risk estimates to healthcare professionals and patients: a systematic review

    PubMed Central

    Usher-Smith, Juliet A; Silarova, Barbora; Schuit, Ewoud; GM Moons, Karel; Griffin, Simon J

    2015-01-01

    Objective To systematically review whether the provision of information on cardiovascular disease (CVD) risk to healthcare professionals and patients impacts their decision-making, behaviour and ultimately patient health. Design A systematic review. Data sources An electronic literature search of MEDLINE and PubMed from 01/01/2004 to 01/06/2013 with no language restriction and manual screening of reference lists of systematic reviews on similar topics and all included papers. Eligibility criteria for selecting studies: (1) primary research published in a peer-reviewed journal; (2) inclusion of participants with no history of CVD; (3) an intervention strategy consisting of provision of a CVD risk model estimate to either professionals or patients; and (4) the only difference between the intervention group and control group (or the only intervention in the case of before-after studies) was the provision of a CVD risk model estimate. Results After duplicates were removed, the initial electronic search identified 9671 papers. We screened 196 papers at title and abstract level and included 17 studies. The heterogeneity of the studies limited the analysis, but together they showed that provision of risk information to patients improved the accuracy of risk perception without decreasing quality of life or increasing anxiety, but had little effect on lifestyle. Providing risk information to physicians increased prescribing of lipid-lowering and blood pressure medication, with greatest effects in those with CVD risk >20% (relative risk for change in prescribing 2.13 (1.02 to 4.63) and 2.38 (1.11 to 5.10), respectively). Overall, there was a trend towards reductions in cholesterol and blood pressure and a statistically significant reduction in modelled CVD risk (-0.39% (-0.71 to -0.07)) after, on average, 12 months. Conclusions There seems to be evidence that providing CVD risk model estimates to professionals and patients improves perceived CVD risk and medical prescribing, with little evidence of harm to psychological well-being. PMID:26503388

  10. Error handling strategies in multiphase inverse modeling

    SciTech Connect

    Finsterle, S.; Zhang, Y.

    2010-12-01

    Parameter estimation by inverse modeling involves the repeated evaluation of a function of residuals. These residuals represent both errors in the model and errors in the data. In practical applications of inverse modeling of multiphase flow and transport, the error structure of the final residuals often significantly deviates from the statistical assumptions that underlie standard maximum likelihood estimation using the least-squares method. Large random or systematic errors are likely to lead to convergence problems, biased parameter estimates, misleading uncertainty measures, or poor predictive capabilities of the calibrated model. The multiphase inverse modeling code iTOUGH2 supports strategies that identify and mitigate the impact of systematic or non-normal error structures. We discuss these approaches and provide an overview of the error handling features implemented in iTOUGH2.
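
    One common mitigation for non-normal residual structures is a robust loss that downweights gross errors. The toy fit below, using scipy's Huber option on an invented exponential-decay model, illustrates the effect; it is not iTOUGH2 itself.

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(5)
        t = np.linspace(0, 10, 40)
        p_true = (3.0, 0.4)
        y = p_true[0] * np.exp(-p_true[1] * t) + rng.normal(0, 0.05, t.size)
        y[[7, 22]] += 1.5                       # two gross (systematic) errors

        def resid(p):
            return p[0] * np.exp(-p[1] * t) - y

        ols = least_squares(resid, x0=[1.0, 1.0])                     # plain L2
        robust = least_squares(resid, x0=[1.0, 1.0],
                               loss="huber", f_scale=0.1)             # robust loss
        print("L2    :", np.round(ols.x, 3))
        print("Huber :", np.round(robust.x, 3))

    The Huber fit stays near the true parameters while the plain least-squares fit is pulled by the two outliers, which is the bias mechanism described above.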

  11. A pharmacometric case study regarding the sensitivity of structural model parameter estimation to error in patient reported dosing times.

    PubMed

    Knights, Jonathan; Rohatagi, Shashank

    2015-12-01

    Although there is a body of literature focused on minimizing the effect of dosing inaccuracies on pharmacokinetic (PK) parameter estimation, most of the work centers on missing doses. No attempt has been made to specifically characterize the effect of error in reported dosing times. Additionally, existing work has largely dealt with cases in which the compound of interest is dosed at an interval no less than its terminal half-life. This work provides a case study investigating how error in patient reported dosing times might affect the accuracy of structural model parameter estimation under sparse sampling conditions when the dosing interval is less than the terminal half-life of the compound, and the underlying kinetics are monoexponential. Additional effects due to noncompliance with dosing events are not explored and it is assumed that the structural model and reasonable initial estimates of the model parameters are known. Under the conditions of our simulations, with structural model CV % ranging from ~20 to 60 %, parameter estimation inaccuracy derived from error in reported dosing times was largely controlled around 10 % on average. Given that no observed dosing was included in the design and sparse sampling was utilized, we believe these error results represent a practical ceiling given the variability and parameter estimates for the one-compartment model. The findings suggest additional investigations may be of interest and are noteworthy given the inability of current PK software platforms to accommodate error in dosing times. PMID:26209956
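
    A stripped-down Monte Carlo version of the case study can be sketched: monoexponential (IV-bolus superposition) kinetics, a dosing interval shorter than the half-life, three sparse samples, and Gaussian jitter on the reported dose times. All settings are invented; the paper's design and error metrics are richer.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(11)
        ke_true, v_true, dose = 0.12, 20.0, 100.0   # 1/h, L, mg; half-life ~ 5.8 h
        dose_times = np.arange(0, 48, 4.0)          # dosing interval < half-life

        def conc(tt, ke, v, times):
            tt = np.atleast_1d(tt)
            dt = tt[:, None] - times[None, :]
            past = dt >= 0                          # superposition of past IV boluses
            return np.sum(past * (dose / v) * np.exp(-ke * dt * past), axis=1)

        t_obs = np.array([6.0, 23.0, 47.0])         # sparse sampling times [h]
        bias = []
        for _ in range(200):
            y = conc(t_obs, ke_true, v_true, dose_times) * np.exp(rng.normal(0, 0.1, 3))
            reported = dose_times + rng.normal(0, 0.5, dose_times.size)  # time error [h]
            f = lambda tt, ke, v: conc(tt, ke, v, reported)
            (ke_hat, v_hat), _ = curve_fit(f, t_obs, y, p0=[0.1, 25.0], maxfev=2000)
            bias.append(100 * (ke_hat - ke_true) / ke_true)
        print(f"mean |error| in ke from dose-time jitter: {np.mean(np.abs(bias)):.1f}%")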

  12. Bioequivalence tests based on individual estimates using non-compartmental or model-based analyses: evaluation of estimates of sample means and type I error for different designs

    PubMed Central

    Dubois, Anne; Gsteiger, Sandro; Pigeolet, Etienne; Mentré, France

    2010-01-01

    The main objective of this work is to compare the standard bioequivalence tests based on individual estimates of the area under the curve and the maximal concentration obtained by non-compartmental analysis (NCA) to those based on individual empirical Bayes estimates (EBE) obtained by nonlinear mixed effects models. We evaluate by simulation the precision of sample means estimates and the type I error of bioequivalence tests for both approaches. Crossover trials are simulated under H0 using different numbers of subjects (N) and of samples per subject (n). We simulate concentration-time profiles with different variability settings for the between-subject and within-subject variabilities and for the variance of the residual error. Bioequivalence tests based on NCA show satisfactory properties with low and high variabilities, except when the residual error is high, which leads to a very poor type I error, or when n is small, which leads to biased estimates. Tests based on EBE lead to an increase of the type I error when the shrinkage is above 20%, which occurs notably when NCA fails. In those cases, tests based on individual estimates cannot be used. PMID:19876723
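
    The NCA side of such a comparison is easy to sketch: per-subject AUC by the linear trapezoid rule, then the standard 90% CI for the geometric mean ratio against the 0.80-1.25 limits. The paired simulation below is our own simplification and ignores the period and sequence effects a real crossover analysis would model.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])  # sampling times [h]
        n = 16                                        # subjects

        def profile(ka, ke, f_over_v):
            # One-compartment oral model (dose constant absorbed into f_over_v).
            return f_over_v * ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))

        def auc_trap(c):
            return np.sum((c[1:] + c[:-1]) / 2.0 * np.diff(t))     # linear trapezoid

        log_ratios = []
        for _ in range(n):                            # paired test/reference per subject
            subj = rng.lognormal(0, 0.15)             # between-subject variability
            c_ref = profile(1.2, 0.15, 5.0) * subj * rng.lognormal(0, 0.10, t.size)
            c_test = profile(1.2, 0.15, 5.2) * subj * rng.lognormal(0, 0.10, t.size)
            log_ratios.append(np.log(auc_trap(c_test) / auc_trap(c_ref)))

        m, se = np.mean(log_ratios), stats.sem(log_ratios)
        lo, hi = np.exp(m + np.array([-1.0, 1.0]) * stats.t.ppf(0.95, n - 1) * se)
        print(f"AUC geometric mean ratio 90% CI: [{lo:.3f}, {hi:.3f}]"
              "  (bioequivalent if within [0.80, 1.25])")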

  13. Error estimate evaluation in numerical approximations of partial differential equations: A pilot study using data mining methods

    NASA Astrophysics Data System (ADS)

    Assous, Franck; Chaskalovic, Joël

    2013-03-01

    In this Note, we propose a new methodology based on exploratory data mining techniques to evaluate the errors due to the description of a given real system. First, we decompose this description error into four types of sources. Then, we construct databases of the entire information produced by different numerical approximation methods, to assess and compare the significant differences between these methods, using techniques like decision trees, Kohonen maps, or neural networks. As an example, we characterize specific states of the real system for which we can locally appreciate the accuracy between two kinds of finite element methods. In this case, this allowed us to sharpen the classical Bramble-Hilbert theorem, which gives a global error estimate, whereas our approach gives a local error estimate.

  14. Effects of Systematic and Random Errors on the Retrieval of Particle Microphysical Properties from Multiwavelength Lidar Measurements Using Inversion with Regularization

    NASA Technical Reports Server (NTRS)

    Ramirez, Daniel Perez; Whiteman, David N.; Veselovskii, Igor; Kolgotin, Alexei; Korenskiy, Michael; Alados-Arboledas, Lucas

    2013-01-01

    In this work we study the effects of systematic and random errors on the inversion of multiwavelength (MW) lidar data using the well-known regularization technique to obtain vertically resolved aerosol microphysical properties. The software implementation used here was developed at the Physics Instrumentation Center (PIC) in Troitsk (Russia) in conjunction with the NASA/Goddard Space Flight Center. Its applicability to Raman lidar systems based on backscattering measurements at three wavelengths (355, 532 and 1064 nm) and extinction measurements at two wavelengths (355 and 532 nm) has been demonstrated widely. The systematic error sensitivity is quantified by first determining the retrieved parameters for a given set of optical input data consistent with three different sets of aerosol physical parameters. Then each optical input is perturbed by varying amounts and the inversion is repeated. Using bimodal aerosol size distributions, we find a generally linear dependence of the retrieved errors in the microphysical properties on the induced systematic errors in the optical data. For the retrievals of effective radius, number/surface/volume concentrations and fine-mode radius and volume, we find that these results are not significantly affected by the range of the constraints used in inversions. But significant sensitivity was found to the allowed range of the imaginary part of the particle refractive index. Our results also indicate that there exists an additive property for the deviations induced by the biases present in the individual optical data. This property permits the results here to be used to predict deviations in retrieved parameters when multiple input optical data are biased simultaneously as well as to study the influence of random errors on the retrievals. The above results are applied to questions regarding lidar design, in particular for the spaceborne multiwavelength lidar under consideration for the upcoming ACE mission.
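
    The reported near-linear response of retrieved parameters to input biases is easy to reproduce in miniature with any regularized linear inversion, since the regularized solution is linear in the data. The Tikhonov toy below (invented operator and sizes, not the PIC/Goddard software) is such a check.

        import numpy as np

        rng = np.random.default_rng(4)
        A = rng.normal(size=(40, 12))                # forward operator (toy)
        A[:, 1:] *= 0.2                              # make most columns poorly resolved
        x_true = rng.normal(size=12)
        y0 = A @ x_true
        lam = 1e-2                                   # regularization strength

        def invert(y):
            # Tikhonov: minimize ||A x - y||^2 + lam ||x||^2
            return np.linalg.solve(A.T @ A + lam * np.eye(12), A.T @ y)

        for bias in [0.0, 0.02, 0.05, 0.10]:         # relative systematic error in y
            err = np.linalg.norm(invert(y0 * (1 + bias)) - x_true)
            print(f"input bias {bias:4.0%} -> retrieval error {err:.3f}")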

  15. Statistical and systematic errors in the measurement of weak-lensing Minkowski functionals: Application to the Canada-France-Hawaii Lensing Survey

    SciTech Connect

    Shirasaki, Masato; Yoshida, Naoki

    2014-05-01

    The measurement of cosmic shear using weak gravitational lensing is a challenging task that involves a number of complicated procedures. We study in detail the systematic errors in the measurement of weak-lensing Minkowski Functionals (MFs). Specifically, we focus on systematics associated with galaxy shape measurements, photometric redshift errors, and shear calibration correction. We first generate mock weak-lensing catalogs that directly incorporate the actual observational characteristics of the Canada-France-Hawaii Lensing Survey (CFHTLenS). We then perform a Fisher analysis using the large set of mock catalogs for various cosmological models. We find that the statistical error associated with the observational effects degrades the cosmological parameter constraints by a factor of a few. The Subaru Hyper Suprime-Cam (HSC) survey, with a sky coverage of ~1400 deg^2, will constrain the dark energy equation-of-state parameter with an error on w_0 of about 0.25 by the lensing MFs alone, but biases induced by the systematics can be comparable to the 1σ error. We conclude that the lensing MFs are powerful statistics beyond the two-point statistics only if well-calibrated measurement of both the redshifts and the shapes of source galaxies is performed. Finally, we analyze the CFHTLenS data to explore the ability of the MFs to break degeneracies between a few cosmological parameters. Using a combined analysis of the MFs and the shear correlation function, we derive the matter density Ω_m0 = 0.256 +0.054/-0.046.

  16. Estimating Prediction Uncertainty from Geographical Information System Raster Processing: A User's Manual for the Raster Error Propagation Tool (REPTool)

    USGS Publications Warehouse

    Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.

    2009-01-01

    The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental System Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
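
    A schematic of the Latin Hypercube propagation step (not REPTool's code): sample input-raster error and a model coefficient jointly, push each realization through the geospatial model, and summarize the per-cell output distribution. The raster, the toy recharge model, and all error magnitudes are our assumptions.

        import numpy as np
        from scipy.stats import qmc, norm

        rng = np.random.default_rng(9)
        precip = rng.uniform(200, 800, size=(4, 4))        # input raster [mm]

        n = 500
        lhs = qmc.LatinHypercube(d=2, seed=1).random(n)    # uniform [0,1) LHS samples
        precip_err = norm.ppf(lhs[:, 0], loc=0, scale=25)  # spatially invariant raster error
        coef = norm.ppf(lhs[:, 1], loc=0.15, scale=0.02)   # uncertain model coefficient

        # Toy model: recharge = coef * precip, evaluated per LHS realization.
        recharge = coef[:, None, None] * (precip[None] + precip_err[:, None, None])
        print("cell (0,0) recharge: mean %.1f, std %.1f mm"
              % (recharge[:, 0, 0].mean(), recharge[:, 0, 0].std()))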

  17. A merging scheme for constructing daily precipitation analyses based on objective bias-correction and error estimation techniques

    NASA Astrophysics Data System (ADS)

    Nie, Suping; Luo, Yong; Wu, Tongwen; Shi, Xueli; Wang, Zaizhi

    2015-09-01

    A new merging scheme (referred to as HL-OI) was developed to combine daily precipitation data from high-resolution gauge (HRG) observations, Climate Prediction Center morphing technique (CMORPH) satellite estimates, and National Centers for Environmental Prediction (NCEP) numerical simulations over China into reliable high-resolution daily precipitation analyses. The scheme follows a three-step strategy: remove systematic biases, quantitatively estimate error variances, and combine the useful information from each data source so as to reduce random errors. First, a cumulative distribution function matching procedure is adopted to reduce biases and provide unbiased background fields for the subsequent merging. Second, an error estimation algorithm is applied to quantify both the background and observation errors from the background departures. Third, the bias-corrected NCEP and CMORPH data are combined with the HRG data using the optimal interpolation (OI) objective analysis technique. The magnitudes and spatial structures of both observation and background errors can be estimated successfully. Results of cross-validation experiments show that the HL-OI scheme effectively removes most of the systematic biases and random errors in the background fields, as judged against independent gauge observations, and is robust even with imperfect background fields. The HL-OI merging scheme significantly improves the temporal variations, the agreement between spatial patterns, and the frequency and locations of daily precipitation occurrences. When information from gauge observations, satellite estimates, and model simulations is combined simultaneously, the merged multisource analyses perform better than dual-source analyses. These results indicate that each independent source of daily precipitation information contributes to the quality of the final merged analyses under the HL-OI framework.
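
    The OI update at the heart of the third step can be written in a few lines of linear algebra; the one-dimensional grid, covariances, and gauge values below are invented for illustration.

        import numpy as np

        nx = 20
        xb = np.full(nx, 4.0)                          # background daily precip [mm]
        obs_idx = np.array([3, 9, 15])                 # gauge locations
        yo = np.array([6.5, 2.0, 5.0])                 # gauge observations [mm]

        # Background error covariance with spatial correlation; diagonal obs error.
        dist = np.abs(np.subtract.outer(np.arange(nx), np.arange(nx)))
        B = 1.5 * np.exp(-dist / 3.0)
        R = 0.3 * np.eye(3)
        H = np.zeros((3, nx)); H[np.arange(3), obs_idx] = 1.0

        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # OI gain
        xa = xb + K @ (yo - H @ xb)                    # merged analysis
        print(np.round(xa, 2))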

  18. Methods for detecting and estimating population threshold concentrations for air pollution-related mortality with exposure measurement error

    SciTech Connect

    Cakmak, S.; Burnett, R.T.; Krewski, D.

    1999-06-01

    The association between daily fluctuations in ambient particulate matter and daily variations in nonaccidental mortality has been extensively investigated. Although it is now widely recognized that such an association exists, the form of the concentration-response model is still in question. Linear no-threshold and linear threshold models have been most commonly examined. In this paper the authors considered methods to detect and estimate threshold concentrations using time series data of daily mortality rates and air pollution concentrations. Because exposure is measured with error, they also considered the influence of measurement error in distinguishing between these two competing model specifications. The methods were illustrated on a 15-year daily time series of nonaccidental mortality and particulate air pollution data in Toronto, Canada. Nonparametric smoothed representations of the association between mortality and air pollution were adequate to graphically distinguish between these two forms. Weighted nonlinear regression methods for relative risk models were adequate to give nearly unbiased estimates of threshold concentrations even under conditions of extreme exposure measurement error. The uncertainty in the threshold estimates increased with the degree of exposure error. Regression models incorporating threshold concentrations could be clearly distinguished from linear relative risk models in the presence of exposure measurement error. The assumption of a linear model, given that a threshold model was the correct form, usually resulted in overestimates of the number of averted premature deaths, except for low threshold concentrations and large measurement error.
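
    The two competing shapes can be fit directly on simulated counts. The hockey-stick threshold model below, with a known baseline and plain least squares instead of the paper's weighted relative-risk machinery, is only a schematic contrast between the two specifications.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(6)
        pm = rng.uniform(5, 60, 1000)                        # daily PM concentration
        tau_true, beta_true = 25.0, 0.004
        log_rr = beta_true * np.maximum(0.0, pm - tau_true)  # true threshold form
        deaths = rng.poisson(50 * np.exp(log_rr))            # baseline 50 deaths/day (assumed known)

        def linear(x, b):
            return 50 * np.exp(b * x)

        def threshold(x, b, tau):
            return 50 * np.exp(b * np.maximum(0.0, x - tau))

        (b_lin,), _ = curve_fit(linear, pm, deaths, p0=[0.001])
        (b_thr, tau_hat), _ = curve_fit(threshold, pm, deaths, p0=[0.001, 15.0])
        print(f"linear slope {b_lin:.4f}; threshold fit: beta {b_thr:.4f}, "
              f"tau {tau_hat:.1f} (true {tau_true})")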

  19. Background Error Covariance Estimation Using Information from a Single Model Trajectory with Application to Ocean Data Assimilation

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele; Kovach, Robin M.; Vernieres, Guillaume

    2014-01-01

    An attractive property of ensemble data assimilation methods is that they provide flow-dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory. SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.
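
    The FAST idea of treating snapshots from a moving window along a single trajectory as an ensemble can be sketched in a few lines. The sample covariance below links an observed variable to an unobserved one; the shapes and variable names are illustrative assumptions, not the GMAO implementation.

        import numpy as np

        def fast_covariance(trajectory, window):
            """Background error covariance from the last `window` model
            snapshots of a single trajectory (shape: n_times x n_state)."""
            ensemble = trajectory[-window:]
            anomalies = ensemble - ensemble.mean(axis=0)
            return anomalies.T @ anomalies / (window - 1)

        # Hypothetical usage: the cross term B[1, 0] lets an update of
        # state element 0 (observed, e.g., temperature) spread to element
        # 1 (unobserved, e.g., salinity).
        rng = np.random.default_rng(2)
        traj = np.cumsum(rng.normal(size=(500, 3)), axis=0)
        B = fast_covariance(traj, window=60)
        regression_coeff = B[1, 0] / B[0, 0]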

  20. Application of asymptotic expansions for maximum likelihood estimators' errors to gravitational waves from binary mergers: The single interferometer case

    SciTech Connect

    Zanolin, M.; Vitale, S.; Makris, N.

    2010-06-15

    In this paper we apply to gravitational waves (GW) from the inspiral phase of binary systems a recently derived frequentist methodology to calculate analytically the error for a maximum likelihood estimate of physical parameters. We use expansions of the covariance and the bias of a maximum likelihood estimate in terms of inverse powers of the signal-to-noise ratio (SNR), where the square root of the first order in the covariance expansion is the Cramér-Rao lower bound (CRLB). We evaluate the expansions, for the first time, for GW signals in the noise of GW interferometers. The examples are limited to a single, optimally oriented interferometer. We also compare the error estimates using the first two orders of the expansions with existing numerical Monte Carlo simulations. The first two orders of the covariance allow us to get error predictions closer to what is observed in numerical simulations than the CRLB. The methodology also predicts the SNR necessary to approximate the error with the CRLB and provides new insight on the relationship between waveform properties, SNR, dimension of the parameter space, and estimation errors. For example, timing by matched filtering can achieve the CRLB only if the SNR is larger than the kurtosis of the gravitational wave spectrum, and the necessary SNR is much larger if other physical parameters are also unknown.
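
    The CRLB term of such expansions comes from inverting the Fisher information matrix. The sketch below computes it numerically for a generic frequency-domain signal model in Gaussian noise with known one-sided power spectral density; the inputs are placeholders, not the paper's inspiral templates.

        import numpy as np

        def fisher_matrix(dh_df, psd, df):
            """Fisher information for a frequency-domain signal model.

            dh_df: list of complex arrays, partial derivatives of the
                   signal h(f) with respect to each parameter.
            psd:   one-sided noise power spectral density S_n(f).
            df:    frequency resolution of the arrays.
            """
            n = len(dh_df)
            gamma = np.zeros((n, n))
            for i in range(n):
                for j in range(n):
                    integrand = (dh_df[i] * np.conj(dh_df[j])).real / psd
                    gamma[i, j] = 4.0 * np.sum(integrand) * df
            return gamma

        # The CRLB on the parameter covariance is the inverse Fisher
        # matrix; its diagonal gives the minimum variances:
        # crlb = np.linalg.inv(fisher_matrix(derivs, Sn, df))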

  1. Evaluation of Systematic and Random Errors of GPS Radio Occultation Bending Angles in the Neutral Atmosphere From the COSMIC/FORMOSAT-3 Mission

    NASA Astrophysics Data System (ADS)

    Schreiner, B.; Sokolovskiy, S.; Rocken, C.; Hunt, D.; Kuo, B.

    2008-12-01

    A fundamental observable that will be useful for future long-term climate studies is the GPS radio occultation (RO) bending angle. In this study we attempt to better understand and quantify random and systematic bending angle differences in collocated occultation pairs from the Constellation Observing System for Meteorology Ionosphere and Climate (COSMIC) / Formosa Satellite 3 (FORMOSAT-3) mission. An important feature of the COSMIC six-satellite constellation is that immediately after launch, the six satellites were clustered together in one orbit. During the first months after launch, the separation between the satellites was about 1-2 s in time (about 10 km along the orbit) and gradually increased. This small separation allowed for pairs of closely collocated occultations from one GPS satellite with almost parallel occultation planes. Thus, this COSMIC cluster mode gives a unique opportunity to evaluate RO bending angle errors by examining the differences of the retrieved profiles from the collocated occultation pairs. An initial analysis of 4,700 pairs of collocated COSMIC (satellite #3 - satellite #4) occultations from Apr - Dec 2006 shows an RMS difference of bending angles of about 0.3 % at an impact height of 20 km and near zero mean differences from 5 to 40 km impact heights (for tangent point separations less than 10 km). This analysis also shows mean bending angle differences of about 1 % at 50 km impact height and about 0.5 % at impact heights less than 5 km. An investigation into the causes of these mean differences will be presented. Additional results will be presented that consider variations of RO bending angle precision due to impact height, latitude, local time, tangent point separation distance, azimuth angle of the occultation plane, GPS satellite oscillator type, and near real-time versus post-processed analyses. Although the analysis of collocated occultations only provides estimates of precision and not accuracy, results from this study provide RO bending angle error characteristics that will be useful for the assimilation of these data by numerical weather models and also by climate models in the future.

  2. A suite of global reconstructed precipitation products and their error estimate by multivariate regression using empirical orthogonal functions: 1850-present

    NASA Astrophysics Data System (ADS)

    Shen, S. S.

    2014-12-01

    This presentation describes a suite of global precipitation products reconstructed by a multivariate regression method using an empirical orthogonal function (EOF) expansion. The sampling errors of the reconstruction are estimated for each product datum entry. The maximum temporal coverage is 1850-present and the spatial coverage is quasi-global (75°S-75°N). The temporal resolution ranges from 5-day and monthly to seasonal and annual. The Global Precipitation Climatology Project (GPCP) precipitation data from 1979-2008 are used to calculate the EOFs. The Global Historical Climatology Network (GHCN) gridded data are used to calculate the regression coefficients for the reconstructions. The sampling errors of the reconstruction are analyzed in detail for different EOF modes. Our reconstructed 1900-2011 time series of the global average annual precipitation shows a 0.024 (mm/day)/100a trend, which is very close to the trend derived from the mean of 25 models of the CMIP5 (Coupled Model Intercomparison Project Phase 5). Our reconstruction examples of 1983 El Niño precipitation and 1917 La Niña precipitation (Figure 1) demonstrate that the El Niño and La Niña precipitation patterns are well reflected in the first two EOFs. The validation of our reconstruction results against GPCP makes it possible to use the reconstruction as benchmark data for climate models. This will help the climate modeling community to improve model precipitation mechanisms and reduce the systematic difference between observed global precipitation, which hovers at around 2.7 mm/day for the reconstructions and GPCP, and model precipitation, which ranges over 2.6-3.3 mm/day for CMIP5. Our precipitation products are publicly available online, including digital data, precipitation animations, computer codes, readme files, and the user manual. This work is a joint effort between San Diego State University (Sam Shen, Nancy Tafolla, Barbara Sperberg, and Melanie Thorn) and the University of Maryland (Phil Arkin, Tom Smith, Li Ren, and Li Dai) and was supported in part by the U.S. National Science Foundation (Awards No. AGS-1015926 and AGS-1015957).
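
    A minimal version of the reconstruction idea: compute EOFs from the well-sampled modern period, then regress the sparse historical observations onto the leading modes to fill the field. The SVD-based sketch below assumes a gridded anomaly matrix and a boolean mask of observed grid points; the names and truncation level are illustrative.

        import numpy as np

        def reconstruct_field(train_anoms, sparse_obs, obs_mask, n_modes=5):
            """Fill a field from sparse observations via EOF regression.

            train_anoms: (n_times, n_grid) anomalies from the training
                         period (e.g., GPCP 1979-2008).
            sparse_obs:  (n_grid,) field, valid only where obs_mask is
                         True (e.g., GHCN gauge boxes).
            """
            # EOFs are the right singular vectors of the training matrix.
            _, _, vt = np.linalg.svd(train_anoms, full_matrices=False)
            eofs = vt[:n_modes]                     # (n_modes, n_grid)
            # Least-squares fit of the observed points to the EOFs.
            A = eofs[:, obs_mask].T                 # (n_obs, n_modes)
            coef, *_ = np.linalg.lstsq(A, sparse_obs[obs_mask], rcond=None)
            return coef @ eofs                      # reconstructed field

    The sampling error analysis in the abstract then follows from how well the retained modes span the true field and how the gauge mask projects onto them.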

  3. Estimation and sample size calculations for correlated binary error rates of biometric identification devices

    E-print Network

    Schuckers, Michael E.

    … a physiological measurement of an individual to a database of stored templates. The goal of any BID … to within a specified margin of error, e.g., Snedecor and Cochran (1995). Sample size calculations exist …

  4. Elimination of Systematic Mass Measurement Errors in Liquid Chromatography-Mass Spectrometry Based Proteomics using Regression Models and a priori Partial Knowledge of the Sample Content

    SciTech Connect

    Petyuk, Vladislav A.; Jaitly, Navdeep; Moore, Ronald J.; Ding, Jie; Metz, Thomas O.; Tang, Keqi; Monroe, Matthew E.; Tolmachev, Aleksey V.; Adkins, Joshua N.; Belov, Mikhail E.; Dabney, Alan R.; Qian, Weijun; Camp, David G.; Smith, Richard D.

    2008-02-01

    The high mass measurement accuracy and precision available with recently developed mass spectrometers is increasingly used in proteomics analyses to confidently identify tryptic peptides from complex mixtures of proteins, as well as post-translational modifications and peptides from non-annotated proteins. To take full advantage of high mass measurement accuracy instruments it is necessary to limit systematic mass measurement errors. It is well known that errors in the measurement of m/z can be affected by experimental parameters including, e.g., outdated calibration coefficients, ion intensity, and temperature changes during the measurement. Traditionally, these variations have been corrected through the use of internal calibrants (well-characterized standards introduced with the sample being analyzed). In this paper we describe an alternative approach in which the calibration is provided through the use of a priori knowledge of the sample being analyzed. Such an approach has previously been demonstrated based on the dependence of systematic error on m/z alone. To incorporate additional explanatory variables, we employed multidimensional, nonparametric regression models, which were evaluated using several commercially available instruments. The applied approach is shown to remove any noticeable biases from the overall mass measurement errors and decreases the overall standard deviation of the mass measurement error distribution by 1.2- to 2-fold, depending on instrument type. Subsequent reduction of the random errors based on multiple measurements over consecutive spectra further improves accuracy and results in an overall decrease of the standard deviation by 1.8- to 3.7-fold. This new procedure will decrease the false discovery rates for peptide identifications using high accuracy mass measurements.
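
    The core of such post-acquisition recalibration is fitting a smooth error surface over the explanatory variables and subtracting it. The sketch below uses a binned-median correction over m/z, a deliberately simple one-dimensional stand-in for the paper's multidimensional nonparametric regression models; the bin count and names are assumptions.

        import numpy as np

        def recalibrate(mz, error_ppm, n_bins=50):
            """Subtract the systematic trend of mass error vs. m/z,
            estimated as a binned median, leaving residual errors."""
            bins = np.linspace(mz.min(), mz.max(), n_bins + 1)
            idx = np.clip(np.digitize(mz, bins) - 1, 0, n_bins - 1)
            medians = np.array([
                np.median(error_ppm[idx == b]) if np.any(idx == b) else 0.0
                for b in range(n_bins)
            ])
            return error_ppm - medians[idx]

        # Extending to more explanatory variables (ion intensity, scan
        # number) amounts to binning or smoothing in extra dimensions.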

  5. Bacterial Cooperation Causes Systematic Errors in Pathogen Risk Assessment due to the Failure of the Independent Action Hypothesis

    PubMed Central

    Cornforth, Daniel M.; Matthews, Andrew; Brown, Sam P.; Raymond, Ben

    2015-01-01

    The Independent Action Hypothesis (IAH) states that pathogenic individuals (cells, spores, virus particles, etc.) behave independently of each other, so that each has an independent probability of causing systemic infection or death. The IAH is not just of basic scientific interest; it forms the basis of our current estimates of infectious disease risk in humans. Despite the important role of the IAH in managing disease interventions for food- and water-borne pathogens, experimental support for the IAH in bacterial pathogens is indirect at best. Moreover, since the IAH was first proposed, cooperative behaviors have been discovered in a wide range of microorganisms, including many pathogens. A fundamental principle of cooperation is that the fitness of individuals is affected by the presence and behaviors of others, which is contrary to the assumption of independent action. In this paper, we test the IAH in Bacillus thuringiensis (B.t.), a widely occurring insect pathogen that releases toxins that benefit others in the inoculum, infecting the diamondback moth, Plutella xylostella. By experimentally separating B.t. spores from their toxins, we demonstrate that the IAH fails because there is an interaction between toxin and spore effects on mortality, where the toxin effect is synergistic and cannot be accommodated by independence assumptions. Finally, we show that applying recommended IAH dose-response models to high dose data leads to systematic overestimation of mortality risks at low doses, due to the presence of synergistic pathogen interactions. Our results show that cooperative secretions can easily invalidate the IAH, and that such mechanistic details should be incorporated into pathogen risk analysis. PMID:25909384

  7. Wrapper feature selection for small sample size data driven by complete error estimates.

    PubMed

    Macaš, Martin; Lhotská, Lenka; Bakstein, Eduard; Novák, Daniel; Wild, Jiří; Sieger, Tomáš; Vostatek, Pavel; Jech, Robert

    2012-10-01

    This paper focuses on wrapper-based feature selection for a 1-nearest neighbor classifier. We consider in particular the case of a small sample size with a few hundred instances, which is common in biomedical applications. We propose a technique for calculating the complete bootstrap for a 1-nearest-neighbor classifier (i.e., averaging over all desired test/train partitions of the data). The complete bootstrap and the complete cross-validation error estimate with lower variance are applied as novel selection criteria and are compared with the standard bootstrap and cross-validation in combination with three optimization techniques - sequential forward selection (SFS), binary particle swarm optimization (BPSO) and simplified social impact theory based optimization (SSITO). The experimental comparison based on ten datasets draws the following conclusions: for all three search methods examined here, the complete criteria are a significantly better choice than standard 2-fold cross-validation, 10-fold cross-validation and bootstrap with 50 trials irrespective of the selected output number of iterations. All the complete criterion-based 1NN wrappers with SFS search performed better than the widely-used FILTER and SIMBA methods. We also demonstrate the benefits and properties of our approaches on an important and novel real-world application of automatic detection of the subthalamic nucleus. PMID:22472029
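
    To make the selection criterion concrete, the sketch below approximates a complete cross-validation error for a 1-nearest-neighbor classifier by averaging over many random test/train partitions (the complete estimate averages over all of them; the paper computes this efficiently for 1-NN). The scikit-learn calls are real; the split counts are illustrative.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import ShuffleSplit

        def approx_complete_cv(X, y, test_size=0.5, n_splits=2000, seed=0):
            """Monte Carlo approximation of the complete cross-validation
            error of a 1-NN classifier."""
            splitter = ShuffleSplit(n_splits=n_splits, test_size=test_size,
                                    random_state=seed)
            errors = []
            for train_idx, test_idx in splitter.split(X):
                clf = KNeighborsClassifier(n_neighbors=1)
                clf.fit(X[train_idx], y[train_idx])
                errors.append(1.0 - clf.score(X[test_idx], y[test_idx]))
            return float(np.mean(errors))

        # In a wrapper, this criterion is re-evaluated for each candidate
        # feature subset (X restricted to those columns), and the subset
        # with the lowest estimated error is kept.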

  8. Statistical theory for estimating sampling errors of regional radiation averages based on satellite measurements

    NASA Technical Reports Server (NTRS)

    Smith, G. L.; Bess, T. D.; Minnis, P.

    1983-01-01

    The processes which determine the weather and climate are driven by the radiation received by the earth and the radiation subsequently emitted. A knowledge of the absorbed and emitted components of radiation is thus fundamental for the study of these processes. In connection with the desire to improve the quality of long-range forecasting, NASA is developing the Earth Radiation Budget Experiment (ERBE), consisting of a three-channel scanning radiometer and a package of nonscanning radiometers. A set of these instruments is to be flown on both the NOAA-F and NOAA-G spacecraft, in sun-synchronous orbits, and on an Earth Radiation Budget Satellite. The purpose of the scanning radiometer is to obtain measurements from which the average reflected solar radiant exitance and the average earth-emitted radiant exitance at a reference level can be established. The estimate of regional average exitance obtained will not exactly equal the true value of the regional average exitance, but will differ due to spatial sampling. A method is presented for evaluating this spatial sampling error.

  9. Impact of transport and modelling errors on the estimation of methane sources and sinks by inverse modelling

    NASA Astrophysics Data System (ADS)

    Locatelli, Robin; Bousquet, Philippe; Chevallier, Frédéric

    2013-04-01

    Since the nineties, inverse modelling by assimilating atmospheric measurements into a chemical transport model (CTM) has been used to derive sources and sinks of atmospheric trace gases. More recently, the high global warming potential of methane (CH4) and unexplained variations of its atmospheric mixing ratio caught the attention of several research groups. Indeed, the diversity and the variability of methane sources induce high uncertainty on the present and future evolution of the CH4 budget. With the increase of available measurement data to constrain inversions (satellite data, high frequency surface and tall tower observations, FTIR spectrometry, ...), the main limiting factor is about to become the representation of atmospheric transport in CTMs. Indeed, errors in transport modelling convert directly into flux changes when perfect transport is assumed in atmospheric inversions. Hence, we propose an inter-model comparison in order to quantify the impact of transport and modelling errors on the CH4 fluxes estimated in a variational inversion framework. Several inversion experiments are conducted using the same set-up (prior emissions, measurement and prior errors, OH field, initial conditions) of the variational system PYVAR, developed at LSCE (Laboratoire des Sciences du Climat et de l'Environnement, France). Nine different models (ACTM, IFS, IMPACT, IMPACT1x1, MOZART, PCTM, TM5, TM51x1 and TOMCAT) used in the TRANSCOM-CH4 experiment (Patra et al., 2011) provide synthetic measurement data at up to 280 surface sites to constrain the inversions performed using the PYVAR system. Only the CTM (and the meteorological drivers that drive it) used to create the pseudo-observations varies among inversions. Consequently, the comparison of the nine inverted methane flux estimates obtained for 2005 gives a good order of magnitude for the impact of transport and modelling errors on the fluxes estimated with current and future networks. It is shown that transport and modelling errors lead to a discrepancy of 27 TgCH4 per year at the global scale, representing 5% of the total methane emissions for 2005. At the continental scale, transport and modelling errors have impacts roughly in proportion to the area of the regions, ranging from 36 TgCH4 in North America to 7 TgCH4 in Boreal Eurasia, with a percentage range from 23% to 48%. Thus, the contribution of transport and modelling errors to the mismatch between measurements and simulated methane concentrations is large considering the present questions on the methane budget. Moreover, diagnostics of the error statistics included in our inversions have been computed. They show that the errors contained in the measurement error covariance matrix are underestimated in current inversions, suggesting that transport and modelling errors should be included more properly in future inversions.
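
    For reference, the variational framework minimizes the standard cost function balancing prior fluxes against observations. A minimal sketch, assuming a linear observation operator H (the transport model) and Gaussian error statistics:

        import numpy as np

        def cost_and_gradient(x, xb, B_inv, H, y, R_inv):
            """J(x) = 1/2 (x-xb)^T B^-1 (x-xb) + 1/2 (Hx-y)^T R^-1 (Hx-y),
            returned together with its gradient."""
            dx = x - xb            # departure from prior fluxes
            dy = H @ x - y         # departure from observations
            J = 0.5 * dx @ B_inv @ dx + 0.5 * dy @ R_inv @ dy
            grad = B_inv @ dx + H.T @ R_inv @ dy
            return J, grad

        # Transport model error enters through H: an imperfect H biases
        # the minimizing x unless R is inflated to account for it.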

  10. A priori error estimates for an hp-version of the discontinuous Galerkin method for hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.; Oden, J. Tinsley

    1993-01-01

    A priori error estimates are derived for hp-versions of the finite element method for discontinuous Galerkin approximations of a model class of linear, scalar, first-order hyperbolic conservation laws. These estimates are derived in a mesh-dependent norm in which the coefficients depend upon both the local mesh size h_K and a number p_K which can be identified with the spectral order of the local approximations over each element.

  11. Output feedback direct adaptive neural network control for uncertain SISO nonlinear systems using a fuzzy estimator of the control error.

    PubMed

    Chemachema, Mohamed

    2012-12-01

    A direct adaptive control algorithm based on neural networks (NN) is presented for a class of single input single output (SISO) nonlinear systems. The proposed controller is implemented without a priori knowledge of the nonlinear system, and only the output of the system is considered available for measurement. Contrary to the approaches available in the literature, in the proposed controller the updating signal used in the adaptive laws is an estimate of the control error, which is directly related to the NN weights, instead of the tracking error. A fuzzy inference system (FIS) is introduced to obtain an estimate of the control error. Without any additional control term in the NN adaptive controller, all the signals involved in the closed loop are proven to be exponentially bounded, and hence the stability of the system is ensured. Simulation results demonstrate the effectiveness of the proposed approach. PMID:23037773

  12. Strict upper and lower bounds of stress intensity factors at 2D elastic notches based on constitutive relation error estimation

    NASA Astrophysics Data System (ADS)

    Wang, Li; Zhong, Hongzhi

    2015-11-01

    This paper aims to evaluate the stress intensity factors (SIFs) at 2D elastic notches that are of concern in structural failure analysis. Strict upper and lower bounds of the SIFs are acquired by a unified approach within the framework of the constitutive relation error (CRE) estimation. The main ingredient is a unified representation of the SIFs, achieved by a path-independent integral with the aid of specialized auxiliary fields. With the unified approach, one can (1) establish the dual problem based on the unified representation of the SIFs; (2) perform dual error analysis, resulting in the mixture of errors in both primal and dual problems; (3) acquire strict upper and lower bounds of the SIFs by utilizing the strict bounding property of the CRE estimation. Numerical examples are studied to illustrate the strict bounding properties of SIFs at cracks and notches.

  13. Estimating Error in Using Ambient PM2.5 Concentrations as Proxies for Personal Exposures: A Review

    EPA Science Inventory

    Several methods have been used to account for measurement error inherent in using the ambient concentration of particulate matter < 2.5 µm (PM2.5, µg/m3) as a proxy for personal exposure. Common features of such methods are their reliance on the estimated ...

  14. Spatial accounting for errors in LiDAR-derived products: Snow volume and snow water equivalent estimation

    NASA Astrophysics Data System (ADS)

    Tinkham, W. T.; Hoffman, C. M.; Falkowski, M. J.; Smith, A. M.; Link, T. E.; Marshall, H.

    2011-12-01

    Light Detection and Ranging (LiDAR) has become one of the most effective and reliable means of characterizing surface topography and vegetation structure. Most LiDAR-derived estimates such as vegetation height, snow depth, and floodplain boundaries rely on the accurate creation of digital terrain models (DTM). Given the importance of an accurate DTM when using LiDAR data to estimate snow depth, it is necessary to understand the variables that influence DTM accuracy in order to assess snow depth error. A series of 4 x 4 m plots surveyed at 0.5 m spacing in a semi-arid catchment was used, along with a set of 35 variables, to train the Random Forests algorithm to spatially predict vertical error within a LiDAR-derived DTM. The final model was utilized to predict the combined error resulting from snow volume and snow water equivalent estimates derived from a snow-free LiDAR DTM and a snow-on LiDAR acquisition of the same site. The methodology allows for a statistical quantification of the spatially distributed error patterns that are incorporated into the estimation of snow volume and snow water equivalents from LiDAR.
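
    A compact version of the error-mapping step: train a Random Forest on plot-derived vertical errors with terrain and vegetation predictors, predict the error over the full grid, and propagate it into the snow-depth differencing. The scikit-learn calls are real; the predictor set, shapes, and the assumed snow-on surface error are illustrative.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(3)
        # X_train: predictors at surveyed plots (slope, canopy density,
        # return density, ...); err_train: measured DTM vertical error.
        X_train = rng.normal(size=(200, 35))
        err_train = 0.1 * X_train[:, 0] + rng.normal(0.0, 0.05, 200)

        rf = RandomForestRegressor(n_estimators=500, random_state=0)
        rf.fit(X_train, err_train)

        # Predict spatially distributed DTM error over the full grid and
        # combine it in quadrature with the snow-on surface error to get
        # the snow depth (differencing) error.
        X_grid = rng.normal(size=(10000, 35))
        dtm_err = rf.predict(X_grid)
        snowon_err = 0.05  # assumed snow-on acquisition error (m)
        snow_depth_err = np.sqrt(dtm_err**2 + snowon_err**2)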

  15. Rounding Errors and Volatility Estimation (Journal of Financial Econometrics, 2015, Vol. 13, No. 2, 478-504)

    E-print Network

    Mykland, Per A.

    … Yingying Li, Department of Information Systems, Business Statistics and Operations Management, Hong Kong University of Science and Technology … statistical improvement. (JEL: C02, C13, C14) KEYWORDS: rounding errors, bias-correction, diffusion process

  16. ON A PRIORI ERROR ESTIMATES FOR A TWO-PHASE MOVING-INTERFACE PROBLEM WITH KINETIC CONDITION

    E-print Network

    Eindhoven, Technische Universiteit

    … the penetration of a sharp carbonation front into unsaturated cement-based materials. The special feature is the carbonation reaction concentrated on the moving boundary. We prove a priori error estimates … reaction-slow-transport scenarios in porous media is to employ a so-called moving-interface model …

  17. A Factorial Evaluation of Effects of Model Specification and Error on Parameter Estimation in a Structural Equation Model.

    ERIC Educational Resources Information Center

    Farley, John U.; Reddy, Srinivas K.

    1987-01-01

    In an experiment manipulating artificial data in a factorial design, model misspecification and varying levels of error in measurement and in model structure are shown to have significant effects on LISREL parameter estimates in a modified peer influence model. (Author/LMO)

  18. Error Modeling and Estimation Fusion for Indoor Localization Weipeng Zhuo Bo Zhang S.-H. Gary Chan

    E-print Network

    Chan, Shueng-Han Gary

    … While the Global Positioning System (GPS) has already achieved high accuracy, it only works well in outdoor open … There has been much interest in offering multimedia location-based service (LBS) to indoor …

  19. Speech Enhancement of Spectral Magnitude Bin Trajectories using Gaussian Mixture-Model based Minimum Mean-Square Error Estimators

    E-print Network

    … minimum mean-square error estimators have been applied to speech enhancement in the temporal, transform … -based MMSE estimator to spectral magnitude-bin trajectories. In addition, methods for incorporating speech …

  20. A practical method of estimating standard error of age in the fission track dating method

    USGS Publications Warehouse

    Johnson, N.M.; McGee, V.E.; Naeser, C.W.

    1979-01-01

    A first-order approximation formula for the propagation of error in the fission track age equation is given by P_A = C[P_s^2 + P_i^2 + P_φ^2 - 2rP_sP_i]^(1/2), where P_A, P_s, P_i and P_φ are the percentage errors of age, of spontaneous track density, of induced track density, and of neutron dose, respectively, and C is a constant. The correlation, r, between spontaneous and induced track densities is a crucial element in the error analysis, acting generally to improve the standard error of age. In addition, the correlation parameter r is instrumental in specifying the level of neutron dose, a controlled variable, which will minimize the standard error of age. The results from the approximation equation agree closely with the results from an independent statistical model for the propagation of errors in the fission-track dating method.
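
    The propagation formula translates directly into code. A minimal sketch (the constant C and the inputs are placeholders; in practice P_s, P_i and P_φ come from counting statistics and dosimetry):

        import math

        def age_percentage_error(p_s, p_i, p_phi, r, c=1.0):
            """First-order propagated percentage error of a fission-track
            age: P_A = C*sqrt(Ps^2 + Pi^2 + Pphi^2 - 2*r*Ps*Pi)."""
            return c * math.sqrt(p_s**2 + p_i**2 + p_phi**2
                                 - 2.0 * r * p_s * p_i)

        # A positive correlation r reduces the age error, since the
        # spontaneous and induced densities enter the age equation as a
        # ratio; compare r = 0.6 with the uncorrelated case r = 0:
        print(age_percentage_error(5.0, 4.0, 2.0, r=0.6))  # smaller
        print(age_percentage_error(5.0, 4.0, 2.0, r=0.0))  # larger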

  1. Errors and parameter estimation in precipitation-runoff modeling 1. Theory.

    USGS Publications Warehouse

    Troutman, B.M.

    1985-01-01

    Errors in complex conceptual precipitation-runoff models may be analyzed by placing them into a statistical framework. This amounts to treating the errors as random variables and defining the probabilistic structure of the errors. By using such a framework, a large array of techniques that have been presented in the statistical literature becomes available to the modeler for quantifying and analyzing the various sources of error. A number of these techniques are reviewed in this paper, with special attention to the peculiarities of hydrologic models. -from Author

  2. Procedures for using expert judgment to estimate human-error probabilities in nuclear power plant operations. [PWR; BWR]

    SciTech Connect

    Seaver, D.A.; Stillwell, W.G.

    1983-03-01

    This report describes and evaluates several procedures for using expert judgment to estimate human-error probabilities (HEPs) in nuclear power plant operations. These HEPs are currently needed for several purposes, particularly for probabilistic risk assessments. Data do not exist for estimating these HEPs, so expert judgment can provide these estimates in a timely manner. Five judgmental procedures are described here: paired comparisons, ranking and rating, direct numerical estimation, indirect numerical estimation and multiattribute utility measurement. These procedures are evaluated in terms of several criteria: quality of judgments, difficulty of data collection, empirical support, acceptability, theoretical justification, and data processing. Situational constraints such as the number of experts available, the number of HEPs to be estimated, the time available, the location of the experts, and the resources available are discussed in regard to their implications for selecting a procedure for use.

  3. Application of Parallel Adjoint-Based Error Estimation and Anisotropic Grid Adaptation for Three-Dimensional Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Lee-Rausch, E. M.; Park, M. A.; Jones, W. T.; Hammond, D. P.; Nielsen, E. J.

    2005-01-01

    This paper demonstrates the extension of error estimation and adaptation methods to parallel computations enabling larger, more realistic aerospace applications and the quantification of discretization errors for complex 3-D solutions. Results were shown for an inviscid sonic-boom prediction about a double-cone configuration and a wing/body segmented leading edge (SLE) configuration where the output function of the adjoint was pressure integrated over a part of the cylinder in the near field. After multiple cycles of error estimation and surface/field adaptation, a significant improvement in the inviscid solution for the sonic boom signature of the double cone was observed. Although the double-cone adaptation was initiated from a very coarse mesh, the near-field pressure signature from the final adapted mesh compared very well with the wind-tunnel data, which illustrates that the adjoint-based error estimation and adaptation process requires no a priori refinement of the mesh. Similarly, the near-field pressure signature for the SLE wing/body sonic boom configuration showed a significant improvement from the initial coarse mesh to the final adapted mesh in comparison with the wind tunnel results. Error estimation and field adaptation results were also presented for the viscous transonic drag prediction of the DLR-F6 wing/body configuration, and results were compared to a series of globally refined meshes. Two of these globally refined meshes were used as a starting point for the error estimation and field-adaptation process where the output function for the adjoint was the total drag. The field-adapted results showed an improvement in the prediction of the drag in comparison with the finest globally refined mesh and a reduction in the estimate of the remaining drag error. The adjoint-based adaptation parameter showed a need for increased resolution on the surface of the wing/body as well as a need for wake resolution downstream of the fuselage and wing trailing edge in order to achieve the requested drag tolerance. Although further adaptation was required to meet the requested tolerance, no further cycles were computed in order to avoid large discrepancies between the surface mesh spacing and the refined field spacing.

  4. PM2.5 of ambient origin: Estimates and exposure errors relevant to PM epidemiology

    PubMed Central

    Meng, Qing Yu; Turpin, Barbara J.; Polidori, Andrea; Lee, Jong Hoon; Weisel, Clifford; Morandi, Maria; Colome, Steven; Stock, Thomas; Winer, Arthur; Zhang, Jim

    2008-01-01

    Epidemiological studies routinely use central-site particulate matter (PM) as a surrogate for exposure to PM of ambient (outdoor) origin. Below we quantify exposure errors that arise from variations in particle infiltration to aid evaluation of the use of this surrogate, rather than actual exposure, in PM epidemiology. Measurements from 114 homes in 3 cities from the Relationship of Indoor, Outdoor and Personal Air (RIOPA) study were used. Indoor PM2.5 of outdoor origin was calculated: 1) assuming a constant infiltration factor, as would be the case if central-site PM were a “perfect surrogate” for exposure to outdoor particles; 2) including variations in measured air exchange rates across homes; 3) also incorporating home-to-home variations in particle composition, and 4) calculating sample-specific infiltration factors. The final estimates of PM2.5 of outdoor origin take into account variations in building construction, ventilation practices, and particle properties that result in home-to-home and day-to-day variations in particle infiltration. As assumptions became more realistic (from the first, most constrained model to the fourth, least constrained model), the mean concentration of PM2.5 of outdoor origin increased. Perhaps more importantly, the bandwidth of the distribution increased. These results quantify several ways in which the use of central site PM results in underestimates of the ambient PM2.5 exposure distribution bandwidth. The result is larger uncertainties in relative risk factors for PM2.5 than would occur if epidemiological studies used more accurate exposure measures. In certain situations this can lead to bias. PMID:16082937

  5. The Gulliver Effect: The Impact of Error in an Elephantine Subpopulation on Estimates for Lilliputian Subpopulations

    ERIC Educational Resources Information Center

    Micceri, Theodore; Parasher, Pradnya; Waugh, Gordon W.; Herreid, Charlene

    2009-01-01

    An extensive review of the research literature and a study comparing over 36,000 survey responses with archival true scores indicated that one should expect a minimum of at least three percent random error for the least ambiguous of self-report measures. The Gulliver Effect occurs when a small proportion of error in a sizable subpopulation exerts…

  6. Is the four-day rotation of Venus illusory?. [includes systematic error in radial velocities of solar lines reflected from Venus

    NASA Technical Reports Server (NTRS)

    Young, A. T.

    1974-01-01

    An overlooked systematic error exists in the apparent radial velocities of solar lines reflected from regions of Venus near the terminator, owing to a combination of the finite angular size of the Sun and its large (2 km/sec) equatorial velocity of rotation. This error produces an apparent, but fictitious, retrograde component of planetary rotation, typically on the order of 40 meters/sec. Spectroscopic, photometric, and radiometric evidence against a 4-day atmospheric rotation is also reviewed. The bulk of the somewhat contradictory evidence seems to favor slow motions, on the order of 5 m/sec, in the atmosphere of Venus; the 4-day rotation may be due to a traveling wave-like disturbance, not bulk motions, driven by the UV albedo differences.

  7. Heats of formation of solids with error estimation: The mBEEF functional with and without fitted reference energies

    NASA Astrophysics Data System (ADS)

    Pandey, Mohnish; Jacobsen, Karsten W.

    2015-06-01

    The need for prediction of accurate electronic binding energies has led to the development of different schemes for combining density functional calculations, typically at the level of the generalized gradient approximation (GGA), with experimental information. We analyze one such scheme by Stevanović et al. [Phys. Rev. B 85, 115104 (2012), 10.1103/PhysRevB.85.115104] for predictions of compound enthalpies of formation using fitted elemental-phase reference energies. We show that different versions of GGA with or without +U and a meta-GGA (TPSS) lead to comparable accuracy after fitting the reference energies. Our results also show that the recently developed mBEEF, a Bayesian error estimation functional, gives comparable accuracy to the other functionals even without the fitting. The mBEEF functional furthermore supplies an ensemble estimate of the prediction errors in reasonable agreement with the actual errors. We also show that using the fitting scheme on the mBEEF ensemble leads to improved accuracy including realistic error estimation.

  8. Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions

    NASA Astrophysics Data System (ADS)

    McCullough, Christopher; Bettadpur, Srinivas

    2015-04-01

    In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.

  9. Systematic analysis of video data from different human–robot interaction studies: a categorization of social signals during error situations

    PubMed Central

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human–robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human–robot interaction experiments. For that, we analyzed 201 videos of five human–robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human–robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies. PMID:26217266

  10. Resimulation of noise: a precision estimator for least square error curve-fitting tested for axial strain time constant imaging

    NASA Astrophysics Data System (ADS)

    Nair, S. P.; Righetti, R.

    2015-05-01

    Recent elastography techniques focus on imaging information on properties of materials which can be modeled as viscoelastic or poroelastic. These techniques often require fitting temporal strain data, acquired from either a creep or stress-relaxation experiment, to a mathematical model using least square error (LSE) parameter estimation. It is known that the strain versus time relationship for tissues undergoing creep compression is non-linear. In non-linear cases, devising a measure of estimate reliability can be challenging. In this article, we have developed and tested a method, which we call Resimulation of Noise (RoN), to provide a measure of non-linear LSE parameter estimate reliability. RoN provides a measure of reliability by estimating the spread of parameter estimates from a single experiment realization. We have tested RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN-estimated precision has a linear relationship to the actual precision of the LSE estimator. We have also compared results from the RoN-derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While RoN is specifically tested only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
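
    The resimulation idea is, broadly: fit once, estimate the noise level from the residuals, then repeatedly add synthetic noise of that level to the fitted curve and refit; the spread of the refitted parameters is the reliability measure. A minimal sketch with an exponential decay model (SciPy's curve_fit is real; the model and noise assumptions are illustrative):

        import numpy as np
        from scipy.optimize import curve_fit

        def model(t, a, tau):
            return a * np.exp(-t / tau)

        def ron_spread(t, y, n_resim=200, seed=0):
            """Resimulation-of-noise spread of the time-constant fit."""
            rng = np.random.default_rng(seed)
            popt, _ = curve_fit(model, t, y, p0=(y.max(), t.mean()))
            sigma = np.std(y - model(t, *popt))  # noise from residuals
            taus = []
            for _ in range(n_resim):
                y_sim = model(t, *popt) + rng.normal(0.0, sigma, t.size)
                p_sim, _ = curve_fit(model, t, y_sim, p0=popt)
                taus.append(p_sim[1])
            return popt[1], np.std(taus)  # estimate and its spread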

  11. Influence of convective parameterization on the systematic errors of Climate Forecast System (CFS) model over the Indian monsoon region from an extended range forecast perspective

    NASA Astrophysics Data System (ADS)

    Pattnaik, S.; Abhilash, S.; De, S.; Sahai, A. K.; Phani, R.; Goswami, B. N.

    2013-07-01

    This study investigates the influence of the Simplified Arakawa-Schubert (SAS) and Relaxed Arakawa-Schubert (RAS) cumulus parameterization schemes on coupled Climate Forecast System version 1 (CFS-1, T62L64) retrospective forecasts over the Indian monsoon region from an extended range forecast perspective. The forecast data sets comprise 45 days of model integrations based on 31 different initial conditions at pentad intervals starting from 1 May to 28 September for the years 2001 to 2007. It is found that the mean climatological features of the Indian summer monsoon months (JJAS) are reasonably simulated by both versions (i.e., SAS and RAS) of the model; however, stronger cross-equatorial flow and excess stratiform rainfall are noted in RAS compared to SAS. Both versions of the model overestimated the apparent heat source and moisture sink compared to the NCEP/NCAR reanalysis. The prognosis evaluation of the daily forecast climatology reveals robust systematic warming (moistening) biases in RAS and cooling (drying) biases in SAS, particularly in the middle and upper troposphere of the model. Using error energy/variance and root mean square error methodology, it is also established that the major contribution to the model's total error comes from the systematic component of the model error. It is also found that the forecast error growth of temperature in RAS is less than that of SAS; however, the scenario is reversed for moisture errors, although the difference in moisture errors between these two forecasts is not very large compared to that of the temperature errors. Broadly, it is found that both versions of the model underestimate the rainfall area and amount over the Indian land region and overestimate them over the neighboring oceanic region. The rainfall forecast results at pentad intervals show that SAS and RAS have good prediction skill over the Indian monsoon core zone and the Arabian Sea. There is less excess rainfall, particularly over the oceanic region, in RAS up to 30 days of forecast duration compared to SAS. It is also evident that systematic errors in the coverage area of excess rainfall over the eastern foothills of the Himalayas remain unchanged irrespective of cumulus parameterization and initial conditions. It is revealed that, due to stronger moisture transport in RAS, there is a robust amplification of moist static energy, facilitating intense convective instability within the model and boosting the moisture supply from the surface to the upper levels through convergence. Concurrently, moisture detrainment from cloud to environment at multiple levels from the spectrum of clouds in RAS leads to a large accumulation of moisture in the middle and upper troposphere of the model. This abundant moisture leads to large-scale condensational heating through a simple cloud microphysics scheme. This intense upper-level heating contributes to the warm bias and a considerable increase in stratiform rainfall in RAS compared to SAS. In a nutshell, the concerted and sustained moisture supply from the bottom as well as from the top in RAS is the crucial factor behind its warm temperature bias.

  12. Systematic angle random walk estimation of the constant rate biased ring laser gyro.

    PubMed

    Yu, Huapeng; Wu, Wenqi; Wu, Meiping; Feng, Guohu; Hao, Ming

    2013-01-01

    An accurate account of the angle random walk (ARW) coefficients of gyros in the constant rate biased ring laser gyro (RLG) inertial navigation system (INS) is very important in practical engineering applications. However, no reported experimental work has dealt with the issue of characterizing the ARW of the constant rate biased RLG in the INS. To avoid the need for high-cost precise calibration tables and complex measuring set-ups, the objective of this study is to present a cost-effective experimental approach to characterize the ARW of the gyros in the constant rate biased RLG INS. In the system, turntable dynamics and other external noises would inevitably contaminate the measured RLG data, leading to the question of isolating such disturbances. A practical observation model of the gyros in the constant rate biased RLG INS is discussed, and an experimental method based on the fast orthogonal search (FOS), applied to the practical observation model to separate the ARW error from the measured RLG data, is proposed. The validity of the FOS-based method is checked by estimating the ARW coefficients of a mechanically dithered RLG under stationary and turntable rotation conditions. By utilizing the FOS-based method, the average ARW coefficient of the constant rate biased RLG in the postulate system is estimated. The experimental results show that the FOS-based method achieves high denoising ability and estimates the ARW coefficients of the constant rate biased RLG in the postulate system accurately. The FOS-based method does not need a high-cost precise calibration table or a complex measuring set-up, and the statistical results of the tests provide references for the engineering application of the constant rate biased RLG INS. PMID:23447008
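
    For context, the textbook way to read an ARW coefficient off gyro data is the Allan deviation, on which white rate noise appears with slope -1/2 and the ARW coefficient is the deviation at an averaging time of 1 s. This is a standard alternative diagnostic, not the paper's FOS approach; below is a minimal non-overlapping Allan deviation sketch.

        import numpy as np

        def allan_deviation(rate, fs, taus):
            """Non-overlapping Allan deviation of a gyro rate signal.

            rate: rate samples; fs: sampling rate (Hz); taus: averaging
            times (s). Returns sigma(tau) for each requested tau.
            """
            sigmas = []
            for tau in taus:
                m = int(tau * fs)                 # samples per cluster
                n = rate.size // m
                means = rate[:n * m].reshape(n, m).mean(axis=1)
                sigmas.append(np.sqrt(0.5 * np.mean(np.diff(means) ** 2)))
            return np.array(sigmas)

        # On the slope -1/2 (white noise) segment, ARW = sigma(tau) *
        # sqrt(tau), conventionally quoted at tau = 1 s.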

  14. Surface Error Estimation of Pseudo-Parabolic Surface Made by Using Gore Membranes

    NASA Astrophysics Data System (ADS)

    Nagata, Tomoko; Ishida, Ryohei

    In this paper, we describe the surface error of a pseudo-parabolic surface used to construct an inflatable parabolic reflector. The gore sheet is generated by cutting the three-dimensional parabolic surface. A scheme for generating the gore sheet is described, and the rms surface error between the parabolic surface and the three-dimensional shape composed of the gore sheets is proposed and formulated. The rms surface error between the parabolic surface and the shape produced by pressurizing the circular membrane is also formulated. Finally, the possibility that a parabolic reflector composed of the gore sheets can achieve high surface accuracy is shown.

  15. Estimation of organic carbon blank values and error structures of the speciation trends network data for source apportionment

    SciTech Connect

    Eugene Kim; Philip K. Hopke; Youjun Qin

    2005-08-01

    Because the particulate organic carbon (OC) concentrations reported in the U.S. Environmental Protection Agency Speciation Trends Network (STN) data were not blank corrected, the OC blank concentrations were estimated using the intercept in the regression of OC concentrations against particulate matter < 2.5 µm in aerodynamic diameter (PM2.5). The estimated OC blank concentrations ranged from 1 to 2.4 µg/m3 for the 13 monitoring sites in the northeastern United States, with higher values in urban areas. In the STN data, several different samplers and analyzers are used, and the various instruments show different method detection limit (MDL) values as well as different errors. A comprehensive set of error structures for use in numerous source apportionment studies of STN data was estimated by comparing a limited set of measured concentrations and their associated uncertainties. To examine the estimated error structures and investigate the appropriate MDL values, PM2.5 samples collected at an STN site in Burlington, VT, were analyzed through the application of positive matrix factorization. A total of 323 samples collected between December 2000 and December 2003 and 49 species selected on the basis of several variable selection criteria were used, and eight sources were successfully identified in this study with the estimated error structures and the minimum values among the different MDL values from the five instruments: secondary sulfate aerosol (41%), identified as the result of emissions from coal-fired power plants; secondary nitrate aerosol (20%); airborne soil (15%); gasoline vehicle emissions (7%); diesel emissions (7%); aged sea salt (4%); copper smelting (3%); and ferrous smelting (2%). Time series plots of contributions from airborne soil indicate that the highly elevated impacts from this source were likely caused primarily by dust storms.

  16. IMPROVED ERROR ESTIMATES FOR FIRST ORDER SIGMA-DELTA SYSTEMS, C. Sinan Güntürk

    E-print Network

    Güntürk, Sinan

    … There are few mathematical results in the literature; Gray in [3] derives results for the mean square error when … The convention in the engineering literature is that q_n = sign(u_{n-1}), but this system is equivalent to the one above up …

  17. Estimating model and observation error covariance information for land data assimilation systems

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In order to operate efficiently, data assimilation systems require accurate assumptions concerning the statistical magnitude and cross-correlation structure of error in model forecasts and assimilated observations. Such information is seldom available for the operational implementation of land data ...

  18. Error and uncertainty in estimates of Reynolds stress using ADCP in an energetic ocean state

    E-print Network

    Rapo, Mark Andrew.

    2006-01-01

    (cont.) To that end, the space-time correlations of the error, turbulence, and wave processes are developed and then utilized to find the extent to which the environmental and internal processing parameters contribute to ...

  19. Effectiveness of Barcoding for Reducing Patient Specimen and Laboratory Testing Identification Errors: A Laboratory Medicine Best Practices Systematic Review and Meta-Analysis

    PubMed Central

    Snyder, Susan R.; Favoretto, Alessandra M.; Derzon, James H.; Christenson, Robert; Kahn, Stephen; Shaw, Colleen; Baetz, Rich Ann; Mass, Diana; Fantz, Corrine; Raab, Stephen; Tanasijevic, Milenko; Liebow, Edward B.

    2015-01-01

    Objectives This is the first systematic review of the effectiveness of barcoding practices for reducing patient specimen and laboratory testing identification errors. Design and Methods The CDC-funded Laboratory Medicine Best Practices Initiative systematic review methods for quality improvement practices were used. Results A total of 17 observational studies reporting on barcoding systems are included in the body of evidence; 10 for patient specimens and 7 for point-of-care testing. All 17 studies favored barcoding, with meta-analysis mean odds ratios for barcoding systems of 4.39 (95% CI: 3.05 – 6.32) and for point-of-care testing of 5.93 (95% CI: 5.28 – 6.67). Conclusions Barcoding is effective for reducing patient specimen and laboratory testing identification errors in diverse hospital settings and is recommended as an evidence-based “best practice.” The overall strength of evidence rating is high and the effect size rating is substantial. Unpublished studies made an important contribution comprising almost half of the body of evidence. PMID:22750145

  20. Reduction of Systematic Errors in Diagnostic Receivers Through the Use of Balanced Dicke-Switching and Y-Factor Noise Calibrations

    SciTech Connect

    John Musson, Trent Allison, Roger Flood, Jianxun Yan

    2009-05-01

    Receivers designed for diagnostic applications range from those having moderate sensitivity to those possessing large dynamic range. Digital receivers have a dynamic range which is a function of the number of bits represented by the ADC and subsequent processing. If some of this range is sacrificed for extreme sensitivity, noise power can then be used to perform two-point load calibrations. Since load temperatures can be precisely determined, the receiver can be quickly and accurately characterized; minute changes in system gain can then be detected, and systematic errors corrected. In addition, using receiver pairs in a balanced approach to measuring X+, X-, Y+, Y- reduces systematic offset errors arising from non-identical system gains and changes in system performance. This paper describes and demonstrates a balanced BPM-style diagnostic receiver, employing Dicke-switching to establish and maintain real-time system calibration. Benefits of such a receiver include wide bandwidth, solid absolute accuracy, improved position accuracy, and phase-sensitive measurements. System description, static and dynamic modelling, and measurement data are presented.
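
    The two-point load calibration mentioned above follows from standard Y-factor arithmetic: with powers measured on hot and cold loads of known temperature, the receiver noise temperature is T_rx = (T_hot - Y*T_cold)/(Y - 1), where Y = P_hot/P_cold. A minimal sketch with illustrative numbers:

        # Y-factor (two-point load) calibration of a receiver.
        k_B = 1.380649e-23  # Boltzmann constant, J/K

        def y_factor_cal(p_hot, p_cold, t_hot, t_cold, bw_hz):
            """Return (receiver noise temperature [K], linear power gain)."""
            y = p_hot / p_cold
            t_rx = (t_hot - y * t_cold) / (y - 1.0)
            gain = (p_hot - p_cold) / (k_B * bw_hz * (t_hot - t_cold))
            return t_rx, gain

        t_rx, gain = y_factor_cal(2.1e-9, 1.2e-9, t_hot=290.0, t_cold=77.0, bw_hz=10e6)
        print(f"T_rx = {t_rx:.0f} K, gain = {gain:.2e}")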

  1. Reducing Modeling Error of Graphical Methods for Estimating Volume of Distribution Measurements in PIB-PET study

    PubMed Central

    Guo, Hongbin; Renaut, Rosemary A; Chen, Kewei; Reiman, Eric M

    2010-01-01

    Graphical analysis methods are widely used in positron emission tomography quantification because of their simplicity and model independence. But they may, particularly for reversible kinetics, lead to bias in the estimated parameters. The source of the bias is commonly attributed to noise in the data. Assuming a two-tissue compartmental model, we investigate the bias that originates from modeling error. This bias is an intrinsic property of the simplified linear models used for limited scan durations, and it is exaggerated by random noise and numerical quadrature error. Conditions are derived under which Logan's graphical method either over- or under-estimates the distribution volume in the noise-free case. The bias caused by modeling error is quantified analytically. The presented analysis shows that the bias of graphical methods is inversely proportional to the dissociation rate. Furthermore, visual examination of the linearity of the Logan plot is not sufficient for guaranteeing that equilibrium has been reached. A new model which retains the elegant properties of graphical analysis methods is presented, along with a numerical algorithm for its solution. We perform simulations with the fibrillar amyloid-β radioligand [11C] benzothiazole-aniline using published data from the University of Pittsburgh and Rotterdam groups. The results show that the proposed method significantly reduces the bias due to modeling error. Moreover, the results for data acquired over a 70-minute scan duration are at least as good as those obtained using existing methods for data acquired over a 90-minute scan duration. PMID:20493196
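
    For context, Logan graphical analysis estimates the total distribution volume V_T as the late-time slope of ∫C_t dτ / C_t(t) plotted against ∫C_p dτ / C_t(t). A minimal sketch on a synthetic one-tissue compartment, where the true V_T = K1/k2 is known; all names and values are illustrative, not the paper's data.

        import numpy as np

        def logan_vt(t, ct, cp, t_star):
            """Late-time Logan slope; estimates total distribution volume V_T."""
            dt = np.diff(t)
            int_ct = np.concatenate(([0.0], np.cumsum(dt * 0.5 * (ct[1:] + ct[:-1]))))
            int_cp = np.concatenate(([0.0], np.cumsum(dt * 0.5 * (cp[1:] + cp[:-1]))))
            late = (t >= t_star) & (ct > 0)
            slope, _ = np.polyfit(int_cp[late] / ct[late], int_ct[late] / ct[late], 1)
            return slope

        # Synthetic one-tissue test: dCt/dt = K1*Cp - k2*Ct, so V_T = K1/k2.
        t = np.linspace(0.0, 90.0, 901)              # minutes
        cp = t * np.exp(-t / 4.0)                    # toy plasma input function
        K1, k2 = 0.1, 0.05
        ct = np.zeros_like(t)
        for i in range(1, len(t)):
            ct[i] = ct[i-1] + (t[i] - t[i-1]) * (K1 * cp[i-1] - k2 * ct[i-1])
        print(f"Logan V_T estimate: {logan_vt(t, ct, cp, t_star=40.0):.2f} (true {K1/k2:.2f})")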

  2. Systematic Parameter Estimation of a Density-Dependent Groundwater-Flow and Solute-Transport Model

    NASA Astrophysics Data System (ADS)

    Stanko, Z.; Nishikawa, T.; Traum, J. A.

    2013-12-01

    A SEAWAT-based flow and transport model of seawater intrusion was developed for the Santa Barbara groundwater basin in southern California that utilizes dual-domain porosity. Model calibration can be difficult when simulating flow and transport in large-scale hydrologic systems with extensive heterogeneity. To facilitate calibration, the hydrogeologic properties in this model are based on the fraction of coarse and fine-grained sediment interpolated from drillers' logs. This approach prevents over-parameterization by assigning one set of parameters to coarse material and another set to fine material. Estimated parameters include boundary conditions (such as areal recharge and surface-water seepage), hydraulic conductivities, dispersivities, and mass-transfer rate. As a result, the model has 44 parameters that were estimated by using the parameter-estimation software PEST, which uses the Gauss-Marquardt-Levenberg algorithm, along with various features such as singular value decomposition to improve calibration efficiency. The model is calibrated by using 36 years of observed water-level and chloride-concentration measurements, as well as first-order changes in head and concentration. Prior information on hydraulic properties is also provided to PEST as additional observations. The calibration objective is to minimize the squared sum of weighted residuals. In addition, observation sensitivities are investigated to effectively calibrate the model. An iterative parameter-estimation procedure is used to dynamically calibrate steady-state and transient simulation models. The resulting head and concentration states from the steady-state model provide the initial conditions for the transient model. The transient calibration provides updated parameter values for the next steady-state simulation. This process repeats until a reasonable fit is obtained. Preliminary results from the systematic calibration process indicate that tuning PEST by using a set of synthesized observations generated from model output reduces execution times significantly. Parameter sensitivity analyses indicate that both simulated heads and chloride concentrations are sensitive to the ocean boundary conductance parameter. Conversely, simulated heads are sensitive to some parameters, such as specific fault conductances, but chloride concentrations are insensitive to the same parameters. Heads are specifically found to be insensitive to mobile domain texture but sensitive to hydraulic conductivity and specific storage. The chloride concentrations are insensitive to some hydraulic conductivity and fault parameters but sensitive to mass transfer rate and longitudinal dispersivity. Future work includes investigating the effects of parameter and texture characterization uncertainties on seawater intrusion simulations.
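
    The calibration objective mentioned above, the squared sum of weighted residuals, is simple to state in code. A minimal sketch with a synthetic head-observation group standing in for the model's many observation types:

        import numpy as np

        def phi(obs, sim, weights):
            """Sum of squared weighted residuals (the quantity PEST minimizes)."""
            r = weights * (obs - sim)
            return float(r @ r)

        rng = np.random.default_rng(1)
        obs_heads = rng.normal(10.0, 2.0, 50)        # synthetic observed heads
        sim_heads = obs_heads + rng.normal(0.0, 0.3, 50)
        w_heads = np.full(50, 1.0 / 0.3)             # weight ~ 1 / expected error
        print(f"head objective: {phi(obs_heads, sim_heads, w_heads):.1f}")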

  3. Elimination of 'ghost'-effect-related systematic error in metrology of X-ray optics with a long trace profiler

    SciTech Connect

    Yashchuk, Valeriy V.; Irick, Steve C.; MacDowell, Alastair A.

    2005-04-28

    A data acquisition technique and relevant program for suppression of one of the systematic effects, namely the "ghost" effect, of a second generation long trace profiler (LTP) is described. The "ghost" effect arises when there is an unavoidable cross-contamination of the LTP sample and reference signals into one another, leading to a systematic perturbation in the recorded interference patterns and, therefore, a systematic variation of the measured slope trace. Perturbations of about 1-2 µrad have been observed with a cylindrically shaped X-ray mirror. Even stronger "ghost" effects show up in an LTP measurement with a mirror having a toroidal surface figure. The developed technique employs separate measurement of the "ghost"-effect-related interference patterns in the sample and the reference arms and then subtraction of the "ghost" patterns from the sample and the reference interference patterns. The procedure preserves the advantage of simultaneously measuring the sample and reference signals. The effectiveness of the technique is illustrated with LTP metrology of a variety of X-ray mirrors.

  4. Sensitivity analysis of state and hydraulic conductivity estimates obtained using an Ensemble Smoother to hydraulic conductivity mean and variance errors

    NASA Astrophysics Data System (ADS)

    Briseño, J.; Herrera, G. S.

    2012-04-01

    Hydraulic conductivity (K) has considerable spatial variability and, since it is measured indirectly, its estimates have high uncertainty. Estimating aquifer parameters such as K with greater certainty allows numerical models to generate more reliable groundwater-flow and contaminant-concentration predictions. For that reason, producing good K field estimates is very important for groundwater modelers. With the increase in the number of devices that allow measuring hydraulic head (h) in real time, and with more options and technologies for collecting groundwater contaminant concentration (c) samples, methods to estimate aquifer parameters using those kinds of data, in addition to K data, can be very useful. It would be an added benefit to estimate h and c at the same time. The ensemble smoother (ES) was proposed by van Leeuwen and Evensen in 1996 and tested with a two-layer nonlinear quasigeostrophic model of eddy-current interactions in the ocean; the sources of uncertainty considered in the model were initial conditions and measurement errors. The ES is similar to simple kriging in space and time, using an ensemble representation for the space-time error covariance matrix. In 1998, Herrera independently developed a version of this method for space-time optimization of groundwater quality sampling networks. To our knowledge this was the first work in which an ES was used in the groundwater literature. In previous developments Briseño and Herrera extended the ES proposed by Herrera to estimate the logarithm of hydraulic conductivity (lnK), together with hydraulic head (h) and contaminant concentration (c), and illustrated its application in a synthetic example. The method has three steps: 1) Given the mean and the semivariogram of lnK, random realizations of this parameter are obtained through Latin Hypercube Sampling; 2) The stochastic model is used to produce hydraulic head (h) and contaminant concentration (c) realizations for each one of the conductivity realizations; with these realizations the space-time cross-covariance matrices among lnK, h, and c are obtained; 3) Finally, the lnK, h and c estimates are obtained using the ES. Since the parameters of the lnK semivariogram are usually not known perfectly, the main objective of this work is to analyze the sensitivity of these estimates when two of those parameters, the mean and variance of lnK, have errors. Case studies were established to estimate lnK, h and c using different data sets that include h and/or c measurements. The results indicate that the sensitivity of the ES estimates for lnK, h and c using h and c data is small. Keywords: Parameter estimation, groundwater transport models, Ensemble Smoother, stochastic models.
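
    A minimal sketch of the step-3 update, assuming the ensemble smoother is applied in its standard Kalman-type form: the joint (lnK, h) ensemble is shifted using the cross covariance between states and predicted observations. The toy model linking h to lnK is illustrative only.

        import numpy as np

        def ensemble_smoother_update(X, Y, y_obs, R):
            """X: states (n_state x n_ens); Y: predicted obs (n_obs x n_ens)."""
            n_ens = X.shape[1]
            Xp = X - X.mean(axis=1, keepdims=True)      # state anomalies
            Yp = Y - Y.mean(axis=1, keepdims=True)      # predicted-obs anomalies
            C_xy = Xp @ Yp.T / (n_ens - 1)              # state-obs cross covariance
            C_yy = Yp @ Yp.T / (n_ens - 1)              # predicted-obs covariance
            K = C_xy @ np.linalg.inv(C_yy + R)          # Kalman-type gain
            return X + K @ (y_obs[:, None] - Y)

        rng = np.random.default_rng(6)
        lnK = rng.normal(0.0, 1.0, 500)                 # prior lnK realizations
        h = 2.0 * lnK + rng.normal(0.0, 0.1, 500)       # toy "flow model" for h
        X = np.vstack([lnK, h])                         # joint ensemble
        Xa = ensemble_smoother_update(X, X[1:2], np.array([1.5]), np.array([[0.04]]))
        print(f"lnK mean: prior {X[0].mean():+.2f} -> posterior {Xa[0].mean():+.2f}")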

  5. An experimental study on estimating human error probability (HEP) parameters for PSA/HRA by using human model simulation.

    PubMed

    Yoshikawa, H; Wu, W

    1999-11-01

    A framework of Human Error Probability (HEP) parameters, which is needed for Human Reliability Analysis (HRA) within the practice of Probabilistic Safety Assessment (PSA) of nuclear power plants, is first proposed. Then a laboratory experiment was conducted in order to construct a computer simulation model (human model) that describes human cognitive behaviour in detecting and diagnosing the causes of plant anomalies. An inter-comparison between experimental data and human model simulation was performed to estimate Human Cognitive Reliability (HCR) curves, in order to confirm the applicability of a human model for estimating these HEP parameters in PSA/HRA practice. PMID:10582040

  6. A learning-based wrapper method to correct systematic errors in automatic image segmentation: consistently improved performance in hippocampus, cortex and brain segmentation.

    PubMed

    Wang, Hongzhi; Das, Sandhitsu R; Suh, Jung Wook; Altinay, Murat; Pluta, John; Craige, Caryne; Avants, Brian; Yushkevich, Paul A

    2011-04-01

    We propose a simple but generally applicable approach to improving the accuracy of automatic image segmentation algorithms relative to manual segmentations. The approach is based on the hypothesis that a large fraction of the errors produced by automatic segmentation are systematic, i.e., occur consistently from subject to subject, and serves as a wrapper method around a given host segmentation method. The wrapper method attempts to learn the intensity, spatial and contextual patterns associated with systematic segmentation errors produced by the host method on training data for which manual segmentations are available. The method then attempts to correct such errors in segmentations produced by the host method on new images. One practical use of the proposed wrapper method is to adapt existing segmentation tools, without explicit modification, to imaging data and segmentation protocols that are different from those on which the tools were trained and tuned. An open-source implementation of the proposed wrapper method is provided, and can be applied to a wide range of image segmentation problems. The wrapper method is evaluated with four host brain MRI segmentation methods: hippocampus segmentation using FreeSurfer (Fischl et al., 2002); hippocampus segmentation using multi-atlas label fusion (Artaechevarria et al., 2009); brain extraction using BET (Smith, 2002); and brain tissue segmentation using FAST (Zhang et al., 2001). The wrapper method generates 72%, 14%, 29% and 21% fewer erroneously segmented voxels than the respective host segmentation methods. In the hippocampus segmentation experiment with multi-atlas label fusion as the host method, the average Dice overlap between reference segmentations and segmentations produced by the wrapper method is 0.908 for normal controls and 0.893 for patients with mild cognitive impairment. Average Dice overlaps of 0.964, 0.905 and 0.951 are obtained for brain extraction, white matter segmentation and gray matter segmentation, respectively. PMID:21237273
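
    A schematic sketch of the wrapper idea: learn a map from (image features + host labels) to the manual labels, then apply it to relabel voxels the host gets systematically wrong. A generic random forest stands in for the paper's actual learner and feature set, the data are synthetic, and training and evaluation are in-sample for brevity.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(2)
        feats = rng.normal(size=(5000, 4))              # e.g. intensity + x, y, z
        host = (feats[:, 0] > 0).astype(int)            # host segmentation labels
        manual = ((feats[:, 0] + 0.3 * feats[:, 1]) > 0).astype(int)  # "truth"

        X = np.column_stack([feats, host])              # features + host label
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, manual)
        corrected = clf.predict(X)                      # wrapper output
        print("host error:     ", float(np.mean(host != manual)))
        print("corrected error:", float(np.mean(corrected != manual)))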

  7. Real-Time PPP Based on the Coupling Estimation of Clock Bias and Orbit Error with Broadcast Ephemeris

    PubMed Central

    Pan, Shuguo; Chen, Weirong; Jin, Xiaodong; Shi, Xiaofei; He, Fan

    2015-01-01

    Satellite orbit error and clock bias are the keys to precise point positioning (PPP). The traditional PPP algorithm requires precise satellite products based on worldwide permanent reference stations. Such an algorithm requires considerable work and hardly achieves real-time performance. However, real-time positioning service will be the dominant mode in the future. IGS is providing such an operational service (RTS) and there are also commercial systems like Trimble RTX in operation. On the basis of the regional Continuous Operational Reference System (CORS), a real-time PPP algorithm is proposed to apply the coupling estimation of clock bias and orbit error. The projection of orbit error onto the satellite-receiver range has the same effects on positioning accuracy with clock bias. Therefore, in satellite clock estimation, part of the orbit error can be absorbed by the clock bias and the effects of residual orbit error on positioning accuracy can be weakened by the evenly distributed satellite geometry. In consideration of the simple structure of pseudorange equations and the high precision of carrier-phase equations, the clock bias estimation method coupled with orbit error is also improved. Rovers obtain PPP results by receiving broadcast ephemeris and real-time satellite clock bias coupled with orbit error. By applying the proposed algorithm, the precise orbit products provided by GNSS analysis centers are rendered no longer necessary. On the basis of previous theoretical analysis, a real-time PPP system was developed. Some experiments were then designed to verify this algorithm. Experimental results show that the newly proposed approach performs better than the traditional PPP based on International GNSS Service (IGS) real-time products. The positioning accuracies of the rovers inside and outside the network are improved by 38.8% and 36.1%, respectively. The PPP convergence speeds are improved by up to 61.4% and 65.9%. The new approach can change the traditional PPP mode because of its advantages of independence, high positioning precision, and real-time performance. It could be an alternative solution for regional positioning service before global PPP service comes into operation. PMID:26205276
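
    The geometric point at the heart of the abstract, that the line-of-sight component of an orbit error is indistinguishable from a clock bias of the same range equivalent, takes one dot product to state. A minimal sketch with illustrative coordinates:

        import numpy as np

        C = 299792458.0                                 # speed of light, m/s
        sat = np.array([15600e3, 7540e3, 20140e3])      # satellite ECEF, m
        rcv = np.array([6378e3, 0.0, 0.0])              # receiver ECEF, m
        orb_err = np.array([0.05, -0.02, 0.03])         # orbit error vector, m

        los = (sat - rcv) / np.linalg.norm(sat - rcv)   # unit line of sight
        rng_err = float(los @ orb_err)                  # apparent range error, m
        print(f"LOS range error {rng_err*100:.1f} cm "
              f"= clock bias of {rng_err/C*1e9:.3f} ns")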

  9. Estimating numerical errors due to operator splitting in global atmospheric chemistry models: Transport and chemistry

    NASA Astrophysics Data System (ADS)

    Santillana, Mauricio; Zhang, Lin; Yantosca, Robert

    2016-01-01

    We present upper bounds for the numerical errors introduced when using operator splitting methods to integrate transport and non-linear chemistry processes in global chemical transport models (CTM). We show that (a) operator splitting strategies that evaluate the stiff non-linear chemistry operator at the end of the time step are more accurate, and (b) the results of numerical simulations that use different operator splitting strategies differ by at most 10%, in a prototype one-dimensional non-linear chemistry-transport model. We find similar upper bounds in operator splitting numerical errors in global CTM simulations.
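
    A toy sketch of the ordering question, using sequential (Lie) splitting on a scalar equation with a linear "transport" term and a stiff nonlinear "chemistry" term. It shows only that the two orderings give different answers relative to an unsplit reference; the paper's accuracy ranking concerns full CTM simulations, and all values here are illustrative.

        import numpy as np

        a, k, dt, u0 = 1.0, 50.0, 0.05, 1.0

        def transport(u, h):                  # exact solve of du/dt = -a*u
            return u * np.exp(-a * h)

        def chemistry(u, h):                  # exact solve of du/dt = -k*u**2
            return u / (1.0 + k * h * u)

        u_cl = chemistry(transport(u0, dt), dt)   # chemistry evaluated last
        u_cf = transport(chemistry(u0, dt), dt)   # chemistry evaluated first

        u_ref, n = u0, 100000                     # unsplit reference, tiny steps
        for _ in range(n):
            u_ref += (dt / n) * (-a * u_ref - k * u_ref**2)

        print(f"chemistry-last  error: {abs(u_cl - u_ref):.2e}")
        print(f"chemistry-first error: {abs(u_cf - u_ref):.2e}")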

  10. Error analysis

    SciTech Connect

    Gardner, R.H.; O'Neill, R.V.

    1981-01-01

    Error analysis is the systematic determination of uncertainties in model predictions due to all possible sources of variability. The objective of these studies in error analysis has been to investigate phenomena associated with prediction uncertainty over as broad a range of ecosystem models as possible, in order to: (1) develop guidelines that will permit the design of experiments and models which minimize prediction error; and (2) develop and test error analysis methodologies and make these available to ecosystem modelers and researchers. The approach to the study of model error has been inductive: a Monte Carlo simulation approach has been applied to a variety of individual models, looking for general patterns that would be applicable across a broad range of ecological models. The purpose of this paper is to review current progress in error analysis of ecological models.
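
    A minimal sketch of the Monte Carlo approach described above: sample uncertain parameters, run the model for each sample, and summarize the spread of predictions. The logistic model and distributions are illustrative stand-ins for an ecosystem model.

        import numpy as np

        rng = np.random.default_rng(3)

        def model(r, K, t=10.0, x0=1.0):
            """Toy logistic growth standing in for an ecosystem model."""
            return K / (1.0 + (K / x0 - 1.0) * np.exp(-r * t))

        r = rng.normal(0.5, 0.05, 10000)        # uncertain growth rate
        K = rng.normal(100.0, 10.0, 10000)      # uncertain carrying capacity
        pred = model(r, K)
        lo, hi = np.percentile(pred, [2.5, 97.5])
        print(f"prediction: mean {pred.mean():.1f}, std {pred.std():.1f}, "
              f"95% interval [{lo:.1f}, {hi:.1f}]")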

  11. Impacts of real-time satellite clock errors on GPS precise point positioning-based troposphere zenith delay estimation

    NASA Astrophysics Data System (ADS)

    Shi, Junbo; Xu, Chaoqian; Li, Yihe; Gao, Yang

    2015-08-01

    Global Positioning System (GPS) has become a cost-effective tool to determine troposphere zenith total delay (ZTD) with accuracy comparable to other atmospheric sensors such as the radiosonde, the water vapor radiometer, the radio occultation and so on. However, the high accuracy of GPS troposphere ZTD estimates relies on the precise satellite orbit and clock products available with various latencies. Although the International GNSS Service (IGS) can provide predicted orbit and clock products for real-time applications, the predicted clock accuracy of 3 ns cannot always guarantee the high accuracy of troposphere ZTD estimates. Such limitations could be overcome by the use of the newly launched IGS real-time service which provides 5 cm orbit and 0.2-1.0 ns (an equivalent range error of 6-30 cm) clock products in real time. Considering the relatively larger magnitude of the clock error than that of the orbit error, this paper investigates the effect of real-time satellite clock errors on the GPS precise point positioning (PPP)-based troposphere ZTD estimation. Meanwhile, how the real-time satellite clock errors impact the GPS PPP-based troposphere ZTD estimation has also been studied to obtain the most precise ZTD solutions. First, two types of real-time satellite clock products are assessed with respect to the IGS final clock product in terms of accuracy and precision. Second, the real-time GPS PPP-based troposphere ZTD estimation is conducted using data from 34 selected IGS stations over three independent weeks in April, July and October, 2013. Numerical results demonstrate that the precision, rather than the accuracy, of the real-time satellite clock products impacts the real-time PPP-based ZTD solutions more significantly. In other words, the real-time satellite clock product with better precision leads to more precise real-time PPP-based troposphere ZTD solutions. Therefore, it is suggested that users should select and apply real-time satellite products with better clock precision to obtain more consistent real-time PPP-based ZTD solutions.

  12. Statistical error in a chord estimator of correlation dimension: The "rule of five"

    SciTech Connect

    Theiler, J.; Lookman, T. (Dept. of Applied Mathematics)

    1992-09-01

    The statistical precision of a chord method for estimating dimension from a correlation integral is derived. The optimal chord length is determined, and a comparison is made to other estimators. The simple chord estimator is only 25% less precise than the optimal estimator which uses the full resolution and full range of the correlation integral. The analytic calculations are based on the hypothesis that all pairwise distances between the points in the embedding space are statistically independent. The adequacy of this approximation is assessed numerically, and a surprising result is observed in which dimension estimators can be anomalously precise for sets with reasonably uniform (nonfractal) distributions.
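
    A minimal sketch of a chord estimate: the slope of log C(r) between two radii, where C(r) is the fraction of point pairs closer than r. The factor-of-five span between the endpoints echoes the title but is just an illustrative choice here; edge effects bias the estimate slightly low.

        import numpy as np
        from scipy.spatial.distance import pdist

        rng = np.random.default_rng(4)
        pts = rng.uniform(size=(2000, 2))       # uniform 2-D set; dimension 2
        d = pdist(pts)                          # all pairwise distances

        def corr_integral(r):                   # fraction of pairs within r
            return np.mean(d < r)

        r1, r2 = 0.05, 0.25
        dim = np.log(corr_integral(r2) / corr_integral(r1)) / np.log(r2 / r1)
        print(f"chord dimension estimate: {dim:.2f}")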

  13. Error estimates on the approximate finite volume solution of convection diffusion equations

    E-print Network

    Vignal, Marie-Hélène

    We study here the convergence of a finite volume scheme for a convection-diffusion equation on an open set ... using for the diffusion term an "s points" finite volume scheme (where s is the number of sides of each cell) and for the convection term an upstream finite volume scheme. Assuming the exact solution is at least in H², we prove error ...

  14. On a-posteriori pointwise error estimation using adjoint temperature and Lagrange ... (A.K. Alekseev)

    E-print Network

    Information on the local error influence was used for grid refining. The Richardson extrapolation [21,29,30] is the most ... the convergence rate ([35]). A correct use of Richardson extrapolation requires a set of grids ...

  15. NETRA: Interactive Display for Estimating Refractive Errors and Focal Range (Vitor F. Pamplona)

    E-print Network

    Oliveira, Manuel M.

    ... to create an effective, low-cost interface sensitive to refractive parameters of the human eye. We create ... that is extremely sensitive to parameters of the human eye, like refractive errors, focal range, and focusing speed. We pre-warp the position and angle of ray-beams from this display to counteract the effect of the eye lens ...

  16. Positional accommodative intraocular lens power error induced by the estimation of the corneal power and the effective lens position

    PubMed Central

    Piñero, David P; Camps, Vicente J; Ramón, María L; Mateo, Verónica; Pérez-Cambrodí, Rafael J

    2015-01-01

    Purpose: To evaluate the predictability of the refractive correction achieved with a positional accommodating intraocular lens (IOL) and to develop a potential optimization of it by minimizing the error associated with the keratometric estimation of the corneal power and by developing a predictive formula for the effective lens position (ELP). Materials and Methods: Clinical data from 25 eyes of 14 patients (age range, 52–77 years) undergoing cataract surgery with implantation of the accommodating IOL Crystalens HD (Bausch and Lomb) were retrospectively reviewed. In all cases, the calculation of an adjusted IOL power (PIOLadj) based on Gaussian optics considering the residual refractive error was done using a variable keratometric index value (nkadj) for corneal power estimation, with and without an estimation algorithm for ELP obtained by multiple regression analysis (ELPadj). PIOLadj was compared to the real IOL power implanted (PIOLReal, calculated with the SRK-T formula) and also to the values estimated by the Haigis, HofferQ, and Holladay I formulas. Results: No statistically significant differences were found between PIOLReal and PIOLadj when ELPadj was used (P = 0.10), with a range of agreement between calculations of 1.23 D. In contrast, PIOLReal was significantly higher when compared to PIOLadj without using ELPadj and also compared to the values estimated by the other formulas. Conclusions: Predictable refractive outcomes can be obtained with the accommodating IOL Crystalens HD using a variable keratometric index for corneal power estimation and by estimating ELP with an algorithm dependent on anatomical factors and age. PMID:26139807
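
    The Gaussian-optics calculation referred to above reduces, in its simplest thin-lens form, to a vergence formula in corneal power K, axial length AL, and effective lens position ELP. A minimal sketch with illustrative values (the paper's adjusted keratometric index and ELP regression are not reproduced here):

        n = 1.336                    # aqueous/vitreous refractive index

        def iol_power(K, AL_mm, ELP_mm):
            """Thin-lens IOL power (diopters) for a distant object."""
            AL, ELP = AL_mm / 1000.0, ELP_mm / 1000.0
            return n / (AL - ELP) - n / (n / K - ELP)

        # e.g. K = 43.5 D, AL = 23.6 mm, ELP = 5.0 mm (illustrative only):
        print(f"IOL power: {iol_power(43.5, 23.6, 5.0):.2f} D")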

  17. Dementia risk estimates associated with measures of depression: a systematic review and meta-analysis

    PubMed Central

    Anstey, Kaarin J

    2015-01-01

    Objectives To perform a systematic review of reported HRs of all-cause dementia, Alzheimer's disease (AD) and vascular dementia (VaD) for late-life depression and depressive symptomatology on specific screening instruments at specific thresholds. Design Meta-analysis with meta-regression. Setting and participants PubMed, PsycInfo, and Cochrane databases were searched through 28 February 2014. Articles reporting HRs for incident all-cause dementia, AD and VaD based on published clinical criteria using validated measures of clinical depression or symptomatology from prospective studies of general populations of adults were selected by consensus among multiple reviewers. Studies that did not use clinical dementia diagnoses or validated instruments for the assessment of depression were excluded. Data were extracted by two reviewers and reviewed by two other independent reviewers. The most specific analyses possible using continuous symptomatology ratings and categorical measures of clinical depression focusing on single instruments with defined reported cut-offs were conducted. Primary outcome measures HRs for all-cause dementia, AD, and VaD were computed where possible for continuous depression scores, or for major depression assessed with single or comparable validated instruments. Results Searches yielded 121,301 articles, of which 36 (0.03%) were eligible. Included studies provided a combined sample size of 66,532 individuals, including 6,593 cases of dementia, 2,797 cases of AD and 585 cases of VaD. The increased risk associated with depression did not significantly differ by type of dementia and ranged from 83% to 104% for diagnostic thresholds consistent with major depression. Risks associated with continuous depression symptomatology measures were consistent with those for clinical thresholds. Conclusions Late-life depression is consistently and similarly associated with a twofold increased risk of dementia. The precise risk estimates produced in this study for specific instruments at specified thresholds will assist evidence-based medicine and inform policy on this important population health issue. PMID:26692556

  18. Analysis and mitigation of systematic errors in spectral shearing interferometry of pulses approaching the single-cycle limit [Invited

    SciTech Connect

    Birge, Jonathan R.; Kaertner, Franz X.

    2008-06-15

    We derive an analytical approximation for the measured pulse width error in spectral shearing methods, such as spectral phase interferometry for direct electric-field reconstruction (SPIDER), caused by an anomalous delay between the two sheared pulse components. This analysis suggests that, as pulses approach the single-cycle limit, the resulting requirements on the calibration and stability of this delay become significant, requiring precision orders of magnitude finer than the scale of a wavelength. This is demonstrated by numerical simulations of SPIDER pulse reconstruction using actual data from a sub-two-cycle laser. We briefly propose methods to minimize the effects of this sensitivity in SPIDER and review variants of spectral shearing that attempt to avoid this difficulty.

  19. Satellite Sampling and Retrieval Errors in Regional Monthly Rain Estimates from TMI AMSR-E, SSM/I, AMSU-B and the TRMM PR

    NASA Technical Reports Server (NTRS)

    Fisher, Brad; Wolff, David B.

    2010-01-01

    Passive and active microwave rain sensors onboard earth-orbiting satellites estimate monthly rainfall from the instantaneous rain statistics collected during satellite overpasses. It is well known that climate-scale rain estimates from meteorological satellites incur sampling errors resulting from the process of discrete temporal sampling and statistical averaging. Sampling and retrieval errors ultimately become entangled in the estimation of the mean monthly rain rate. The sampling component of the error budget effectively introduces statistical noise into climate-scale rain estimates that obscures the error component associated with the instantaneous rain retrieval. Estimating the accuracy of the retrievals on monthly scales therefore necessitates a decomposition of the total error budget into sampling and retrieval error quantities. This paper presents results from a statistical evaluation of the sampling and retrieval errors for five different space-borne rain sensors on board nine orbiting satellites. Using an error decomposition methodology developed by one of the authors, sampling and retrieval errors were estimated at 0.25° resolution within 150 km of ground-based weather radars located at Kwajalein, Marshall Islands, and Melbourne, Florida. Error and bias statistics were calculated according to the land, ocean and coast classifications of the surface terrain mask developed for the Goddard Profiling (GPROF) rain algorithm. Variations in the comparative error statistics are attributed to various factors related to differences in the swath geometry of each rain sensor, the orbital and instrument characteristics of the satellite and the regional climatology. The most significant result from this study found that each of the satellites incurred negative long-term oceanic retrieval biases of 10 to 30%.

  20. Harmonic inpainting of the cosmic microwave background sky: Formulation and error estimate

    SciTech Connect

    Inoue, Kaiki Taro; Cabella, Paolo; Komatsu, Eiichiro

    2008-06-15

    We develop a new interpolation scheme, based on harmonic inpainting, for reconstructing the cosmic microwave background temperature data within the Galaxy mask from the data outside the mask. We find that, for scale-invariant isotropic random Gaussian fluctuations, the developed algorithm reduces the errors in the reconstructed map for the odd-parity modes significantly for azimuthally symmetric masks with constant galactic latitudes. For a more realistic Galaxy mask, we find a modest improvement in the even-parity modes as well.
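
    A one-dimensional analogue of the inpainting idea, assuming a band-limited signal and alternating projections: re-impose the known data outside the mask, then the band limit in harmonic (Fourier) space. The actual algorithm works on the sphere with a Galaxy mask; everything here is a toy, and the residual shrinks as iterations proceed.

        import numpy as np

        rng = np.random.default_rng(5)
        n, lmax = 256, 20
        c_true = np.zeros(n, complex)
        c_true[1:lmax] = rng.normal(size=lmax-1) + 1j * rng.normal(size=lmax-1)
        true = np.fft.ifft(c_true).real * n            # band-limited "sky"

        known = np.ones(n, bool)
        known[100:130] = False                         # masked "Galaxy" strip
        x = np.where(known, true, 0.0)
        for _ in range(200):                           # alternating projections
            c = np.fft.fft(x)
            c[lmax:n - lmax + 1] = 0.0                 # enforce the band limit
            x = np.fft.ifft(c).real
            x[known] = true[known]                     # re-impose known data
        err = np.max(np.abs(x - true)[~known])
        print(f"max residual inside the mask: {err:.3e}")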