Science.gov

Sample records for account measurement errors

  1. Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.

    PubMed

    Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F

    2001-01-01

    When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.
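
    As a rough illustration of the regression-calibration step described above, the sketch below (Python, with synthetic data standing in for the replicate log(total REM) measurements) estimates the error variance from replicates and replaces the observed covariate with its calibrated expectation; the calibrated values could then be passed to any Cox model fitter. This is a minimal sketch of the general technique under a classical additive error model, not the authors' implementation.

      import numpy as np

      rng = np.random.default_rng(0)

      # Classical additive error model: W_ij = X_i + U_ij, j = 1..k replicates.
      # Synthetic stand-in data; in the study, W would be replicate log(total REM).
      n, k = 200, 2
      x = rng.normal(0.0, 1.0, n)                    # unobserved true covariate
      w = x[:, None] + rng.normal(0.0, 0.7, (n, k))  # error-prone replicates

      wbar = w.mean(axis=1)
      s2_u = ((w - wbar[:, None]) ** 2).sum() / (n * (k - 1))  # error variance
      s2_x = wbar.var(ddof=1) - s2_u / k             # implied true-score variance

      # Regression calibration: replace the observed mean with E[X | W-bar].
      lam = s2_x / (s2_x + s2_u / k)                 # reliability of the mean
      x_hat = wbar.mean() + lam * (wbar - wbar.mean())
      # x_hat now plays the role of the covariate in the Cox model fit.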

  2. Accounting for baseline differences and measurement error in the analysis of change over time.

    PubMed

    Braun, Julia; Held, Leonhard; Ledergerber, Bruno

    2014-01-15

    If change over time is compared in several groups, it is important to take into account baseline values so that the comparison is carried out under the same preconditions. As the observed baseline measurements are distorted by measurement error, it may not be sufficient to include them as a covariate. By fitting a longitudinal mixed-effects model to all data including the baseline observations and subsequently calculating the expected change conditional on the underlying baseline value, a solution to this problem has been provided recently so that groups with the same baseline characteristics can be compared. In this article, we present an extended approach where a broader set of models can be used. Specifically, it is possible to include any desired set of interactions between the time variable and the other covariates, and also, time-dependent covariates can be included. Additionally, we extend the method to adjust for baseline measurement error of other time-varying covariates. We apply the methodology to data from the Swiss HIV Cohort Study to address the question of whether a joint infection with HIV-1 and hepatitis C virus leads to a slower increase in CD4 lymphocyte counts over time after the start of antiretroviral therapy.

  3. Estimating the acute health effects of coarse particulate matter accounting for exposure measurement error.

    PubMed

    Chang, Howard H; Peng, Roger D; Dominici, Francesca

    2011-10-01

    In air pollution epidemiology, there is a growing interest in estimating the health effects of coarse particulate matter (PM) with aerodynamic diameter between 2.5 and 10 μm. Coarse PM concentrations can exhibit considerable spatial heterogeneity because the particles travel shorter distances and do not remain suspended in the atmosphere for an extended period of time. In this paper, we develop a modeling approach for estimating the short-term effects of air pollution in time series analysis when the ambient concentrations vary spatially within the study region. Specifically, our approach quantifies the error in the exposure variable by characterizing, on any given day, the disagreement in ambient concentrations measured across monitoring stations. This is accomplished by viewing monitor-level measurements as error-prone repeated measurements of the unobserved population average exposure. Inference is carried out in a Bayesian framework to fully account for uncertainty in the estimation of model parameters. Finally, by using different exposure indicators, we investigate the sensitivity of the association between coarse PM and daily hospital admissions based on a recent national multisite time series analysis. Among Medicare enrollees from 59 US counties during the period 1999-2005, we find a consistent positive association between coarse PM and same-day admission for cardiovascular diseases.

  4. Accounting for measurement error in biomarker data and misclassification of subtypes in the analysis of tumor data.

    PubMed

    Nevo, Daniel; Zucker, David M; Tamimi, Rulla M; Wang, Molin

    2016-12-30

    A common paradigm in dealing with heterogeneity across tumors in cancer analysis is to cluster the tumors into subtypes using marker data on the tumor, and then to analyze each of the clusters separately. A more specific target is to investigate the association between risk factors and specific subtypes and to use the results for personalized preventive treatment. This task is usually carried out in two steps: clustering and risk factor assessment. However, two sources of measurement error arise in these problems. The first is the measurement error in the biomarker values. The second is the misclassification error when assigning observations to clusters. We consider the case with a specified set of relevant markers and propose a unified single-likelihood approach for normally distributed biomarkers. As an alternative, we consider a two-step procedure with the tumor type misclassification error taken into account in the second-step risk factor analysis. We describe our method for binary data and also for survival analysis data using a modified version of the Cox model. We present asymptotic theory for the proposed estimators. Simulation results indicate that our methods significantly lower the bias with a small price being paid in terms of variance. We present an analysis of breast cancer data from the Nurses' Health Study to demonstrate the utility of our method. Copyright © 2016 John Wiley & Sons, Ltd.

  5. Investigation of the effects of correlated measurement errors in time series analysis techniques applied to nuclear material accountancy data. [Program COVAR]

    SciTech Connect

    Pike, D.H.; Morrison, G.W.; Downing, D.J.

    1982-04-01

    It has been shown in previous work that the Kalman Filter and Linear Smoother produce optimal estimates of inventory and loss from a material balance area. The Kalman Filter/Linear Smoother approach assumes no correlation between inventory measurement errors and does not allow for serial correlation in these measurement errors. The purpose of this report is to extend the previous results by relaxing these assumptions to allow for correlation of measurement errors. The results show how to account for correlated measurement errors in the linear system model of the Kalman Filter/Linear Smoother. An algorithm is also included for calculating the required error covariance matrices.
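
    The report's COVAR program is not reproduced here, but the standard device for handling serially correlated measurement errors in a Kalman filter is to augment the state vector with an AR(1) error term. A hedged numpy sketch of that device follows; all dynamics and noise values are illustrative, not taken from the report.

      import numpy as np

      rho = 0.8                   # serial correlation of the measurement error
      F = np.array([[1.0, 0.0],   # inventory modeled as a random walk here
                    [0.0, rho]])  # AR(1) measurement-error state
      Q = np.diag([0.10, 0.05])   # process noise for each state component
      H = np.array([[1.0, 1.0]])  # observation = inventory + correlated error
      R = np.array([[1e-6]])      # remaining white observation noise

      def filter_inventory(y, x0, P0):
          """Kalman filter with the correlated error folded into the state."""
          x, P, out = x0, P0, []
          for yt in y:
              x, P = F @ x, F @ P @ F.T + Q            # predict
              S = H @ P @ H.T + R
              K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
              x = x + (K @ (np.atleast_1d(yt) - H @ x)).ravel()
              P = (np.eye(2) - K @ H) @ P              # update
              out.append(x[0])                         # inventory estimate
          return np.array(out)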

  6. 40 CFR 96.356 - Account error.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Season Allowance Tracking System § 96.356 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance...

  7. Measurement Error. For Good Measure....

    ERIC Educational Resources Information Center

    Johnson, Stephen; Dulaney, Chuck; Banks, Karen

    No test, however well designed, can measure a student's true achievement because numerous factors interfere with the ability to measure achievement. These factors are sources of measurement error, and the goal in creating tests is to have as little measurement error as possible. Error can result from the test design, factors related to individual…

  8. 40 CFR 96.56 - Account error.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Account error. 96.56 Section 96.56 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NOX BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS FOR STATE IMPLEMENTATION PLANS NOX...

  9. 40 CFR 96.156 - Account error.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Account error. 96.156 Section 96.156 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NOX BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS FOR STATE IMPLEMENTATION PLANS CAIR NOX...

  10. 40 CFR 96.56 - Account error.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Account error. 96.56 Section 96.56 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NOX BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS FOR STATE IMPLEMENTATION PLANS NOX...

  11. 40 CFR 96.156 - Account error.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Account error. 96.156 Section 96.156 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NOX BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS FOR STATE IMPLEMENTATION PLANS CAIR NOX...

  12. Navy Stock Account (NSA) Material Expenditure Errors

    DTIC Science & Technology

    1990-12-01

    AD-A241 855. Naval Postgraduate School, Monterey, California. Thesis: Navy Stock Account (NSA) Material Expenditure Errors. Personal author: Magsombol, Anacleto M. Subject terms: Expenditures, Navy Stock Account (NSA), Reconciliation Process.

  13. Surface temperature measurement errors

    SciTech Connect

    Keltner, N.R.; Beck, J.V.

    1983-05-01

    Mathematical models are developed for the response of surface mounted thermocouples on a thick wall. These models account for the significant causes of errors in both the transient and steady-state response to changes in the wall temperature. In many cases, closed form analytical expressions are given for the response. The cases for which analytical expressions are not obtained can be easily evaluated on a programmable calculator or a small computer.

  14. Measuring Test Measurement Error: A General Approach

    ERIC Educational Resources Information Center

    Boyd, Donald; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James

    2013-01-01

    Test-based accountability as well as value-added assessments and much experimental and quasi-experimental research in education rely on achievement tests to measure student skills and knowledge. Yet, we know little regarding fundamental properties of these tests, an important example being the extent of measurement error and its implications for…

  15. Effects of past and recent blood pressure and cholesterol level on coronary heart disease and stroke mortality, accounting for measurement error.

    PubMed

    Boshuizen, Hendriek C; Lanti, Mariapaola; Menotti, Alessandro; Moschandreas, Joanna; Tolonen, Hanna; Nissinen, Aulikki; Nedeljkovic, Srecko; Kafatos, Anthony; Kromhout, Daan

    2007-02-15

    The authors aimed to quantify the effects of current systolic blood pressure (SBP) and serum total cholesterol on the risk of mortality in comparison with SBP or serum cholesterol 25 years previously, taking measurement error into account. The authors reanalyzed 35-year follow-up data on mortality due to coronary heart disease and stroke among subjects aged 65 years or more from nine cohorts of the Seven Countries Study. The two-step method of Tsiatis et al. (J Am Stat Assoc 1995;90:27-37) was used to adjust for regression dilution bias, and results were compared with those obtained using more commonly applied methods of adjustment for regression dilution bias. It was found that the commonly used univariate adjustment for regression dilution bias overestimates the effects of both SBP and cholesterol compared with multivariate methods. Also, the two-step method makes better use of the information available, resulting in smaller confidence intervals. Results comparing recent and past exposure indicated that past SBP is more important than recent SBP in terms of its effect on coronary heart disease mortality, while both recent and past values seem to be important for effects of cholesterol on coronary heart disease mortality and effects of SBP on stroke mortality. Associations between serum cholesterol concentration and risk of stroke mortality are weak.

  16. 40 CFR 96.356 - Account error.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS FOR STATE IMPLEMENTATION PLANS CAIR NOX Ozone Season... on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking...

  17. 40 CFR 96.356 - Account error.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS FOR STATE IMPLEMENTATION PLANS CAIR NOX Ozone Season... on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking...

  18. 40 CFR 96.356 - Account error.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS FOR STATE IMPLEMENTATION PLANS CAIR NOX Ozone Season... on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking...

  19. Accounting for correlated errors in inverse radiation transport problems.

    SciTech Connect

    Mattingly, John K.; Stork, Christopher Lyle; Thomas, Edward Victor

    2010-11-01

    Inverse radiation transport focuses on identifying the configuration of an unknown radiation source given its observed radiation signatures. The inverse problem is solved by finding the set of transport model variables that minimizes a weighted sum of the squared differences by channel between the observed signature and the signature predicted by the hypothesized model parameters. The weights per channel are inversely proportional to the sum of the variances of the measurement and model errors at a given channel. In the current treatment, the implicit assumption is that the errors (differences between the modeled and observed radiation signatures) are independent across channels. In this paper, an alternative method that accounts for correlated errors between channels is described and illustrated for inverse problems based on gamma spectroscopy.
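
    The move from independent to correlated channel errors amounts to replacing per-channel weights with a full error covariance matrix in the objective. The generic sketch below (not the authors' code; predict is a hypothetical forward model) evaluates that objective with Cholesky whitening.

      import numpy as np

      def correlated_chi2(theta, observed, predict, C):
          """Weighted objective with a full error covariance C across channels."""
          r = observed - predict(theta)    # residuals by channel
          L = np.linalg.cholesky(C)        # C = L L^T
          z = np.linalg.solve(L, r)        # whitened residuals
          return z @ z                     # equals r^T C^{-1} r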

  20. Accounting for Berkson and Classical Measurement Error in Radon Exposure Using a Bayesian Structural Approach in the Analysis of Lung Cancer Mortality in the French Cohort of Uranium Miners.

    PubMed

    Hoffmann, Sabine; Rage, Estelle; Laurier, Dominique; Laroche, Pierre; Guihenneuc, Chantal; Ancelet, Sophie

    2017-02-01

    Many occupational cohort studies on underground miners have demonstrated that radon exposure is associated with an increased risk of lung cancer mortality. However, despite the deleterious consequences of exposure measurement error on statistical inference, these analyses traditionally do not account for exposure uncertainty. This might be due to the challenging nature of measurement error resulting from imperfect surrogate measures of radon exposure. Indeed, we are typically faced with exposure uncertainty in a time-varying exposure variable where both the type and the magnitude of error may depend on period of exposure. To address the challenge of accounting for multiplicative and heteroscedastic measurement error that may be of Berkson or classical nature, depending on the year of exposure, we opted for a Bayesian structural approach, which is arguably the most flexible method to account for uncertainty in exposure assessment. We assessed the association between occupational radon exposure and lung cancer mortality in the French cohort of uranium miners and found the impact of uncorrelated multiplicative measurement error to be of marginal importance. However, our findings indicate that the retrospective nature of exposure assessment that occurred in the earliest years of mining of this cohort as well as many other cohorts of underground miners might lead to an attenuation of the exposure-risk relationship. More research is needed to address further uncertainties in the calculation of lung dose, since this step will likely introduce important sources of shared uncertainty.
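
    The classical/Berkson distinction the authors emphasize can be seen in a simple linear-model simulation: classical error in the observed exposure attenuates the fitted slope, while Berkson error in an assigned exposure leaves it unbiased. This toy numpy sketch (illustrative values only) is far simpler than the paper's multiplicative, time-varying setting.

      import numpy as np

      rng = np.random.default_rng(1)
      n, beta = 100_000, 0.5
      x = rng.normal(0, 1, n)                  # true exposure
      y = beta * x + rng.normal(0, 1, n)

      # Classical error: observed W = X + U; the fitted slope is attenuated.
      w = x + rng.normal(0, 1, n)
      cw = np.cov(w, y)
      slope_classical = cw[0, 1] / cw[0, 0]    # ~beta/2 with these variances

      # Berkson error: true X = assigned Z + U; the slope on Z stays unbiased.
      z = rng.normal(0, 1, n)                  # assigned exposure
      yb = beta * (z + rng.normal(0, 1, n)) + rng.normal(0, 1, n)
      cz = np.cov(z, yb)
      slope_berkson = cz[0, 1] / cz[0, 0]      # ~beta

      print(slope_classical, slope_berkson)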

  1. Pendulum Shifts, Context, Error, and Personal Accountability

    SciTech Connect

    Harold Blackman; Oren Hester

    2011-09-01

    This paper describes a series of tools that were developed to achieve a balance in under-standing LOWs and the human component of events (including accountability) as the INL continues its shift to a learning culture where people report, are accountable and interested in making a positive difference - and want to report because information is handled correctly and the result benefits both the reporting individual and the organization. We present our model for understanding these interrelationships; the initiatives that were undertaken to improve overall performance.

  2. Precise accounting of bit errors in floating-point computations

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.

    2009-08-01

    Floating-point computation generates errors at the bit level through four processes, namely, overflow, underflow, truncation, and rounding. Overflow and underflow can be detected electronically, and represent systematic errors that are not of interest in this study. Truncation occurs during shifting toward the least-significant bit (herein called right-shifting), and rounding error occurs at the least significant bit. Such errors are not easy to track precisely using published means. Statistical error propagation theory typically yields conservative estimates that are grossly inadequate for deep computational cascades. Forward error analysis theory developed for image and signal processing or matrix operations can yield a more realistic typical case, but the error of the estimate tends to be high in relationship to the estimated error. In this paper, we discuss emerging technology for forward error analysis, which allows an algorithm designer to precisely estimate the output error of a given operation within a computational cascade, under a prespecified set of constraints on input error and computational precision. This technique, called bit accounting, precisely tracks the number of rounding and truncation errors in each bit position of interest to the algorithm designer. Because all errors associated with specific bit positions are tracked, and because integer addition only is involved in error estimation, the error of the estimate is zero. The technique of bit accounting is evaluated for its utility in image and signal processing. Complexity analysis emphasizes the relationship between the work and space estimates of the algorithm being analyzed, and its error estimation algorithm. Because of the significant overhead involved in error representation, it is shown that bit accounting is less useful for real-time error estimation, but is well suited to analysis in support of algorithm design.
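
    The paper's bit-level accounting machinery is not reproduced here, but the phenomenon it tracks can be made concrete by running a float cascade alongside an exact rational shadow value and counting the additions that actually round. This is a toy sketch of the idea, not the author's method.

      from fractions import Fraction

      # Shadow computation: exact rationals next to a float summation cascade.
      vals = [1.0 / n for n in range(1, 1001)]    # inputs taken as given floats
      f_acc, q_acc, rounded = 0.0, Fraction(0), 0
      for v in vals:
          exact_step = Fraction(f_acc) + Fraction(v)  # ideal result of this add
          f_acc += v
          q_acc += Fraction(v)
          if Fraction(f_acc) != exact_step:
              rounded += 1                            # this addition rounded

      print(rounded, "of", len(vals), "additions rounded")
      print("cascade error:", float(Fraction(f_acc) - q_acc))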

  3. Better Stability with Measurement Errors

    NASA Astrophysics Data System (ADS)

    Argun, Aykut; Volpe, Giovanni

    2016-06-01

    Often it is desirable to stabilize a system around an optimal state. This can be effectively accomplished using feedback control, where the system deviation from the desired state is measured in order to determine the magnitude of the restoring force to be applied. Contrary to conventional wisdom, i.e. that a more precise measurement is expected to improve the system stability, here we demonstrate that a certain degree of measurement error can improve the system stability. We exemplify the implications of this finding with numerical examples drawn from various fields, such as the operation of a temperature controller, the confinement of a microscopic particle, the localization of a target by a microswimmer, and the control of a population.

  4. Noise in neural populations accounts for errors in working memory.

    PubMed

    Bays, Paul M

    2014-03-05

    Errors in short-term memory increase with the quantity of information stored, limiting the complexity of cognition and behavior. In visual memory, attempts to account for errors in terms of allocation of a limited pool of working memory resources have met with some success, but the biological basis for this cognitive architecture is unclear. An alternative perspective attributes recall errors to noise in tuned populations of neurons that encode stimulus features in spiking activity. I show that errors associated with decreasing signal strength in probabilistically spiking neurons reproduce the pattern of failures in human recall under increasing memory load. In particular, deviations from the normal distribution that are characteristic of working memory errors and have been attributed previously to guesses or variability in precision are shown to arise as a natural consequence of decoding populations of tuned neurons. Observers possess fine control over memory representations and prioritize accurate storage of behaviorally relevant information, at a cost to lower priority stimuli. I show that changing the input drive to neurons encoding a prioritized stimulus biases population activity in a manner that reproduces this empirical tradeoff in memory precision. In a task in which predictive cues indicate stimuli most probable for test, human observers use the cues in an optimal manner to maximize performance, within the constraints imposed by neural noise.
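
    A minimal simulation of this population-coding account is sketched below (hypothetical tuning and gain parameters): Poisson spikes from von Mises-tuned neurons are decoded with a population vector, and lowering the input gain broadens the recall-error distribution, qualitatively matching the memory-load effect described above.

      import numpy as np

      rng = np.random.default_rng(2)

      def decode_errors(gain, n_neurons=64, n_trials=5000):
          """Population-vector decode of a circular feature from Poisson spikes."""
          prefs = np.linspace(-np.pi, np.pi, n_neurons, endpoint=False)
          stim = 0.0
          rates = gain * np.exp(np.cos(prefs - stim))   # von Mises tuning curves
          spikes = rng.poisson(rates, (n_trials, n_neurons))
          est = np.angle(spikes @ np.exp(1j * prefs))   # population vector
          return est - stim                             # circular recall error

      # Lower gain (e.g., more items sharing a fixed resource) gives broader,
      # heavier-tailed error distributions.
      for g in (8.0, 2.0, 0.5):
          print(g, np.std(decode_errors(g)))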

  5. Improved Error Thresholds for Measurement-Free Error Correction

    NASA Astrophysics Data System (ADS)

    Crow, Daniel; Joynt, Robert; Saffman, M.

    2016-09-01

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^{-3} to 10^{-4}, comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.

  6. Improved Error Thresholds for Measurement-Free Error Correction.

    PubMed

    Crow, Daniel; Joynt, Robert; Saffman, M

    2016-09-23

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^{-3} to 10^{-4}, comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.

  7. Measurement Error and Equating Error in Power Analysis

    ERIC Educational Resources Information Center

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  8. Impact of Measurement Error on Synchrophasor Applications

    SciTech Connect

    Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.; Zhao, Jiecheng; Tan, Jin; Wu, Ling; Zhan, Lingwei

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  9. Is comprehension necessary for error detection? A conflict-based account of monitoring in speech production

    PubMed Central

    Nozari, Nazbanou; Dell, Gary S.; Schwartz, Myrna F.

    2011-01-01

    Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the double dissociation between comprehension and error-detection ability observed in aphasic patients. We propose a new theory of speech-error detection which is instead based on the production process itself. The theory borrows from studies of forced-choice-response tasks the notion that error detection is accomplished by monitoring response conflict via a frontal brain structure, such as the anterior cingulate cortex. We adapt this idea to the two-step model of word production, and test the model-derived predictions on a sample of aphasic patients. Our results show a strong correlation between patients’ error-detection ability and the model’s characterization of their production skills, and no significant correlation between error detection and comprehension measures, thus supporting a production-based monitor, generally, and the implemented conflict-based monitor in particular. The successful application of the conflict-based theory to error detection in linguistic as well as non-linguistic domains points to a domain-general monitoring system. PMID:21652015

  10. Measurement error in air pollution exposure assessment.

    PubMed

    Navidi, W; Lurmann, F

    1995-01-01

    The exposure of an individual to an air pollutant can be assessed indirectly, with a "microenvironmental" approach, or directly with a personal sampler. Both methods of assessment are subject to measurement error, which can cause considerable bias in estimates of health effects. If the exposure estimates are unbiased and the measurement error is nondifferential, the bias in a linear model can be corrected when the variance of the measurement error is known. Unless the measurement error is quite large, estimates of health effects based on individual exposures appear to be more accurate than those based on ambient levels.
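
    For the linear-model case mentioned above, the correction for nondifferential classical error with known error variance is the standard attenuation (regression dilution) adjustment. A short illustration of that adjustment, not the authors' code:

      def deattenuate(beta_naive, var_w, var_u):
          """Correct a naive linear-model slope for classical measurement error.

          var_w: variance of the observed exposure W = X + U
          var_u: known variance of the measurement error U
          """
          reliability = (var_w - var_u) / var_w   # = Var(X) / Var(W)
          return beta_naive / reliability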

  11. Rapid mapping of volumetric machine errors using distance measurements

    SciTech Connect

    Krulewich, D.A.

    1998-04-01

    This paper describes a relatively inexpensive, fast, and easy-to-execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between volumetric error and the current state of the machine; (2) acquiring error data based on distance measurements throughout the work volume; and (3) fitting the error model using the nonlinear equation for the distance. The error model is formulated from the kinematic relationship among the six degrees of freedom of error on each moving axis. Each parametric error is expressed as a function of position, and these are combined to predict the error between the functional point and the workpiece, also as a function of position. A series of distances between several fixed base locations and various functional points in the work volume is measured using a Laser Ball Bar (LBB). Each measured distance is a nonlinear function of the commanded location of the machine, the machine error, and the locations of the base points. Using the error model, the nonlinear equation is solved, producing a fit for the error model. Note also that, given approximate distances between each pair of base locations, the exact base locations in the machine coordinate system are determined during the nonlinear fitting procedure. Furthermore, with the use of more than three base locations, bias error in the measuring instrument can be removed. The volumetric errors of a three-axis commercial machining center have been mapped using this procedure. In this study, only errors associated with the nominal position of the machine were considered; other errors such as thermally induced and load-induced errors were not considered, although the mathematical model has the ability to account for them. Due to the proprietary nature of the projects we are

  12. A log-likelihood-gain intensity target for crystallographic phasing that accounts for experimental error

    PubMed Central

    Read, Randy J.; McCoy, Airlie J.

    2016-01-01

    The crystallographic diffraction experiment measures Bragg intensities; crystallographic electron-density maps and other crystallographic calculations in phasing require structure-factor amplitudes. If data were measured with no errors, the structure-factor amplitudes would be trivially proportional to the square roots of the intensities. When the experimental errors are large, and especially when random errors yield negative net intensities, the conversion of intensities and their error estimates into amplitudes and associated error estimates becomes nontrivial. Although this problem has been addressed intermittently in the history of crystallographic phasing, current approaches to accounting for experimental errors in macromolecular crystallography have numerous significant defects. These have been addressed with the formulation of LLGI, a log-likelihood-gain function in terms of the Bragg intensities and their associated experimental error estimates. LLGI has the correct asymptotic behaviour for data with large experimental error, appropriately downweighting these reflections without introducing bias. LLGI abrogates the need for the conversion of intensity data to amplitudes, which is usually performed with the French and Wilson method [French & Wilson (1978), Acta Cryst. A34, 517–525], wherever likelihood target functions are required. It has general applicability for a wide variety of algorithms in macromolecular crystallography, including scaling, characterizing anisotropy and translational noncrystallographic symmetry, detecting outliers, experimental phasing, molecular replacement and refinement. Because it is impossible to reliably recover the original intensity data from amplitudes, it is suggested that crystallographers should always deposit the intensity data in the Protein Data Bank. PMID:26960124

  13. Conditional Standard Error of Measurement in Prediction.

    ERIC Educational Resources Information Center

    Woodruff, David

    1990-01-01

    A method of estimating conditional standard error of measurement at specific score/ability levels is described that avoids theoretical problems identified for previous methods. The method focuses on variance of observed scores conditional on a fixed value of an observed parallel measurement, decomposing these variances into true and error parts.…

  14. Incorporating measurement error in n = 1 psychological autoregressive modeling.

    PubMed

    Schuurman, Noémi K; Houtveen, Jan H; Hamaker, Ellen L

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30-50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters.
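
    The bias that motivates the AR+WN model is easy to reproduce: add white measurement noise to an AR(1) series and the naive lag-1 estimate shrinks toward zero. A toy numpy sketch with illustrative parameters:

      import numpy as np

      rng = np.random.default_rng(3)

      n, phi = 10_000, 0.6
      x = np.zeros(n)                 # latent AR(1) process
      for t in range(1, n):
          x[t] = phi * x[t - 1] + rng.normal()
      y = x + rng.normal(0, 1.0, n)   # white measurement noise (~40% of variance)

      # Naive lag-1 estimate on the noisy series is attenuated toward zero.
      print(np.corrcoef(y[:-1], y[1:])[0, 1])   # well below the true phi of 0.6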

  15. Error margin for antenna gain measurements

    NASA Technical Reports Server (NTRS)

    Cable, V.

    2002-01-01

    The specification of measured antenna gain is incomplete without knowing the error of the measurement. Also, unless gain is measured many times for a single antenna or over many identical antennas, the uncertainty or error in a single measurement is only an estimate. In this paper, we will examine in detail a typical error budget for common antenna gain measurements. We will also compute the gain uncertainty for a specific UHF horn test that was recently performed on the Jet Propulsion Laboratory (JPL) antenna range. The paper concludes with comments on these results and how they compare with the 'unofficial' JPL range standard of +/- ?.

  16. Error latency measurements in symbolic architectures

    NASA Technical Reports Server (NTRS)

    Young, L. T.; Iyer, R. K.

    1991-01-01

    Error latency, the time that elapses between the occurrence of an error and its detection, has a significant effect on reliability. In computer systems, failure rates can be elevated during a burst of system activity due to increased detection of latent errors. A hybrid monitoring environment is developed to measure the error latency distribution of errors occurring in main memory. The objective of this study is to develop a methodology for gauging the dependability of individual data categories within a real-time application. The hybrid monitoring technique is novel in that it selects and categorizes a specific subset of the available blocks of memory to monitor. The precise times of reads and writes are collected, so no actual faults need be injected. Unlike previous monitoring studies that rely on a periodic sampling approach or on statistical approximation, this new approach permits continuous monitoring of referencing activity and precise measurement of error latency.

  17. Prediction with measurement errors in finite populations

    PubMed Central

    Singer, Julio M; Stanek, Edward J; Lencina, Viviana B; González, Luz Mery; Li, Wenjun; Martino, Silvina San

    2011-01-01

    We address the problem of selecting the best linear unbiased predictor (BLUP) of the latent value (e.g., serum glucose fasting level) of sample subjects with heteroskedastic measurement errors. Using a simple example, we compare the usual mixed model BLUP to a similar predictor based on a mixed model framed in a finite population (FPMM) setup with two sources of variability, the first of which corresponds to simple random sampling and the second, to heteroskedastic measurement errors. Under this last approach, we show that when measurement errors are subject-specific, the BLUP shrinkage constants are based on a pooled measurement error variance as opposed to the individual ones generally considered for the usual mixed model BLUP. In contrast, when the heteroskedastic measurement errors are measurement condition-specific, the FPMM BLUP involves different shrinkage constants. We also show that in this setup, when measurement errors are subject-specific, the usual mixed model predictor is biased but has a smaller mean squared error than the FPMM BLUP, which points to some difficulties in the interpretation of such predictors. PMID:22162621

  18. Power Measurement Errors on a Utility Aircraft

    NASA Technical Reports Server (NTRS)

    Bousman, William G.

    2002-01-01

    Extensive flight test data obtained from two recent performance tests of a UH-60A aircraft are reviewed. A power difference is calculated from the power balance equation and is used to examine power measurement errors. It is shown that the baseline measurement errors are highly non-Gaussian in their frequency distribution and are therefore influenced by additional, unquantified variables. Linear regression is used to examine the influence of other variables and it is shown that a substantial portion of the variance depends upon measurements of atmospheric parameters. Correcting for temperature dependence, although reducing the variance in the measurement errors, still leaves unquantified effects. Examination of the power difference over individual test runs indicates significant errors from drift, although it is unclear how these may be corrected. In an idealized case, where the drift is correctable, it is shown that the power measurement errors are significantly reduced and the error distribution is Gaussian. A new flight test program is recommended that will quantify the thermal environment for all torque measurements on the UH-60. Subsequently, the torque measurement systems will be recalibrated based on the measured thermal environment and a new power measurement assessment performed.

  19. Measuring Cyclic Error in Laser Heterodyne Interferometers

    NASA Technical Reports Server (NTRS)

    Ryan, Daniel; Abramovici, Alexander; Zhao, Feng; Dekens, Frank; An, Xin; Azizi, Alireza; Chapsky, Jacob; Halverson, Peter

    2010-01-01

    An improved method and apparatus have been devised for measuring cyclic errors in the readouts of laser heterodyne interferometers that are configured and operated as displacement gauges. The cyclic errors arise as a consequence of mixing of spurious optical and electrical signals in beam launchers that are subsystems of such interferometers. The conventional approach to measurement of cyclic error involves phase measurements and yields values precise to within about 10 pm over air optical paths at laser wavelengths in the visible and near infrared. The present approach, which involves amplitude measurements instead of phase measurements, yields values precise to about 0.1 pm, about 100 times the precision of the conventional approach. In a displacement gauge of the type of interest here, the laser heterodyne interferometer is used to measure any change in distance along an optical axis between two corner-cube retroreflectors. One of the corner-cube retroreflectors is mounted on a piezoelectric transducer (see figure), which is used to introduce a low-frequency periodic displacement that can be measured by the gauges. The transducer is excited at a frequency of 9 Hz by a triangular waveform to generate a 9-Hz triangular-wave displacement having an amplitude of 25 microns. The displacement gives rise to both amplitude and phase modulation of the heterodyne signals in the gauges. The modulation includes cyclic error components, and the magnitude of the cyclic-error component of the phase modulation is what one needs to measure in order to determine the magnitude of the cyclic displacement error. The precision attainable in the conventional (phase measurement) approach to measuring cyclic error is limited because the phase measurements are af-

  20. Is Comprehension Necessary for Error Detection? A Conflict-Based Account of Monitoring in Speech Production

    ERIC Educational Resources Information Center

    Nozari, Nazbanou; Dell, Gary S.; Schwartz, Myrna F.

    2011-01-01

    Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the…

  1. Gear Transmission Error Measurement System Made Operational

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.

    2002-01-01

    A system directly measuring the transmission error between meshing spur or helical gears was installed at the NASA Glenn Research Center and made operational in August 2001. This system employs light beams directed by lenses and prisms through gratings mounted on the two gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. The device is capable of resolution better than 0.1 μm (one thousandth the thickness of a human hair). The measured transmission error can be displayed in a "map" that shows how the transmission error varies with the gear rotation or it can be converted to spectra to show the components at the meshing frequencies. Accurate transmission error data will help researchers better understand the mechanisms that cause gear noise and vibration and will lead to quieter gear designs. The Design Unit at the University of Newcastle in England specifically designed the new system for NASA. It is the only device in the United States that can measure dynamic transmission error at high rotational speeds. The new system will be used to develop new techniques to reduce dynamic transmission error along with the resulting noise and vibration of aeronautical transmissions.

  2. Errors of measurement by laser goniometer

    NASA Astrophysics Data System (ADS)

    Agapov, Mikhail Y.; Bournashev, Milhail N.

    2000-11-01

    The report is dedicated to the study of systematic errors in angle measurement by a dynamic laser goniometer (DLG) based on a ring laser (RL), intended for certification of optical angle encoders (OE), and to the development of methods for separating errors of different types and compensating for them algorithmically. The OE was an absolute photoelectric angle encoder with an informational capacity of 14 bits. Kinematic connection with the rotary platform was made through a mechanical connection unit (CU). The measurement and separation of the systematic error into components was carried out by cross-calibration, with mutual rotations of the OE relative to the DLG base and of the CU relative to the OE rotor, followed by Fourier analysis of the observed data. Dynamic errors of angle measurement were studied using the dependence, on the angular rate of rotation, of the measured angle between a reference direction defined by an interference null-indicator (NI) with an 8-faced optical polygon (OP) and the direction defined by the OE. The obtained results allow algorithmic compensation of the systematic error and thereby a considerable reduction of the total measurement error.

  3. Measurement process error determination and control

    SciTech Connect

    Everhart, J.

    1992-01-01

    Traditional production processes have required repeated inspection activities to assure product quality. A typical production process follows this pattern: production makes the product; production inspects the product; Quality Control (QC) inspects the product to ensure production inspected it properly; QC then inspects the product on a different gage to verify the performance of the production gage; and QC often inspects on a different day to determine environmental effects. All of these costly inspection activities are due to the lack of confidence in the initial production measurement. The Process Measurement Assurance Program (PMAP) is a method of determining and controlling measurement error in design, development, and production. It is a preventive rather than an appraisal method that determines, improves, and controls the error in the measurement process, including measurement equipment, environment, procedure, and personnel. PMAP expands the concept of the Measurement Assurance Program developed in the 1960s by the National Bureau of Standards (NBS), today known as the National Institute of Standards and Technology (NIST). PMAP bridges the gap between the metrology laboratory and the production environment by introducing standards (or certified parts) into the production process. These certified control standards are then measured as part of the production process. A control system is present to examine the measurement results of the control standards before, during, and after the manufacturing and measuring of the product. The results of the PMAP control charts determine the random uncertainty and the systematic error (bias from the standard) of the measurement process. The combination of these uncertainties determines the margin of error of the measurement process. The total measurement process error is determined by combining the margin of error and the uncertainty in the control standard.

  4. Measurement process error determination and control

    SciTech Connect

    Everhart, J.

    1992-11-01

    Traditional production processes have required repeated inspection activities to assure product quality. A typical production process follows this pattern: production makes the product; production inspects the product; Quality Control (QC) inspects the product to ensure production inspected it properly; QC then inspects the product on a different gage to verify the performance of the production gage; and QC often inspects on a different day to determine environmental effects. All of these costly inspection activities are due to the lack of confidence in the initial production measurement. The Process Measurement Assurance Program (PMAP) is a method of determining and controlling measurement error in design, development, and production. It is a preventive rather than an appraisal method that determines, improves, and controls the error in the measurement process, including measurement equipment, environment, procedure, and personnel. PMAP expands the concept of the Measurement Assurance Program developed in the 1960s by the National Bureau of Standards (NBS), today known as the National Institute of Standards and Technology (NIST). PMAP bridges the gap between the metrology laboratory and the production environment by introducing standards (or certified parts) into the production process. These certified control standards are then measured as part of the production process. A control system is present to examine the measurement results of the control standards before, during, and after the manufacturing and measuring of the product. The results of the PMAP control charts determine the random uncertainty and the systematic error (bias from the standard) of the measurement process. The combination of these uncertainties determines the margin of error of the measurement process. The total measurement process error is determined by combining the margin of error and the uncertainty in the control standard.

  5. Interval sampling methods and measurement error: a computer simulation.

    PubMed

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments.
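
    A stripped-down version of such a simulation is sketched below (Python, with a two-state Markov chain standing in for the target events; all rates hypothetical). It reproduces the familiar pattern that partial-interval recording overestimates prevalence and whole-interval recording underestimates it, with momentary time sampling roughly unbiased.

      import numpy as np

      rng = np.random.default_rng(4)

      def markov_events(n, p_on=0.02, p_off=0.08):
          """0/1 event stream with bouts (two-state Markov chain), ~0.2 prevalence."""
          state, out = False, np.empty(n, dtype=bool)
          for t in range(n):
              u = rng.random()
              state = (u < p_on) if not state else (u >= p_off)
              out[t] = state
          return out

      def score(event, n_intervals):
          """Score one session with the three interval sampling methods."""
          bins = np.array_split(event, n_intervals)
          mts = np.mean([b[-1] for b in bins])    # momentary time sampling
          pir = np.mean([b.any() for b in bins])  # partial-interval recording
          wir = np.mean([b.all() for b in bins])  # whole-interval recording
          return mts, pir, wir

      event = markov_events(3600)                 # one hour at 1-second resolution
      print("true prevalence:", event.mean())
      for n in (60, 120, 360):
          print(n, score(event, n))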

  6. Criticality measurements for SNM accountability

    SciTech Connect

    Bohman, J.; Martin, E.R.; Butterfield, K.; Paternoster, R.

    1998-03-01

    Based on extensive operating experience with the Godiva IV fast metal burst assembly at Los Alamos National Laboratory, the authors were able to create data plots for reactivity worths of standard configurations at various temperatures and room return locations. These plots show that the material uncertainties in criticality measurements are within ±20 grams out of the 65.4 kilogram HEU Godiva core. This is superior to active neutron well coincidence counter (AWCC) measurements. The criticality measurements have the additional advantage of not requiring disassembly of the reactor. No disassembly means the measurement takes less time--it can be done during each operation--and there is less dose to measurement personnel.

  7. Efficient measurement of quantum gate error by interleaved randomized benchmarking.

    PubMed

    Magesan, Easwar; Gambetta, Jay M; Johnson, B R; Ryan, Colm A; Chow, Jerry M; Merkel, Seth T; da Silva, Marcus P; Keefe, George A; Rothwell, Mary B; Ohki, Thomas A; Ketchen, Mark B; Steffen, M

    2012-08-24

    We describe a scalable experimental protocol for estimating the average error of individual quantum computational gates. This protocol consists of interleaving random Clifford gates between the gate of interest and provides an estimate as well as theoretical bounds for the average error of the gate under test, so long as the average noise variation over all Clifford gates is small. This technique takes into account both state preparation and measurement errors and is scalable in the number of qubits. We apply this protocol to a superconducting qubit system and find a bounded average error of 0.003 [0,0.016] for the single-qubit gates X(π/2) and Y(π/2). These bounded values provide better estimates of the average error than those extracted via quantum process tomography.
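
    The point estimate behind numbers like these comes from comparing the exponential decay parameters of the reference and interleaved sequences. A sketch of the standard point-estimate formula from this protocol (the inputs here are hypothetical fitted decay parameters):

      def irb_gate_error(p_ref, p_gate, d=2):
          """Average gate error point estimate from interleaved RB decay fits.

          p_ref:  decay parameter of the reference (Clifford-only) sequences
          p_gate: decay parameter with the gate of interest interleaved
          d:      Hilbert-space dimension (2 for one qubit)
          """
          return (d - 1) * (1 - p_gate / p_ref) / d

      print(irb_gate_error(0.995, 0.989))   # ~0.003, the order quoted above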

  8. Measuring accounts receivable performance: a comprehensive method.

    PubMed

    Newton, R L

    1993-05-01

    Nonperforming assets, such as accounts receivable, are frequently cited as sources of financial difficulty for hospitals. Yet, many hospitals, relying on the traditional measure of accounts receivable--days revenue outstanding--may not have a true grasp of the real cost of their accounts receivable. The author discusses the costs imposed on a hospital by accounts receivable and describes three cost components that must be calculated if the true cost of accounts receivable is to be determined and controlled.

  9. Accounting for sampling variability, injury under-reporting, and sensor error in concussion injury risk curves.

    PubMed

    Elliott, Michael R; Margulies, Susan S; Maltese, Matthew R; Arbogast, Kristy B

    2015-09-18

    There has been a recent dramatic increase in the use of sensors affixed to the heads or helmets of athletes to measure the biomechanics of head impacts that lead to concussion. The relationship between injury and linear or rotational head acceleration measured by such sensors can be quantified with an injury risk curve. The utility of the injury risk curve relies on the accuracy of both the clinical diagnosis and the biomechanical measure. The focus of our analysis was to demonstrate the influence of three sources of error on the shape and interpretation of concussion injury risk curves: sampling variability associated with a rare event, concussion under-reporting, and sensor measurement error. We utilized Bayesian statistical methods to generate synthetic data from previously published concussion injury risk curves developed using data from helmet-based sensors on collegiate football players and assessed the effect of the three sources of error on the risk relationship. Accounting for sampling variability adds uncertainty or width to the injury risk curve. Assuming a variety of rates of unreported concussions in the non-concussed group, we found that accounting for under-reporting lowers the rotational acceleration required for a given concussion risk. Lastly, after accounting for sensor error, we find strengthened relationships between rotational acceleration and injury risk, further lowering the magnitude of rotational acceleration needed for a given risk of concussion. As more accurate sensors are designed and more sensitive and specific clinical diagnostic tools are introduced, our analysis provides guidance for the future development of comprehensive concussion risk curves.

  10. Technical approaches for measurement of human errors

    NASA Technical Reports Server (NTRS)

    Clement, W. F.; Heffley, R. K.; Jewell, W. F.; Mcruer, D. T.

    1980-01-01

    Human error is a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents. The technical details of a variety of proven approaches for the measurement of human errors in the context of the national airspace system are presented. Unobtrusive measurements suitable for cockpit operations and procedures in part or full mission simulation are emphasized. Procedure, system performance, and human operator centered measurements are discussed as they apply to the manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations.

  11. Neutron multiplication error in TRU waste measurements

    SciTech Connect

    Veilleux, John; Stanfield, Sean B; Wachter, Joe; Ceo, Bob

    2009-01-01

    Total Measurement Uncertainty (TMU) in neutron assays of transuranic waste (TRU) is comprised of several components including counting statistics, matrix and source distribution, calibration inaccuracy, background effects, and neutron multiplication error. While a minor component for low plutonium masses, neutron multiplication error is often the major contributor to the TMU for items containing more than 140 g of weapons grade plutonium. Neutron multiplication arises when neutrons from spontaneous fission and other nuclear events induce fissions in other fissile isotopes in the waste, thereby multiplying the overall coincidence neutron response in passive neutron measurements. Since passive neutron counters cannot differentiate between spontaneous and induced fission neutrons, multiplication can lead to positive bias in the measurements. Although neutron multiplication can only result in a positive bias, it has, for the purpose of mathematical simplicity, generally been treated as an error that can lead to either a positive or negative result in the TMU. While the factors that contribute to neutron multiplication include the total mass of fissile nuclides, the presence of moderating material in the matrix, the concentration and geometry of the fissile sources, and other factors, measurement uncertainty is generally determined as a function of the fissile mass in most TMU software calculations because this is the only quantity determined by the passive neutron measurement. Neutron multiplication error has a particularly pernicious consequence for TRU waste analysis because the measured Fissile Gram Equivalent (FGE) plus twice the TMU error must be less than 200 for TRU waste packaged in 55-gal drums and less than 325 for boxed waste. For this reason, large errors due to neutron multiplication can lead to increased rejections of TRU waste containers. This report will attempt to better define the error term due to neutron multiplication and arrive at values that are

  12. Measurement System Characterization in the Presence of Measurement Errors

    NASA Technical Reports Server (NTRS)

    Commo, Sean A.

    2012-01-01

    In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.
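
    The abstract does not give the estimator's formulas, but a closely related classical method, Deming regression, folds exactly this variance ratio into an errors-in-variables fit. The sketch below is that standard estimator, offered only as an analogue of the variance-ratio idea, not as the paper's modified least squares:

        import numpy as np

        def deming_fit(x, y, lam):
            # lam = (response error variance) / (factor measurement error variance)
            sxx, syy = np.var(x), np.var(y)
            sxy = np.mean((x - x.mean()) * (y - y.mean()))
            d = syy - lam * sxx
            b1 = (d + np.sqrt(d * d + 4.0 * lam * sxy * sxy)) / (2.0 * sxy)
            return b1, y.mean() - b1 * x.mean()

        rng = np.random.default_rng(0)
        x_true = np.linspace(0.0, 10.0, 500)
        x_obs = x_true + rng.normal(0, 2.0, 500)              # factor measured with error
        y_obs = 2.0 * x_true + 1.0 + rng.normal(0, 2.0, 500)  # noisy response
        print(deming_fit(x_obs, y_obs, lam=1.0))
        # near (2.0, 1.0); ordinary least squares on x_obs would attenuate
        # the slope toward roughly 1.35 on these data.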

  13. Analysis and improvement of gas turbine blade temperature measurement error

    NASA Astrophysics Data System (ADS)

    Gao, Shan; Wang, Lixin; Feng, Chi; Daniel, Ketui

    2015-10-01

    Gas turbine blades operate over extended durations in harsh, high-temperature, high-pressure environments, and the blade components are easily damaged. Therefore, ensuring that the blade temperature remains within its design limits is very important. In this study, measurement errors in turbine blade temperatures were analyzed, taking into account detector lens contamination, the reflection of environmental energy from the target surface, the effects of the combustion gas, and the emissivity of the blade surface. In this paper, each of the above sources of measurement error is discussed, and an iterative computing method for calculating blade temperature is proposed.

  14. Multiple Indicators, Multiple Causes Measurement Error Models

    PubMed Central

    Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.; Carroll, Raymond J.

    2014-01-01

    Multiple Indicators, Multiple Causes Models (MIMIC) are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times however when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this paper are: (1) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model, (2) to develop likelihood based estimation methods for the MIMIC ME model, (3) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. As a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure. PMID:24962535

  15. Multiple indicators, multiple causes measurement error models

    DOE PAGES

    Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.; ...

    2014-06-25

    Multiple indicators, multiple causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this study are as follows: (i) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model; (ii) to develop likelihood-based estimation methods for the MIMIC ME model; and (iii) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. Finally, as a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure.

  17. Algorithmic Error Correction of Impedance Measuring Sensors

    PubMed Central

    Starostenko, Oleg; Alarcon-Aquino, Vicente; Hernandez, Wilmar; Sergiyenko, Oleg; Tyrsa, Vira

    2009-01-01

    This paper describes novel design concepts and advanced techniques proposed for increasing the accuracy of low-cost impedance measuring devices without reducing operational speed. The proposed structural method for algorithmic error correction and the iterating correction method provide linearization of the transfer functions of the measuring sensor and the signal conditioning converter, which contribute the principal additive and relative measurement errors. Several measuring systems have been implemented in order to assess the practical performance of the proposed methods. In particular, a measuring system for the analysis of C-V and G-V characteristics has been designed and constructed, and it has been tested during process control of charge-coupled device (CCD) manufacturing. The results are discussed in order to define a reasonable range of application for the proposed methods, their utility, and their performance. PMID:22303177

  18. New Gear Transmission Error Measurement System Designed

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.

    2001-01-01

    The prime source of vibration and noise in a gear system is the transmission error between the meshing gears. Transmission error is caused by manufacturing inaccuracy, mounting errors, and elastic deflections under load. Gear designers often attempt to compensate for transmission error by modifying gear teeth. This is done traditionally by a rough "rule of thumb" or more recently under the guidance of an analytical code. In order for a designer to have confidence in a code, the code must be validated through experiment. NASA Glenn Research Center contracted with the Design Unit of the University of Newcastle in England for a system to measure the transmission error of spur and helical test gears in the NASA Gear Noise Rig. The new system measures transmission error optically by means of light beams directed by lenses and prisms through gratings mounted on the gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. A photodetector circuit converts the light to an analog electrical signal. To increase accuracy and reduce "noise" due to transverse vibration, there are parallel light paths at the top and bottom of the gears. The two signals are subtracted via differential amplifiers in the electronics package. The output of the system is 40 mV/µm, giving a resolution in the time domain of better than 0.1 µm, and discrimination in the frequency domain of better than 0.01 µm. The new system will be used to validate gear analytical codes and to investigate mechanisms that produce vibration and noise in parallel axis gears.

  19. Risk, Error and Accountability: Improving the Practice of School Leaders

    ERIC Educational Resources Information Center

    Perry, Lee-Anne

    2006-01-01

    This paper seeks to explore the notion of risk as an organisational logic within schools, the impact of contemporary accountability regimes on managing risk and then, in turn, to posit a systems-based process of risk management underpinned by a positive logic of risk. It moves through a number of steps beginning with the development of an…

  20. Improving Localization Accuracy: Successive Measurements Error Modeling

    PubMed Central

    Abu Ali, Najah; Abu-Elkheir, Mervat

    2015-01-01

    Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself, and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling of positioning error. We use the Yule-Walker equations to determine the degree of correlation between a vehicle's future position and its past positions, and then propose a p-order Gauss–Markov model to predict the future position of a vehicle from its past p positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can persist for up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss–Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle's future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter. PMID:26140345
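
    A minimal sketch of the Yule-Walker step described above (Python, synthetic first-order data; the paper's datasets and full p-order treatment are not reproduced):

        import numpy as np

        def yule_walker(x, p):
            # Solve the Yule-Walker equations for AR(p) coefficients from
            # the sample autocovariances of a demeaned error series.
            x = np.asarray(x, float) - np.mean(x)
            n = len(x)
            acov = np.array([x[:n - k] @ x[k:] / n for k in range(p + 1)])
            R = np.array([[acov[abs(i - j)] for j in range(p)] for i in range(p)])
            return np.linalg.solve(R, acov[1:])

        rng = np.random.default_rng(1)
        e = np.zeros(2000)                    # simulated positioning-error trace
        for t in range(1, 2000):
            e[t] = 0.9 * e[t - 1] + rng.normal(0.0, 1.0)

        p = 1
        phi = yule_walker(e, p)               # close to [0.9]
        pred_next = phi @ e[-1:-p - 1:-1]     # one-step prediction from last p values
        print(phi, pred_next)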

  1. Multiscale measurement error models for aggregated small area health data.

    PubMed

    Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin

    2016-08-01

    Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smoothes the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via a shared random effect have been proposed. One of the main goals in aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we applied measurement error models in the framework of the multiscale approach. To assess the benefit of using multiscale measurement error models, we compare the performance of multiscale models with and without measurement error in both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates.

  2. Application of Uniform Measurement Error Distribution

    DTIC Science & Technology

    2016-03-18

    Abstract not recoverable: the record text consists of report documentation page boilerplate. Recoverable keywords: Probability of False Accept (PFA); Probability of False Reject (PFR).

  3. Sonic Anemometer Vertical Wind Speed Measurement Errors

    NASA Astrophysics Data System (ADS)

    Kochendorfer, J.; Horst, T. W.; Frank, J. M.; Massman, W. J.; Meyers, T. P.

    2014-12-01

    In eddy covariance studies, errors in the measured vertical wind speed cause errors of a similar magnitude in the vertical fluxes of energy and mass. Several recent studies on the accuracy of sonic anemometer measurements indicate that non-orthogonal sonic anemometers used in eddy covariance studies underestimate the vertical wind speed. It has been suggested that this underestimation is caused by flow distortion from the interference of the structure of the anemometer itself on the flow. When oriented ideally with respect to the horizontal wind direction, orthogonal sonic anemometers that measure the vertical wind speed with a single vertically-oriented acoustic path may measure the vertical wind speed more accurately in typical surface-layer conditions. For non-orthogonal sonic anemometers, Horst et al. (2014) proposed that transducer shadowing may be a dominant factor in sonic flow distortion. As the ratio of sonic transducer diameter to path length and the zenith angle of the three transducer paths decrease, the effects of transducer shadowing on measurements of vertical velocity will decrease. An overview of this research and some of the methods available to correct historical data will be presented.

  4. Relationships of Measurement Error and Prediction Error in Observed-Score Regression

    ERIC Educational Resources Information Center

    Moses, Tim

    2012-01-01

    The focus of this paper is assessing the impact of measurement errors on the prediction error of an observed-score regression. Measures are presented and described for decomposing the linear regression's prediction error variance into parts attributable to the true score variance and the error variances of the dependent variable and the predictor…

  5. Non-Gaussian error distribution of 7Li abundance measurements

    NASA Astrophysics Data System (ADS)

    Crandall, Sara; Houston, Stephen; Ratra, Bharat

    2015-07-01

    We construct the error distribution of 7Li abundance measurements for 66 observations (with error bars) used by Spite et al. (2012) that give A(Li) = 2.21 ± 0.065 (median and 1σ symmetrized error). This error distribution is somewhat non-Gaussian, with larger probability in the tails than is predicted by a Gaussian distribution. The 95.4% confidence limits are 3.0σ in terms of the quoted errors. We fit the data to four commonly used distributions: Gaussian, Cauchy, Student’s t and double exponential with the center of the distribution found with both weighted mean and median statistics. It is reasonably well described by a widened n = 8 Student’s t distribution. Assuming Gaussianity, the observed A(Li) is 6.5σ away from that expected from standard Big Bang Nucleosynthesis (BBN) given the Planck observations. Accounting for the non-Gaussianity of the observed A(Li) error distribution reduces the discrepancy to 4.9σ, which is still significant.

  6. #2 - An Empirical Assessment of Exposure Measurement Error ...

    EPA Pesticide Factsheets

    Background: (1) differing degrees of exposure error across pollutants; (2) previous focus on quantifying and accounting for exposure error in single-pollutant models; (3) this work examines exposure errors for multiple pollutants and provides insights on the potential for bias and attenuation of effect estimates in single- and bi-pollutant epidemiological models. The National Exposure Research Laboratory (NERL) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of the EPA mission to protect human health and the environment. HEASD research supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of the EPA strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between and characterize processes that link source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.

  7. Errors Associated With Measurements from Imaging Probes

    NASA Astrophysics Data System (ADS)

    Heymsfield, A.; Bansemer, A.

    2015-12-01

    Imaging probes, which sample particles from about 20 or 50 microns to several centimeters, have been collecting data on droplet and ice microphysics for more than 40 years. During that period, a number of problems associated with the measurements have been identified, including questions about the depth of field of particles within the probes' sample volume and ice shattering, among others. Many different software packages have been developed to process and interpret the data, leading to differences in the particle size distributions and in the estimates of extinction, ice water content, and radar reflectivity obtained from the same data. Given the numerous complications associated with imaging probe data, we have developed an optical array probe simulation package to explore the errors that can be expected with actual data. We simulate full particle size distributions with known properties, and then process the data with the same software that is used to process real-life data. We show that there are significant errors in the retrieved particle size distributions as well as in derived parameters such as liquid/ice water content and total number concentration. Furthermore, the nature of these errors changes as a function of the shape of the simulated size distribution and the physical and electronic characteristics of the instrument. We will introduce some methods to improve the retrieval of particle size distributions from real-life data.

  8. Detection system for ocular refractive error measurement.

    PubMed

    Ventura, L; de Faria e Sousa, S J; de Castro, J C

    1998-05-01

    An automatic and objective system for measuring ocular refractive errors (myopia, hyperopia and astigmatism) was developed. The system projects a light target (a ring), using a diode laser (lambda = 850 nm), onto the fundus of the patient's eye. The light beams scattered from the retina pass through an optical system and are analysed with regard to their vergence by a CCD detector (matrix). This system uses the same basic principle for the projection of beams into the tested eye as some commercial refractors, but it is innovative in its ring-shaped measuring target for the projection system and in its detection system, where a matrix detector provides a wider range of measurement and a less complex optical alignment. Moreover, a dedicated electronic circuit for treating the signals from the detector (as in the usual refractors) was not necessary; instead, a commercial frame grabber was used and software based on a heuristic search technique was developed. All the governing equations that describe the system, as well as the image processing procedure, are presented in detail. Measurements in model eyes and in human eyes are in good agreement with retinoscopic measurements and are as precise as these kinds of measurements require (0.125 D and 5 degrees).

  9. The estimation of parameters in nonlinear, implicit measurement error models with experiment-wide measurements

    SciTech Connect

    Anderson, K.K.

    1994-05-01

    Measurement error modeling is a statistical approach to the estimation of unknown model parameters which takes into account the measurement errors in all of the data. Approaches which ignore the measurement errors in so-called independent variables may yield inferior estimates of unknown model parameters. At the same time, experiment-wide variables (such as physical constants) are often treated as known without error, when in fact they were produced from prior experiments. Realistic assessments of the associated uncertainties in the experiment-wide variables can be utilized to improve the estimation of unknown model parameters. A maximum likelihood approach to incorporate measurements of experiment-wide variables and their associated uncertainties is presented here. An iterative algorithm is presented which yields estimates of unknown model parameters and their estimated covariance matrix. Further, the algorithm can be used to assess the sensitivity of the estimates and their estimated covariance matrix to the given experiment-wide variables and their associated uncertainties.
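
    A minimal sketch of the idea (Python; the model, names, and numbers are hypothetical, and the paper's implicit-model iterative algorithm is not reproduced). An experiment-wide constant c, known from prior work only as c0 +/- sc, enters the likelihood as an extra error-bearing observation instead of being fixed:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        a_true, c_true = 5.0, 0.30                    # a: parameter of interest
        x = np.linspace(0.0, 10.0, 40)
        y = a_true * np.exp(-c_true * x) + rng.normal(0.0, 0.1, x.size)
        c0, sc, sy = 0.32, 0.02, 0.1                  # prior estimate of c; data noise

        def nll(theta):
            a, c = theta
            resid = (y - a * np.exp(-c * x)) / sy
            # Gaussian data likelihood plus a term treating the experiment-wide
            # constant as a measurement with its own uncertainty sc.
            return 0.5 * np.sum(resid ** 2) + 0.5 * ((c - c0) / sc) ** 2

        fit = minimize(nll, x0=[1.0, c0], method="Nelder-Mead")
        print(fit.x)   # joint estimate of (a, c); the uncertainty in c now
                       # propagates into the uncertainty of a via the Hessian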

  10. Accounting for environmental variability, modeling errors, and parameter estimation uncertainties in structural identification

    NASA Astrophysics Data System (ADS)

    Behmanesh, Iman; Moaveni, Babak

    2016-07-01

    This paper presents a Hierarchical Bayesian model updating framework to account for the effects of ambient temperature and excitation amplitude. The proposed approach is applied for model calibration, response prediction, and damage identification of a footbridge under changing environmental/ambient conditions. The concrete Young's modulus of the footbridge deck is the updating structural parameter considered, with its mean and variance modeled as functions of temperature and excitation amplitude. The modal parameters identified over 27 months of continuous monitoring of the footbridge are used to calibrate the updating parameters. One of the objectives of this study is to show that by increasing the levels of information in the updating process, the posterior variation of the updating structural parameter (concrete Young's modulus) is reduced. To this end, the calibration is performed at three information levels using (1) the identified modal parameters, (2) modal parameters and ambient temperatures, and (3) modal parameters, ambient temperatures, and excitation amplitudes. The calibrated model is then validated by comparing the model-predicted natural frequencies with those identified from measured data after a deliberate change to the structural mass. It is shown that accounting for modeling error uncertainties is crucial for reliable response prediction, and that accounting for only the estimated variability of the updating structural parameter is not sufficient for accurate response predictions. Finally, the calibrated model is used for damage identification of the footbridge.

  11. [Therapeutic errors and dose measuring devices].

    PubMed

    García-Tornel, S; Torrent, M L; Sentís, J; Estella, G; Estruch, M A

    1982-06-01

    To investigate the potential for therapeutic error in the administration of syrups, the authors measured the capacity (mean +/- SD) of 158 household spoons. The spoons were classified into four groups: group I (tablespoons), 49 units (11.65 +/- 2.10 cc); group II (teaspoons), 41 units (4.70 +/- 1.04 cc); group III (coffee spoons), 41 units (2.60 +/- 0.59 cc); and group IV (miscellaneous), 27 units. The first three groups were compared with the theoretical values of 15, 5 and 2.5 cc, respectively, and statistically significant differences were found in the first group. The authors also analyzed the information that paediatricians receive from the drug reference guides ("vademecums") they usually consult, studying two points: whether the syrup is supplied with a measuring device, and whether the drug concentration is indicated. Only 18% of the syrups have a measuring device, and about 88% of the drugs indicate their concentration (mg/cc). The authors conclude that, to prevent dosage errors, the pharmaceutical industry should include measuring devices with its products; when none is provided, the safest option is to use a syringe.

  12. Inter-tester Agreement in Refractive Error Measurements

    PubMed Central

    Huang, Jiayan; Maguire, Maureen G.; Ciner, Elise; Kulp, Marjean T.; Quinn, Graham E.; Orel-Bixler, Deborah; Cyert, Lynn A.; Moore, Bruce; Ying, Gui-Shuang

    2014-01-01

    Purpose To determine the inter-tester agreement of refractive error measurements between lay and nurse screeners using the Retinomax Autorefractor (Retinomax) and the SureSight Vision Screener (SureSight). Methods Trained lay and nurse screeners measured refractive error in 1452 preschoolers (3- to 5-years old) using the Retinomax and the SureSight in a random order for screeners and instruments. Inter-tester agreement between lay and nurse screeners was assessed for sphere, cylinder and spherical equivalent (SE) using the mean difference and the 95% limits of agreement. The mean inter-tester difference (lay minus nurse) was compared between groups defined based on child's age, cycloplegic refractive error, and the reading's confidence number using analysis of variance. The limits of agreement were compared between groups using the Brown-Forsythe test. Inter-eye correlation was accounted for in all analyses. Results The mean inter-tester differences (95% limits of agreement) were −0.04 (−1.63, 1.54) Diopter (D) sphere, 0.00 (−0.52, 0.51) D cylinder, and −0.04 (−1.65, 1.56) D SE for the Retinomax; and 0.05 (−1.48, 1.58) D sphere, 0.01 (−0.58, 0.60) D cylinder, and 0.06 (−1.45, 1.57) D SE for the SureSight. For either instrument, the mean inter-tester differences in sphere and SE did not differ by the child's age, cycloplegic refractive error, or the reading's confidence number. However, for both instruments, the limits of agreement were wider when eyes had significant refractive error or the reading's confidence number was below the manufacturer's recommended value. Conclusions Among Head Start preschool children, trained lay and nurse screeners agree well in measuring refractive error using the Retinomax or the SureSight. Both instruments had similar inter-tester agreement in refractive error measurements independent of the child's age. Significant refractive error and a reading with low confidence number were associated with worse inter

  13. Reducing Errors by Use of Redundancy in Gravity Measurements

    NASA Technical Reports Server (NTRS)

    Kulikov, Igor; Zak, Michail

    2004-01-01

    A methodology for improving gravity-gradient measurement data exploits the constraints imposed upon the components of the gravity-gradient tensor by the conditions of integrability needed for reconstruction of the gravitational potential. These constraints are derived from the basic equation for the gravitational potential and from mathematical identities that apply to the gravitational potential and its partial derivatives with respect to spatial coordinates. Consider the gravitational potential in a Cartesian coordinate system {x1,x2,x3}. If one measures all the components of the gravity-gradient tensor at all points of interest within a region of space in which one seeks to characterize the gravitational field, one obtains redundant information. One could utilize the constraints to select a minimum (that is, nonredundant) set of measurements from which the gravitational potential could be reconstructed. Alternatively, one could exploit the redundancy to reduce errors from noisy measurements. A convenient example is that of the selection of a minimum set of measurements to characterize the gravitational field at n³ points (where n is an integer) in a cube. Without the benefit of such a selection, it would be necessary to make 9n³ measurements because the gravity-gradient tensor has 9 components at each point. The problem of utilizing the redundancy to reduce errors in noisy measurements is an optimization problem: Given a set of noisy values of the components of the gravity-gradient tensor at the measurement points, one seeks a set of corrected values, a set that is optimum in that it minimizes some measure of error (e.g., the sum of squares of the differences between the corrected and noisy measurement values) while taking account of the fact that the constraints must apply to the exact values. The problem as thus posed leads to a vector equation that can be solved to obtain the corrected values.
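
    For the gravity-gradient case the constraints are concrete: mixed partial derivatives of the potential commute, so the tensor is symmetric, and Laplace's equation holds in source-free space, so the trace vanishes. A least-squares correction in the Frobenius sense is then a projection onto that subspace, as in this illustrative Python sketch:

        import numpy as np

        def correct_gradient_tensor(G):
            # Nearest symmetric tensor: enforce G_ij = G_ji (integrability).
            S = 0.5 * (G + G.T)
            # Nearest traceless tensor: enforce Gxx + Gyy + Gzz = 0 (Laplace).
            return S - np.trace(S) / 3.0 * np.eye(3)

        rng = np.random.default_rng(3)
        true = np.diag([1.0, 1.0, -2.0])       # a valid symmetric, traceless tensor
        noisy = true + rng.normal(0.0, 0.1, (3, 3))
        corrected = correct_gradient_tensor(noisy)
        print(np.linalg.norm(noisy - true), np.linalg.norm(corrected - true))
        # Projecting onto the constraint subspace never increases the distance
        # to the true tensor, which also lies in that subspace.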

  14. MEASUREMENT: ACCOUNTING FOR RELIABILITY IN PERFORMANCE ESTIMATES.

    PubMed

    Waterman, Brian; Sutter, Robert; Burroughs, Thomas; Dunagan, W Claiborne

    2014-01-01

    When evaluating physician performance measures, physician leaders are faced with the quandary of determining whether departures from expected physician performance measurements represent a true signal or random error. This uncertainty impedes the physician leader's ability and confidence to take appropriate performance improvement actions based on physician performance measurements. Incorporating reliability adjustment into physician performance measurement is a valuable way of reducing the impact of random error in the measurements, such as those caused by small sample sizes. Consequently, the physician executive has more confidence that the results represent true performance and is positioned to make better physician performance improvement decisions. Applying reliability adjustment to physician-level performance data is relatively new. As others have noted previously, it's important to keep in mind that reliability adjustment adds significant complexity to the production, interpretation and utilization of results. Furthermore, the methods explored in this case study only scratch the surface of the range of available Bayesian methods that can be used for reliability adjustment; further study is needed to test and compare these methods in practice and to examine important extensions for handling specialty-specific concerns (e.g., average case volumes, which have been shown to be important in cardiac surgery outcomes). Moreover, it's important to note that the provider group average as a basis for shrinkage is one of several possible choices that could be employed in practice and deserves further exploration in future research. With these caveats, our results demonstrate that incorporating reliability adjustment into physician performance measurements is feasible and can notably reduce the incidence of "real" signals relative to what one would expect to see using more traditional approaches. A physician leader who is interested in catalyzing performance improvement
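
    A minimal empirical-Bayes sketch of the shrinkage idea discussed above (Python; binomial sampling noise and a known between-physician variance are assumed, which is far simpler than the full Bayesian machinery the article references):

        def reliability_adjusted(observed_rate, group_rate, n_cases, var_between):
            # Reliability = signal variance / (signal + sampling-noise variance).
            var_within = observed_rate * (1.0 - observed_rate) / n_cases
            r = var_between / (var_between + var_within)
            return r * observed_rate + (1.0 - r) * group_rate

        # A 20% event rate on 10 cases is shrunk hard toward the group mean
        # of 10%; the same rate on 500 cases barely moves.
        print(reliability_adjusted(0.20, 0.10, n_cases=10, var_between=0.001))   # ~0.106
        print(reliability_adjusted(0.20, 0.10, n_cases=500, var_between=0.001))  # ~0.176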

  15. Analysis of error-prone survival data under additive hazards models: measurement error effects and adjustments.

    PubMed

    Yan, Ying; Yi, Grace Y

    2016-07-01

    Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights into measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.

  16. The impact of response measurement error on the analysis of designed experiments

    DOE PAGES

    Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee

    2016-11-01

    This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.

  18. Measuring Systematic Error with Curve Fits

    ERIC Educational Resources Information Center

    Rupright, Mark E.

    2011-01-01

    Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…

  19. Observer error in blood pressure measurement.

    PubMed Central

    Neufeld, P D; Johnson, D L

    1986-01-01

    This paper describes an experiment undertaken to determine observer error in measuring blood pressure by the auscultatory method. A microcomputer was used to display a simulated mercury manometer and play back tape-recorded Korotkoff sounds synchronized with the fall of the mercury column. Each observer's readings were entered into the computer, which displayed a histogram of all readings taken up to that point and thus showed the variation among observers. The procedure, which could easily be adapted for use in teaching, was used to test 311 observers drawn from physicians, nurses, medical students, nursing students and others at nine health care institutions in Ottawa. The results showed a strong bias for even-digit readings and standard deviations of roughly 5 to 6 mm Hg. The standard deviation for the systolic readings was somewhat smaller for the physicians as a group than for the nurses (3.5 vs. 5.9 mm Hg). However, the standard deviations for the diastolic readings were roughly equal for these two groups (approximately 5.5 mm Hg). PMID:3756693

  20. Scattering error corrections for in situ absorption and attenuation measurements.

    PubMed

    McKee, David; Piskozub, Jacek; Brown, Ian

    2008-11-24

    Monte Carlo simulations are used to establish a weighting function that describes the collection of angular scattering for the WETLabs AC-9 reflecting tube absorption meter. The equivalent weighting function for the AC-9 attenuation sensor is found to be well approximated by a binary step function with photons scattered between zero and the collection half-width angle contributing to the scattering error and photons scattered at larger angles making zero contribution. A new scattering error correction procedure is developed that accounts for scattering collection artifacts in both absorption and attenuation measurements. The new correction method does not assume zero absorption in the near infrared (NIR), does not assume a wavelength independent scattering phase function, but does require simultaneous measurements of spectrally matched particulate backscattering. The new method is based on an iterative approach that assumes that the scattering phase function can be adequately modeled from estimates of particulate backscattering ratio and Fournier-Forand phase functions. It is applied to sets of in situ data representative of clear ocean water, moderately turbid coastal water and highly turbid coastal water. Initial results suggest significantly higher levels of attenuation and absorption than those obtained using previously published scattering error correction procedures. Scattering signals from each correction procedure have similar magnitudes but significant differences in spectral distribution are observed.
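
    A toy version of such an iteration is sketched below (Python). The collection weights here are invented stand-ins: the published method derives the absorption-tube weighting from Monte Carlo results and selects Fournier-Forand phase functions to match the measured particulate backscattering ratio, whereas this sketch uses a hypothetical linear mapping and a fixed attenuation-tube acceptance fraction:

        def collection_weight(bb_ratio):
            # Hypothetical map from backscattering ratio to the fraction of
            # scattered light escaping the absorption tube's collection.
            return 0.1 + 10.0 * bb_ratio

        def correct_ac9(a_m, c_m, bb_p, w_c=0.15, n_iter=50):
            # Assumed model: a_m = a + w_a*b (uncollected scattering inflates a)
            #                c_m = a + (1 - w_c)*b (accepted forward scattering
            #                deflates the measured attenuation)
            b = c_m - a_m                          # first guess for scattering
            for _ in range(n_iter):
                w_a = collection_weight(bb_p / b)  # weight depends on retrieved b
                b = (c_m - a_m) / (1.0 - w_c - w_a)
            return a_m - w_a * b, b                # corrected absorption, scattering

        # Toy water with a = 0.1, b = 1.0, bb_p = 0.01 (per meter): the
        # iteration converges back to those values.
        print(correct_ac9(a_m=0.30, c_m=0.95, bb_p=0.01))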

  1. Design methodology accounting for fabrication errors in manufactured modified Fresnel lenses for controlled LED illumination.

    PubMed

    Shim, Jongmyeong; Kim, Joongeok; Lee, Jinhyung; Park, Changsu; Cho, Eikhyun; Kang, Shinill

    2015-07-27

    The increasing demand for lightweight, miniaturized electronic devices has prompted the development of small, high-performance optical components for light-emitting diode (LED) illumination. As such, the Fresnel lens is widely used in applications due to its compact configuration. However, the vertical groove angle between the optical axis and the groove inner facets in a conventional Fresnel lens creates an inherent Fresnel loss, which degrades optical performance. Modified Fresnel lenses (MFLs) have been proposed in which the groove angles along the optical paths are carefully controlled; however, in practice, the optical performance of MFLs is inferior to the theoretical performance due to fabrication errors, as conventional design methods do not account for fabrication errors as part of the design process. In this study, the Fresnel loss and the loss area due to microscopic fabrication errors in the MFL were theoretically derived to determine optical performance. Based on this analysis, a design method for the MFL accounting for the fabrication errors was proposed. MFLs were fabricated using an ultraviolet imprinting process and an injection molding process, two representative processes with differing fabrication errors. The MFL fabrication error associated with each process was examined analytically and experimentally to investigate our methodology.

  2. Monitoring the Random Errors of Nuclear Material Measurements

    SciTech Connect

    1980-06-01

    Monitoring and controlling random errors is an important function of a measurement control program. This report describes the principal sources of random error in the common nuclear material measurement processes and the most important elements of a program for monitoring, evaluating and controlling the random error standard deviations of these processes.

  3. Measuring errors and adverse events in health care.

    PubMed

    Thomas, Eric J; Petersen, Laura A

    2003-01-01

    In this paper, we identify 8 methods used to measure errors and adverse events in health care and discuss their strengths and weaknesses. We focus on the reliability and validity of each, as well as the ability to detect latent errors (or system errors) versus active errors and adverse events. We propose a general framework to help health care providers, researchers, and administrators choose the most appropriate methods to meet their patient safety measurement goals.

  4. Chromosomal locus tracking with proper accounting of static and dynamic errors

    NASA Astrophysics Data System (ADS)

    Backlund, Mikael P.; Joyner, Ryan; Moerner, W. E.

    2015-06-01

    The mean-squared displacement (MSD) and velocity autocorrelation (VAC) of tracked single particles or molecules are ubiquitous metrics for extracting parameters that describe the object's motion, but they are both corrupted by experimental errors that hinder the quantitative extraction of underlying parameters. For the simple case of pure Brownian motion, the effects of localization error due to photon statistics ("static error") and motion blur due to finite exposure time ("dynamic error") on the MSD and VAC are already routinely treated. However, particles moving through complex environments such as cells, nuclei, or polymers often exhibit anomalous diffusion, for which the effects of these errors are less often sufficiently treated. We present data from tracked chromosomal loci in yeast that demonstrate the necessity of properly accounting for both static and dynamic error in the context of an anomalous diffusion that is consistent with a fractional Brownian motion (FBM). We compare these data to analytical forms of the expected values of the MSD and VAC for a general FBM in the presence of these errors.
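
    For the pure-Brownian baseline the error structure is analytically simple, and a sketch helps fix ideas before the fractional case (Python; the well-known 1-D result MSD(tau) = 2D(tau - tE/3) + 2*sigma^2 for full-frame exposure tE is assumed, while the paper's FBM treatment is not reproduced):

        import numpy as np

        rng = np.random.default_rng(4)
        D, dt, sigma, n, sub = 0.25, 0.1, 0.05, 20000, 20

        # Fine-grained Brownian path; each frame averages 'sub' sub-steps
        # (dynamic error) and adds localization noise (static error).
        fine = np.cumsum(rng.normal(0.0, np.sqrt(2 * D * dt / sub), n * sub))
        frames = fine.reshape(n, sub).mean(axis=1) + rng.normal(0.0, sigma, n)

        def msd(x, max_lag):
            return np.array([np.mean((x[k:] - x[:-k]) ** 2)
                             for k in range(1, max_lag + 1)])

        lags = np.arange(1, 11) * dt
        slope, intercept = np.polyfit(lags, msd(frames, 10), 1)
        D_hat = slope / 2.0
        sigma2_hat = (intercept + 2.0 * D_hat * dt / 3.0) / 2.0
        print(D_hat, sigma2_hat)   # close to 0.25 and 0.0025; ignoring either
                                   # error term would bias both estimates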

  5. Statistical approaches to account for false-positive errors in environmental DNA samples.

    PubMed

    Lahoz-Monfort, José J; Guillera-Arroita, Gurutzeta; Tingley, Reid

    2016-05-01

    Environmental DNA (eDNA) sampling is prone to both false-positive and false-negative errors. We review statistical methods to account for such errors in the analysis of eDNA data and use simulations to compare the performance of different modelling approaches. Our simulations illustrate that even low false-positive rates can produce biased estimates of occupancy and detectability. We further show that removing or classifying single PCR detections in an ad hoc manner under the suspicion that such records represent false positives, as sometimes advocated in the eDNA literature, also results in biased estimation of occupancy, detectability and false-positive rates. We advocate alternative approaches to account for false-positive errors that rely on prior information, or the collection of ancillary detection data at a subset of sites using a sampling method that is not prone to false-positive errors. We illustrate the advantages of these approaches over ad hoc classifications of detections and provide practical advice and code for fitting these models in maximum likelihood and Bayesian frameworks. Given the severe bias induced by false-negative and false-positive errors, the methods presented here should be more routinely adopted in eDNA studies.
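
    The headline bias is easy to reproduce with a simulation in the spirit of the ones described (Python sketch; the rates and replicate counts are invented):

        import numpy as np

        rng = np.random.default_rng(5)
        sites, K = 1000, 6                  # sites, PCR replicates per site
        psi, p, f = 0.3, 0.5, 0.03          # occupancy, detection, false-positive rate

        occupied = rng.random(sites) < psi
        det_prob = np.where(occupied, p, f) # false positives at unoccupied sites
        detections = rng.binomial(K, det_prob)

        # Naive rule "any detection => occupied": even a 3% false-positive
        # rate inflates apparent occupancy well above the true 30%.
        print(np.mean(detections > 0))      # about 0.41 in expectation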

  6. Phantom Effects in School Composition Research: Consequences of Failure to Control Biases Due to Measurement Error in Traditional Multilevel Models

    ERIC Educational Resources Information Center

    Televantou, Ioulia; Marsh, Herbert W.; Kyriakides, Leonidas; Nagengast, Benjamin; Fletcher, John; Malmberg, Lars-Erik

    2015-01-01

    The main objective of this study was to quantify the impact of failing to account for measurement error on school compositional effects. Multilevel structural equation models were incorporated to control for measurement error and/or sampling error. Study 1, a large sample of English primary students in Years 1 and 4, revealed a significantly…

  7. Detecting errors and anomalies in computerized materials control and accountability databases

    SciTech Connect

    Whiteson, R.; Hench, K.; Yarbro, T.; Baumgart, C.

    1998-12-31

    The Automated MC&A Database Assessment project is aimed at improving anomaly and error detection in materials control and accountability (MC&A) databases and increasing confidence in the data that they contain. Anomalous data resulting in poor categorization of nuclear material inventories greatly reduces the value of the database information to users. Therefore it is essential that MC&A data be assessed periodically for anomalies or errors. Anomaly detection can identify errors in databases and thus provide assurance of the integrity of data. An expert system has been developed at Los Alamos National Laboratory that examines these large databases for anomalous or erroneous data. For several years, MC&A subject matter experts at Los Alamos have been using this automated system to examine the large amounts of accountability data that the Los Alamos Plutonium Facility generates. These data are collected and managed by the Material Accountability and Safeguards System, a near-real-time computerized nuclear material accountability and safeguards system. This year they have expanded the user base, customizing the anomaly detector for the varying requirements of different groups of users. This paper describes the progress in customizing the expert systems to the needs of the users of the data and reports on their results.

  8. ESPRESSO: taking into account assessment errors on outcome and exposures in power analysis for association studies

    PubMed Central

    Gaye, Amadou; Burton, Thomas W. Y.; Burton, Paul R.

    2015-01-01

    Motivation: Very large studies are required to provide sufficiently big sample sizes for adequately powered association analyses. This can be an expensive undertaking and it is important that an accurate sample size is identified. For more realistic sample size calculation and power analysis, the impact of unmeasured aetiological determinants and the quality of measurement of both outcome and explanatory variables should be taken into account. Conventional methods to analyse power use closed-form solutions that are not flexible enough to cater for all of these elements easily. They often result in a potentially substantial overestimation of the actual power. Results: In this article, we describe the Estimating Sample-size and Power in R by Exploring Simulated Study Outcomes tool that allows assessment errors in power calculation under various biomedical scenarios to be incorporated. We also report a real world analysis where we used this tool to answer an important strategic question for an existing cohort. Availability and implementation: The software is available for online calculation and downloads at http://espresso-research.org. The code is freely available at https://github.com/ESPRESSO-research. Contact: louqman@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25908791
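
    The core of the simulation-based approach is short enough to sketch (in Python rather than the package's R, with an invented linear model; ESPRESSO itself handles far richer outcome and error structures):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(8)

        def simulated_power(n, beta, error_sd, n_sims=500, alpha=0.05):
            # Empirical power: simulate studies in which the analysed exposure
            # carries measurement error, and count significant results.
            hits = 0
            for _ in range(n_sims):
                x = rng.normal(0.0, 1.0, n)             # true exposure
                y = beta * x + rng.normal(0.0, 1.0, n)  # outcome
                w = x + rng.normal(0.0, error_sd, n)    # error-prone exposure
                hits += stats.pearsonr(w, y)[1] < alpha
            return hits / n_sims

        print(simulated_power(300, beta=0.2, error_sd=0.0))  # near closed-form power
        print(simulated_power(300, beta=0.2, error_sd=1.0))  # visibly attenuated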

  9. Filtered kriging for spatial data with heterogeneous measurement error variances.

    PubMed

    Christensen, William F

    2011-09-01

    When predicting values for the measurement-error-free component of an observed spatial process, it is generally assumed that the process has a common measurement error variance. However, it is often the case that each measurement in a spatial data set has a known, site-specific measurement error variance, rendering the observed process nonstationary. We present a simple approach for estimating the semivariogram of the unobservable measurement-error-free process using a bias adjustment of the classical semivariogram formula. We then develop a new kriging predictor that filters the measurement errors. For scenarios where each site's measurement error variance is a function of the process of interest, we recommend an approach that also uses a variance-stabilizing transformation. The properties of the heterogeneous variance measurement-error-filtered kriging (HFK) predictor and variance-stabilized HFK predictor, and the improvement of these approaches over standard measurement-error-filtered kriging are demonstrated using simulation. The approach is illustrated with climate model output from the Hudson Strait area in northern Canada. In the illustration, locations with high or low measurement error variances are appropriately down- or upweighted in the prediction of the underlying process, yielding a realistically smooth picture of the phenomenon of interest.
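
    A sketch of the bias adjustment at the heart of the method (Python; the binning scheme and data are invented). Because the errors are independent of the process, each squared increment overstates the error-free semivariance by the average of the two sites' known error variances, which can simply be subtracted:

        import numpy as np

        def debiased_semivariogram(coords, z, me_var, bin_edges):
            d, g, v = [], [], []
            for i in range(len(z)):
                for j in range(i + 1, len(z)):
                    d.append(np.linalg.norm(coords[i] - coords[j]))
                    g.append(0.5 * (z[i] - z[j]) ** 2)        # classical estimator
                    v.append(0.5 * (me_var[i] + me_var[j]))   # error contribution
            d, g, v = map(np.array, (d, g, v))
            idx = np.digitize(d, bin_edges)
            return [(bin_edges[k - 1:k + 1].mean(), g[m].mean() - v[m].mean())
                    for k in range(1, len(bin_edges)) if (m := idx == k).any()]

        rng = np.random.default_rng(9)
        coords = rng.uniform(0.0, 10.0, (100, 2))
        me_var = rng.uniform(0.01, 0.25, 100)        # known, site-specific variances
        z = np.sin(coords[:, 0]) + rng.normal(0.0, np.sqrt(me_var))
        print(debiased_semivariogram(coords, z, me_var, np.linspace(0.0, 5.0, 6)))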

  10. MEASURING LOCAL GRADIENT AND SKEW QUADRUPOLE ERRORS IN RHIC IRS.

    SciTech Connect

    CARDONA,J.; PEGGS,S.; PILAT,R.; PTITSYN,V.

    2004-07-05

    The measurement of local linear errors at RHIC interaction regions using an "action and phase" analysis of difference orbits has already been presented. This paper evaluates the accuracy of this technique using difference orbits that were taken when known gradient errors and skew quadrupole errors were intentionally introduced. It also presents an action and phase analysis of simulated orbits when controlled errors are intentionally placed in a RHIC simulation model.

  11. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into random errors, and a lack of prediction capability. Therefore, the multiplicative error model is a better choice.
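
    One common form of the multiplicative model is Y = A * X^B * eps, which a log transform makes linear, so the systematic parameters (A, B) and the random-error spread separate cleanly. A sketch under that assumed form (Python, synthetic data):

        import numpy as np

        rng = np.random.default_rng(6)
        A_true, B_true = 1.3, 0.8
        x = rng.gamma(1.2, 8.0, 3000) + 0.1          # "true" daily rain, mm
        y = A_true * x ** B_true * rng.lognormal(0.0, 0.4, x.size)

        B_hat, logA_hat = np.polyfit(np.log(x), np.log(y), 1)
        resid_sd = np.std(np.log(y) - (logA_hat + B_hat * np.log(x)))
        print(np.exp(logA_hat), B_hat, resid_sd)     # near 1.3, 0.8, 0.4: the
        # systematic part (A, B) and the random part (resid_sd) fall out separately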

  12. Reverse attenuation in interaction terms due to covariate measurement error.

    PubMed

    Muff, Stefanie; Keller, Lukas F

    2015-11-01

    Covariate measurement error may cause biases in parameters of regression coefficients in generalized linear models. The influence of measurement error on interaction parameters has, however, only rarely been investigated in depth, and if so, attenuation effects were reported. In this paper, we show that also reverse attenuation of interaction effects may emerge, namely when heteroscedastic measurement error or sampling variances of a mismeasured covariate are present, which are not unrealistic scenarios in practice. Theoretical findings are illustrated with simulations. A Bayesian approach employing integrated nested Laplace approximations is suggested to model the heteroscedastic measurement error and covariate variances, and an application shows that the method is able to reveal approximately correct parameter estimates.

  13. Error analysis for a laser differential confocal radius measurement system.

    PubMed

    Wang, Xu; Qiu, Lirong; Zhao, Weiqian; Xiao, Yang; Wang, Zhongyu

    2015-02-10

    In order to further improve the measurement accuracy of the laser differential confocal radius measurement system (DCRMS) developed previously, a DCRMS error compensation model is established for the error sources, including laser source offset, test sphere position adjustment offset, test sphere figure, and motion error, based on analyzing the influences of these errors on the measurement accuracy of the radius of curvature. Theoretical analyses and experiments indicate that the expanded uncertainty of the DCRMS is reduced to U = 0.13 μm + 0.9 ppm·R (k = 2) through the error compensation model. The error analysis and compensation model established in this study can provide a theoretical foundation for improving the measurement accuracy of the DCRMS.

  14. Thinking Scientifically: Understanding Measurement and Errors

    ERIC Educational Resources Information Center

    Alagumalai, Sivakumar

    2015-01-01

    Thinking scientifically consists of systematic observation, experiment, measurement, and the testing and modification of research questions. In effect, science is about measurement and the understanding of causation. Measurement is an integral part of science and engineering, and has pertinent implications for the human sciences. No measurement is…

  15. Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint

    SciTech Connect

    Stynes, J. K.; Ihas, B.

    2012-04-01

    The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.

  16. Deconvolution Estimation in Measurement Error Models: The R Package decon

    PubMed Central

    Wang, Xiao-Feng; Wang, Bin

    2011-01-01

    Data from many scientific areas often come with measurement error. Density or distribution function estimation from contaminated data and nonparametric regression with errors-in-variables are two important topics in measurement error models. In this paper, we present a new software package decon for R, which contains a collection of functions that use the deconvolution kernel methods to deal with the measurement error problems. The functions allow the errors to be either homoscedastic or heteroscedastic. To make the deconvolution estimators computationally more efficient in R, we adapt the fast Fourier transform algorithm for density estimation with error-free data to the deconvolution kernel estimation. We discuss the practical selection of the smoothing parameter in deconvolution methods and illustrate the use of the package through both simulated and real examples. PMID:21614139

  17. Pressure Change Measurement Leak Testing Errors

    SciTech Connect

    Pryor, Jeff M; Walker, William C

    2014-01-01

    A pressure change test is a common leak testing method used in construction and Non-Destructive Examination (NDE). The test is known as being a fast, simple, and easy to apply evaluation method. While this method may be fairly quick to conduct and require simple instrumentation, the engineering behind this type of test is more complex than is apparent on the surface. This paper discusses some of the more common errors made during the application of a pressure change test and gives the test engineer insight into how to correctly compensate for these factors. The principles discussed here apply to ideal gases such as air or other monoatomic or diatomic gases; however, these same principles can be applied to polyatomic gases or liquid flow rate with altered formulas specific to those types of tests using the same methodology.
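
    The central compensation is for temperature: by the ideal gas law the quantity of gas is n = PV/RT, so a pressure drop indicates leakage only to the extent that P/T has fallen. A minimal sketch (Python, illustrative numbers):

        R = 8.314  # J/(mol K)

        def leak_rate(V, p1, T1, p2, T2, dt):
            # Moles lost between two (pressure, temperature) readings.
            dn = (V / R) * (p1 / T1 - p2 / T2)
            return dn / dt  # mol/s; negative means apparent in-leakage

        # A 6 kPa drop over 24 h in a 0.5 m^3 vessel looks like a large leak,
        # but with the gas cooling from 296 K to 293 K, about 4 kPa of the
        # drop is thermal contraction; only the remainder is leakage.
        print(leak_rate(0.5, 402_000, 296.0, 396_000, 293.0, 86_400))  # ~4.6e-6 mol/s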

  18. Model selection for marginal regression analysis of longitudinal data with missing observations and covariate measurement error.

    PubMed

    Shen, Chung-Wei; Chen, Yi-Hau

    2015-10-01

    Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. In contrast, naive procedures that ignore these complexities in the data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations.

  19. The Impact of Covariate Measurement Error on Risk Prediction

    PubMed Central

    Khudyakov, Polyna; Gorfine, Malka; Zucker, David; Spiegelman, Donna

    2015-01-01

    In the development of risk prediction models, predictors are often measured with error. In this paper, we investigate the impact of covariate measurement error on risk prediction. We compare the prediction performance using a costly variable measured without error, along with error-free covariates, to that of a model based on an inexpensive surrogate along with the error-free covariates. We consider continuous error-prone covariates with homoscedastic and heteroscedastic errors, and also a discrete misclassified covariate. Prediction performance is evaluated by the area under the receiver operating characteristic curve (AUC), the Brier score (BS), and the ratio of the observed to the expected number of events (calibration). In an extensive numerical study, we show that (i) the prediction model with the error-prone covariate is very well calibrated, even when it is mis-specified; (ii) using the error-prone covariate instead of the true covariate can reduce the AUC and increase the BS dramatically; (iii) adding an auxiliary variable, which is correlated with the error-prone covariate but conditionally independent of the outcome given all covariates in the true model, can improve the AUC and BS substantially. We conclude that reducing measurement error in covariates will improve the ensuing risk prediction, unless the association between the error-free and error-prone covariates is very high. Finally, we demonstrate how a validation study can be used to assess the effect of mismeasured covariates on risk prediction. These concepts are illustrated in a breast cancer risk prediction model developed in the Nurses’ Health Study. PMID:25865315
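
    A toy simulation along the lines of conclusion (ii) can be sketched as follows, assuming scikit-learn is available; the parameter values are illustrative and not taken from the paper.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score, brier_score_loss

        rng = np.random.default_rng(7)
        n = 20000
        x = rng.normal(size=n)                    # error-free "costly" covariate
        w = x + rng.normal(scale=1.0, size=n)     # inexpensive surrogate with error
        y = rng.binomial(1, 1 / (1 + np.exp(-1.5 * x)))

        tr, te = slice(0, n // 2), slice(n // 2, n)
        for name, z in [("true covariate", x), ("error-prone surrogate", w)]:
            fit = LogisticRegression().fit(z[tr].reshape(-1, 1), y[tr])
            p = fit.predict_proba(z[te].reshape(-1, 1))[:, 1]
            # Using the surrogate typically lowers the AUC and raises the Brier score.
            print(name, "AUC:", round(roc_auc_score(y[te], p), 3),
                  "Brier:", round(brier_score_loss(y[te], p), 3))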

  20. Electrochemically modulated separations for material accountability measurements

    SciTech Connect

    Hazelton, Sandra G.; Liezers, Martin; Naes, Benjamin E.; Arrigo, Leah M.; Duckworth, Douglas C.

    2012-07-08

    A method for the accurate and timely analysis of accountable materials is critical for safeguards measurements in nuclear fuel reprocessing plants. Non-destructive analysis (NDA) methods, such as gamma spectroscopy, are desirable for their ability to produce near real-time data. However, the high gamma background of the actinides and fission products in spent nuclear fuel limits the use of NDA for real-time online measurements. A simple approach for at-line separation of materials would facilitate the use of at-line detection methods. A promising at-line separation method for plutonium and uranium is electrochemically modulated separations (EMS). Using an electrochemical cell with an anodized glassy carbon electrode, Pu and U oxidation states can be altered by applying an appropriate voltage. Because the affinity of the actinides for the electrode depends on their oxidation states, selective deposition can be turned “on” and “off” with changes in the applied target electrode voltage. A high surface-area cell was designed in house for the separation of Pu from spent nuclear fuel. The cell is shown to capture over 1 µg of material, increasing the likelihood for gamma spectroscopic detection of Pu extracted from dissolver solutions. The large surface area of the electrode also reduces the impact of competitive interferences from some fission products. Flow rates of up to 1 mL min-1 with >50% analyte deposition efficiency are possible, allowing for rapid separations to be effected. Results from the increased surface-area EMS cell are presented, including dilute dissolver solution simulant data.

  1. Temperature error in radiation thermometry caused by emissivity and reflectance measurement error.

    PubMed

    Corwin, R R; Rodenburghii, A

    1994-04-01

    A general expression for the temperature error caused by emissivity uncertainty is developed, and it is concluded that shorter-wavelength systems produce significantly smaller temperature errors. A technique to measure the normal emissivity is proposed that uses a normally incident light beam and an aperture to collect a portion of the energy reflected from the surface, measuring essentially both the specular component and the biangular reflectance at the edge of the aperture. The theoretical results show that the aperture size need not be substantial to provide reasonably low temperature errors for a broad class of materials and surface reflectance conditions.
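
    Under the Wien approximation (an assumption made here for illustration; the paper develops a general expression), the temperature error induced by an emissivity error takes a simple closed form that makes the shorter-wavelength advantage explicit:

        import numpy as np

        C2 = 1.4388e-2  # second radiation constant, m*K

        def temp_error(t_true, wavelength, eps_true, eps_assumed):
            """Indicated-minus-true temperature under the Wien approximation.

            Solving eps_a * exp(-c2/(lam*T_ind)) = eps * exp(-c2/(lam*T)) gives
            1/T_ind = 1/T - (lam/c2) * ln(eps/eps_a), so the error scales with
            the wavelength lam.
            """
            inv_t_ind = 1.0 / t_true - (wavelength / C2) * np.log(eps_true / eps_assumed)
            return 1.0 / inv_t_ind - t_true

        # A 5% emissivity error at 1000 K: about -2 K at 0.65 um versus about
        # -34 K at 10 um, illustrating the shorter-wavelength advantage.
        for lam in (0.65e-6, 10e-6):
            print(lam, temp_error(1000.0, lam, eps_true=0.95, eps_assumed=1.00))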

  2. Comparing measurement error correction methods for rate-of-change exposure variables in survival analysis.

    PubMed

    Veronesi, Giovanni; Ferrario, Marco M; Chambless, Lloyd E

    2013-12-01

    In this article we focus on comparing measurement error correction methods for rate-of-change exposure variables in survival analysis, when longitudinal data are observed prior to the follow-up time. Motivational examples include the analysis of the association between changes in cardiovascular risk factors and subsequent onset of coronary events. We derive a measurement error model for the rate of change, estimated through subject-specific linear regression, assuming an additive measurement error model for the time-specific measurements. The rate of change is then included as a time-invariant variable in a Cox proportional hazards model, adjusting for the first time-specific measurement (baseline) and an error-free covariate. In a simulation study, we compared bias, standard deviation and mean squared error (MSE) for the regression calibration (RC) and the simulation-extrapolation (SIMEX) estimators. Our findings indicate that when the amount of measurement error is substantial, RC should be the preferred method, since it has smaller MSE for estimating the coefficients of the rate of change and of the variable measured without error. However, when the amount of measurement error is small, the choice of the method should take into account the event rate in the population and the effect size to be estimated. An application to an observational study, as well as examples of published studies where our model could have been applied, are also provided.
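
    For orientation, the regression calibration idea in its simplest form (a single covariate with classical additive error, error variance identified from replicate measurements) can be sketched as follows; the paper's setting, rate-of-change exposures in a Cox model, is considerably more involved, and the names below are illustrative.

        import numpy as np

        def rc_slope(w1, w2, y):
            """Regression-calibration-corrected slope for y on an error-prone x.

            w1, w2 : replicate measurements w = x + u (classical additive error).
            The naive slope of y on the replicate mean is attenuated by the
            reliability lam = var(x) / var(w_bar); replicates identify var(u).
            """
            w_bar = 0.5 * (w1 + w2)
            var_u = np.var(w1 - w2, ddof=1) / 2.0      # Var(w1 - w2) = 2*var(u)
            var_wbar = np.var(w_bar, ddof=1)
            lam = (var_wbar - var_u / 2.0) / var_wbar  # reliability of the mean
            naive = np.cov(w_bar, y, ddof=1)[0, 1] / var_wbar
            return naive / lam

        # Example: true slope 1.0 with substantial measurement error.
        rng = np.random.default_rng(2)
        x = rng.normal(0, 1, 5000)
        w1, w2 = x + rng.normal(0, 0.8, 5000), x + rng.normal(0, 0.8, 5000)
        y = 1.0 * x + rng.normal(0, 0.5, 5000)
        print(rc_slope(w1, w2, y))   # ~1.0, where the naive slope is ~0.76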

  3. Error analysis in stereo vision for location measurement of 3D point

    NASA Astrophysics Data System (ADS)

    Li, Yunting; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that calculates the uncertainty region of the point location by intersecting the two pixel fields of view, which may produce loose bounds. Moreover, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method to estimate the location error that takes most sources of error into account. We consolidate and simplify all the input errors into five parameters by a rotation transformation. We then use the fast midpoint-method algorithm to derive the mathematical relationships between the target point and these parameters, obtaining the expectation and covariance matrix of the 3D point location, which constitute its uncertainty region. We then trace the propagation of the primitive input errors through the stereo system, covering the whole analysis chain from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our methods.
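
    The midpoint method at the core of the approach has a simple closed form. The sketch below uses illustrative names and a Monte Carlo stand-in for the paper's analytic error propagation, showing how ray-direction errors map to a 3D covariance; rays are assumed already expressed in a common frame.

        import numpy as np

        def midpoint_triangulate(c1, d1, c2, d2):
            """3D point from two viewing rays p_i(s) = c_i + s*d_i (midpoint method).

            Finds the endpoints of the common perpendicular in closed form and
            returns their midpoint.
            """
            w0 = c1 - c2
            a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
            d, e = d1 @ w0, d2 @ w0
            denom = a * c - b * b                # zero only for parallel rays
            s1 = (b * e - c * d) / denom
            s2 = (a * e - b * d) / denom
            return 0.5 * ((c1 + s1 * d1) + (c2 + s2 * d2))

        # Monte Carlo propagation of direction noise to the 3D covariance.
        rng = np.random.default_rng(0)
        c1, c2 = np.array([0.0, 0, 0]), np.array([0.5, 0, 0])
        d1, d2 = np.array([0.1, 0, 1.0]), np.array([-0.3, 0, 1.0])
        pts = np.array([midpoint_triangulate(c1, d1 + rng.normal(0, 1e-3, 3),
                                             c2, d2 + rng.normal(0, 1e-3, 3))
                        for _ in range(2000)])
        cov = np.cov(pts.T)                      # uncertainty region of the point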

  4. Quantitative evaluation of statistical errors in small-angle X-ray scattering measurements

    PubMed Central

    Sedlak, Steffen M.; Bruetzel, Linda K.; Lipfert, Jan

    2017-01-01

    A new model is proposed for the measurement errors incurred in typical small-angle X-ray scattering (SAXS) experiments, which takes into account the setup geometry and physics of the measurement process. The model accurately captures the experimentally determined errors from a large range of synchrotron and in-house anode-based measurements. Its most general formulation gives for the variance of the buffer-subtracted SAXS intensity σ2(q) = [I(q) + const.]/(kq), where I(q) is the scattering intensity as a function of the momentum transfer q; k and const. are fitting parameters that are characteristic of the experimental setup. The model gives a concrete procedure for calculating realistic measurement errors for simulated SAXS profiles. In addition, the results provide guidelines for optimizing SAXS measurements, which are in line with established procedures for SAXS experiments, and enable a quantitative evaluation of measurement errors. PMID:28381982
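
    The stated variance model translates directly into a procedure for adding realistic errors to a simulated profile; a minimal sketch follows, with illustrative values for the setup-specific fitting parameters k and const.

        import numpy as np

        def add_saxs_noise(q, intensity, k, const, rng=None):
            """Add realistic errors to a simulated, buffer-subtracted SAXS profile.

            Implements the paper's model Var[I(q)] = (I(q) + const) / (k * q),
            where k and const are characteristic of the experimental setup and
            would normally be fitted to measured errors.
            """
            rng = rng or np.random.default_rng()
            sigma = np.sqrt((intensity + const) / (k * q))
            return intensity + rng.normal(0.0, sigma), sigma

        # Example: toy Guinier-like profile on a typical q range (1/Angstrom).
        q = np.linspace(0.01, 0.5, 400)
        profile = 1e3 * np.exp(-(q * 30) ** 2 / 3)
        noisy, sigma = add_saxs_noise(q, profile, k=5e4, const=10.0)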

  5. System Measures Errors Between Time-Code Signals

    NASA Technical Reports Server (NTRS)

    Cree, David; Venkatesh, C. N.

    1993-01-01

    System measures timing errors between signals produced by three asynchronous time-code generators. Errors between 1-second clock pulses resolved to 2 microseconds. Basic principle of computation of timing errors as follows: central processing unit in microcontroller constantly monitors time data received from time-code generators for changes in 1-second time-code intervals. In response to any such change, microprocessor buffers count of 16-bit internal timer.

  6. A conditional likelihood approach for regression analysis using biomarkers measured with batch-specific error.

    PubMed

    Wang, Ming; Flanders, W Dana; Bostick, Roberd M; Long, Qi

    2012-12-20

    Measurement error is common in epidemiological and biomedical studies. When biomarkers are measured in batches or groups, measurement error is potentially correlated within each batch or group. In regression analysis, most existing methods are not applicable in the presence of batch-specific measurement error in predictors. We propose a robust conditional likelihood approach to account for batch-specific error in predictors when the batch effect is additive and is the predominant source of error, which requires no assumptions on the distribution of measurement error. Although a regression model with batch as a categorical covariate yields the same parameter estimates as the proposed conditional likelihood approach for linear regression, this result does not hold in general for all generalized linear models, in particular, logistic regression. Our simulation studies show that the conditional likelihood approach achieves better finite sample performance than the regression calibration approach or a naive approach without adjustment for measurement error. In the case of logistic regression, our proposed approach is shown to also outperform the regression approach with batch as a categorical covariate. In addition, we also examine a 'hybrid' approach combining the conditional likelihood method and the regression calibration method, which is shown in simulations to achieve good performance in the presence of both batch-specific and measurement-specific errors. We illustrate our method by using data from a colorectal adenoma study.
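
    For linear regression with an additive batch effect, the conditional likelihood estimate coincides with the within-batch (demeaned) least-squares estimator, as the abstract notes; a minimal sketch of that special case follows (the logistic case, where the equivalence fails, is not shown). Names are illustrative.

        import numpy as np

        def within_batch_ols(x, y, batch):
            """Slope estimate robust to additive batch-specific error in x.

            Demeaning x and y within each batch removes any batch-level shift,
            which for linear regression matches the conditional likelihood
            (and the batch-dummies regression) estimate of the slope.
            """
            x, y = np.asarray(x, float), np.asarray(y, float)
            xc, yc = x.copy(), y.copy()
            for b in np.unique(batch):
                idx = batch == b
                xc[idx] -= x[idx].mean()
                yc[idx] -= y[idx].mean()
            return (xc @ yc) / (xc @ xc)

        # Example: a biomarker measured in 20 batches with batch-level shifts.
        rng = np.random.default_rng(3)
        batch = np.repeat(np.arange(20), 10)
        x_true = rng.normal(size=200)
        x_obs = x_true + rng.normal(0, 0.8, 20)[batch]   # batch-specific error
        y = 2.0 * x_true + rng.normal(0, 0.5, 200)
        print(within_batch_ols(x_obs, y, batch))          # ~2.0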

  7. Conditional Standard Errors of Measurement for Composite Scores Using IRT

    ERIC Educational Resources Information Center

    Kolen, Michael J.; Wang, Tianyou; Lee, Won-Chan

    2012-01-01

    Composite scores are often formed from test scores on educational achievement test batteries to provide a single index of achievement over two or more content areas or two or more item types on that test. Composite scores are subject to measurement error, and as with scores on individual tests, the amount of error variability typically depends on…

  8. Laser Doppler anemometer measurements using nonorthogonal velocity components - Error estimates

    NASA Technical Reports Server (NTRS)

    Orloff, K. L.; Snyder, P. K.

    1982-01-01

    Laser Doppler anemometers (LDAs) that are arranged to measure nonorthogonal velocity components (from which orthogonal components are computed through transformation equations) are more susceptible to calibration and sampling errors than are systems with uncoupled channels. In this paper uncertainty methods and estimation theory are used to evaluate, respectively, the systematic and statistical errors that are present when such devices are applied to the measurement of mean velocities in turbulent flows. Statistical errors are estimated for two-channel LDA data that are either correlated or uncorrelated. For uncorrelated data the directional uncertainty of the measured velocity vector is considered for applications where mean streamline patterns are desired.

  9. Laser Doppler anemometer measurements using nonorthogonal velocity components: error estimates.

    PubMed

    Orloff, K L; Snyder, P K

    1982-01-15

    Laser Doppler anemometers (LDAs) that are arranged to measure nonorthogonal velocity components (from which orthogonal components are computed through transformation equations) are more susceptible to calibration and sampling errors than are systems with uncoupled channels. In this paper uncertainty methods and estimation theory are used to evaluate, respectively, the systematic and statistical errors that are present when such devices are applied to the measurement of mean velocities in turbulent flows. Statistical errors are estimated for two-channel LDA data that are either correlated or uncorrelated. For uncorrelated data the directional uncertainty of the measured velocity vector is considered for applications where mean streamline patterns are desired.

  10. Hypothesis testing in an errors-in-variables model with heteroscedastic measurement errors.

    PubMed

    de Castro, Mário; Galea, Manuel; Bolfarine, Heleno

    2008-11-10

    In many epidemiological studies it is common to resort to regression models relating incidence of a disease and its risk factors. The main goal of this paper is to consider inference on such models with error-prone observations and variances of the measurement errors changing across observations. We suppose that the observations follow a bivariate normal distribution and the measurement errors are normally distributed. Aggregate data allow the estimation of the error variances. Maximum likelihood estimates are computed numerically via the EM algorithm. Consistent estimation of the asymptotic variance of the maximum likelihood estimators is also discussed. Test statistics are proposed for testing hypotheses of interest. Further, we implement a simple graphical device that enables an assessment of the model's goodness of fit. Results of simulations concerning the properties of the test statistics are reported. The approach is illustrated with data from the WHO MONICA Project on cardiovascular disease.

  11. Validation of Large-Scale Geophysical Estimates Using In Situ Measurements with Representativeness Error

    NASA Astrophysics Data System (ADS)

    Konings, A. G.; Gruber, A.; Mccoll, K. A.; Alemohammad, S. H.; Entekhabi, D.

    2015-12-01

    Validating large-scale estimates of geophysical variables by comparing them to in situ measurements neglects the fact that these in situ measurements are not generally representative of the larger area. That is, in situ measurements contain some `representativeness error'. They also have their own sensor errors. The naïve approach of characterizing the errors of a remote sensing or modeling dataset by comparison to in situ measurements thus leads to error estimates that are spuriously inflated by the representativeness and other errors in the in situ measurements. Nevertheless, this naïve approach is still very common in the literature. In this work, we introduce an alternative estimator of the large-scale dataset error that explicitly takes into account the fact that the in situ measurements have some unknown error. The performance of the two estimators is then compared in the context of soil moisture datasets under different conditions for the true soil moisture climatology and dataset biases. The new estimator is shown to lead to a more accurate characterization of the dataset errors under the most common conditions. If a third dataset is available, the principles of the triple collocation method can be used to determine the errors of both the large-scale estimates and in situ measurements. However, triple collocation requires that the errors in all datasets are uncorrelated with each other and with the truth. We show that even when the assumptions of triple collocation are violated, a triple collocation-based validation approach may still be more accurate than a naïve comparison to in situ measurements that neglects representativeness errors.
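
    A minimal sketch of the triple collocation estimator mentioned above, under its standard assumptions (errors mutually uncorrelated and uncorrelated with the truth, no multiplicative biases); names and values are illustrative.

        import numpy as np

        def triple_collocation(x, y, z):
            """Error variances of three collocated datasets (e.g., satellite,
            model, in situ), assuming errors are mutually uncorrelated and
            uncorrelated with the true signal.
            """
            q = np.cov(np.vstack([x, y, z]))
            var_x = q[0, 0] - q[0, 1] * q[0, 2] / q[1, 2]
            var_y = q[1, 1] - q[0, 1] * q[1, 2] / q[0, 2]
            var_z = q[2, 2] - q[0, 2] * q[1, 2] / q[0, 1]
            return var_x, var_y, var_z

        # Example: the in situ series (z) carries sensor + representativeness error.
        rng = np.random.default_rng(5)
        truth = rng.normal(0.25, 0.06, 3000)              # soil moisture signal
        x = truth + rng.normal(0, 0.03, 3000)             # satellite retrieval
        y = truth + rng.normal(0, 0.02, 3000)             # land-surface model
        z = truth + rng.normal(0, 0.05, 3000)             # in situ measurement
        print(triple_collocation(x, y, z))                # ~(9e-4, 4e-4, 2.5e-3)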

  12. Non-Gaussian Error Distributions of LMC Distance Moduli Measurements

    NASA Astrophysics Data System (ADS)

    Crandall, Sara; Ratra, Bharat

    2015-12-01

    We construct error distributions for a compilation of 232 Large Magellanic Cloud (LMC) distance moduli values from de Grijs et al. that give an LMC distance modulus of (m - M)0 = 18.49 ± 0.13 mag (median and 1σ symmetrized error). Central estimates found from weighted mean and median statistics are used to construct the error distributions. The weighted mean error distribution is non-Gaussian—flatter and broader than Gaussian—with more (less) probability in the tails (center) than is predicted by a Gaussian distribution; this could be the consequence of unaccounted-for systematic uncertainties. The median statistics error distribution, which does not make use of the individual measurement errors, is also non-Gaussian—more peaked than Gaussian—with less (more) probability in the tails (center) than is predicted by a Gaussian distribution; this could be the consequence of publication bias and/or the non-independence of the measurements. We also construct the error distributions of 247 SMC distance moduli values from de Grijs & Bono. We find a central estimate of (m - M)0 = 18.94 ± 0.14 mag (median and 1σ symmetrized error), and similar probabilities for the error distributions.

  13. Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes

    ERIC Educational Resources Information Center

    Zavorsky, Gerald S.

    2010-01-01

    Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within-subject standard deviation.…
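
    A minimal sketch of the repeatability computation described above, assuming a balanced design with the same number of repeats per subject; names and values are illustrative.

        import numpy as np

        def repeatability(measurements):
            """Repeatability coefficient from repeated measurements per subject.

            measurements: 2D array, one row per subject, columns are repeats.
            The within-subject SD s_w is the square root of the pooled
            within-subject variance; repeatability = 2.77 * s_w, the value
            below which the difference between two measurements on the same
            subject is expected to fall 95% of the time (2.77 = 1.96*sqrt(2)).
            """
            m = np.asarray(measurements, float)
            s_w = np.sqrt(m.var(axis=1, ddof=1).mean())  # pooled within-subject var
            return 2.77 * s_w

        # Example: 15 subjects, 3 repeats each, true within-subject SD = 0.4.
        rng = np.random.default_rng(11)
        subj_means = rng.normal(10, 2, size=(15, 1))
        data = subj_means + rng.normal(0, 0.4, size=(15, 3))
        print(repeatability(data))                        # ~2.77 * 0.4 = 1.1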

  14. Temperature measurement error simulation of the pure rotational Raman lidar

    NASA Astrophysics Data System (ADS)

    Jia, Jingyu; Huang, Yong; Wang, Zhirui; Yi, Fan; Shen, Jianglin; Jia, Xiaoxing; Chen, Huabin; Yang, Chuan; Zhang, Mingyang

    2015-11-01

    Temperature represents the atmospheric thermodynamic state. Measuring the atmospheric temperature accurately and precisely is very important for understanding the physics of atmospheric processes, and lidar has several advantages for atmospheric temperature measurement. Based on the lidar equation and the theory of pure rotational Raman (PRR) scattering, we have simulated the temperature measurement errors of a double-grating-polychromator (DGP) based PRR lidar. First, without considering the attenuation terms of atmospheric transmittance and range in the lidar equation, we simulated the temperature measurement errors as influenced by the beam-splitting system parameters, such as the center wavelength, the receiving bandwidth, and the atmospheric temperature. We analyzed three types of temperature measurement errors in theory and propose several design methods for the beam-splitting system to reduce them. Second, we simulated the temperature measurement error profiles using the lidar equation. As the lidar power-aperture product is fixed, the main target of our lidar system is to reduce the statistical and leakage errors.

  15. Space acceleration measurement system triaxial sensor head error budget

    NASA Technical Reports Server (NTRS)

    Thomas, John E.; Peters, Rex B.; Finley, Brian D.

    1992-01-01

    The objective of the Space Acceleration Measurement System (SAMS) is to measure and record the microgravity environment for a given experiment aboard the Space Shuttle. To accomplish this, SAMS uses remote triaxial sensor heads (TSH) that can be mounted directly on or near an experiment. The errors of the TSH are reduced by calibrating it before and after each flight. The associated error budget for the calibration procedure is discussed here.

  16. Identification and Minimization of Errors in Doppler Global Velocimetry Measurements

    NASA Technical Reports Server (NTRS)

    Meyers, James F.; Lee, Joseph W.

    2000-01-01

    A systematic laboratory investigation was conducted to identify potential measurement error sources in Doppler Global Velocimetry technology. Once identified, methods were developed to eliminate or at least minimize the effects of these errors. The areas considered included the iodine vapor cell, optical alignment, scattered light characteristics, noise sources, and the laser. Upon completion, the demonstrated measurement uncertainty was reduced to 0.5 m/sec.

  17. Measuring worst-case errors in a robot workcell

    SciTech Connect

    Simon, R.W.; Brost, R.C.; Kholwadwala, D.K.

    1997-10-01

    Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors.

  18. Methods to Assess Measurement Error in Questionnaires of Sedentary Behavior

    PubMed Central

    Sampson, Joshua N; Matthews, Charles E; Freedman, Laurence; Carroll, Raymond J.; Kipnis, Victor

    2015-01-01

    Sedentary behavior has already been associated with mortality, cardiovascular disease, and cancer. Questionnaires are an affordable tool for measuring sedentary behavior in large epidemiological studies. Here, we introduce and evaluate two statistical methods for quantifying measurement error in questionnaires. Accurate estimates are needed for assessing questionnaire quality. The two methods are designed for validation studies that measure a sedentary behavior by both questionnaire and accelerometer on multiple days. The first method fits a reduced model by assuming the accelerometer is without error, while the second method fits a more complete model that allows both measures to have error. Because accelerometers tend to be highly accurate, we show that ignoring the accelerometer's measurement error can result in more accurate estimates of measurement error in some scenarios. In this manuscript, we derive asymptotic approximations for the mean squared error of the estimated parameters from both methods, evaluate their dependence on study design and behavior characteristics, and offer an R package so investigators can make an informed choice between the two methods. We demonstrate the difference between the two methods in a recent validation study comparing Previous Day Recalls (PDR) to an accelerometer-based ActivPal. PMID:27340315

  19. Correcting a fundamental error in greenhouse gas accounting related to bioenergy

    PubMed Central

    Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc; Cocco, Pierluigi; Desaubies, Yves; Henze, Mogens; Hertel, Ole; Johnson, Richard K.; Kastrup, Ulrike; Laconte, Pierre; Lange, Eckart; Novak, Peter; Paavola, Jouni; Reenberg, Anette; van den Hove, Sybille; Vermeire, Theo; Wadhams, Peter; Searchinger, Timothy

    2012-01-01

    Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of ‘additional biomass’ – biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy – can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy. PMID:23576835

  20. Correcting a fundamental error in greenhouse gas accounting related to bioenergy.

    PubMed

    Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc; Cocco, Pierluigi; Desaubies, Yves; Henze, Mogens; Hertel, Ole; Johnson, Richard K; Kastrup, Ulrike; Laconte, Pierre; Lange, Eckart; Novak, Peter; Paavola, Jouni; Reenberg, Anette; van den Hove, Sybille; Vermeire, Theo; Wadhams, Peter; Searchinger, Timothy

    2012-06-01

    Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of 'additional biomass' - biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy - can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy.

  1. Error-tradeoff and error-disturbance relations for incompatible quantum measurements.

    PubMed

    Branciard, Cyril

    2013-04-23

    Heisenberg's uncertainty principle is one of the main tenets of quantum theory. Nevertheless, and despite its fundamental importance for our understanding of quantum foundations, there has been some confusion in its interpretation: Although Heisenberg's first argument was that the measurement of one observable on a quantum state necessarily disturbs another incompatible observable, standard uncertainty relations typically bound the indeterminacy of the outcomes when either one or the other observable is measured. In this paper, we quantify precisely Heisenberg's intuition. Even if two incompatible observables cannot be measured together, one can still approximate their joint measurement, at the price of introducing some errors with respect to the ideal measurement of each of them. We present a tight relation characterizing the optimal tradeoff between the error on one observable vs. the error on the other. As a particular case, our approach allows us to characterize the disturbance of an observable induced by the approximate measurement of another one; we also derive a stronger error-disturbance relation for this scenario.

  2. Errors Associated with the Direct Measurement of Radionuclides in Wounds

    SciTech Connect

    Hickman, D P

    2006-03-02

    Work in radiation areas can occasionally result in accidental wounds containing radioactive materials. When a wound is incurred within a radiological area, the presence of radioactivity in the wound needs to be confirmed to determine if additional remedial action needs to be taken. Commonly used radiation area monitoring equipment is poorly suited for measurement of radioactive material buried within the tissue of the wound. The Lawrence Livermore National Laboratory (LLNL) In Vivo Measurement Facility has constructed a portable wound counter that provides sufficient detection of radioactivity in wounds as shown in Fig. 1. The LLNL wound measurement system is specifically designed to measure low energy photons that are emitted from uranium and transuranium radionuclides. The portable wound counting system uses a 2.5 cm diameter by 1 mm thick NaI(Tl) detector. The detector is connected to a Canberra NaI InSpector™. The InSpector interfaces with an IBM ThinkPad laptop computer, which operates under Genie 2000 software. The wound counting system is maintained and used at the LLNL In Vivo Measurement Facility. The hardware is designed to be portable and is occasionally deployed to respond to the LLNL Health Services facility or local hospitals for examination of personnel that may have radioactive materials within a wound. The typical detection level using the LLNL portable wound counter in a low background area is 0.4 nCi to 0.6 nCi, assuming a near-zero mass source. This paper documents the systematic errors associated with in vivo measurement of radioactive materials buried within wounds using the LLNL portable wound measurement system. These errors are divided into two basic categories, calibration errors and in vivo wound measurement errors. Within these categories, there are errors associated with particle self-absorption of photons, overlying tissue thickness, source distribution within the wound, and count errors. These errors have been examined and…

  3. Filter induced errors in laser anemometer measurements using counter processors

    NASA Technical Reports Server (NTRS)

    Oberle, L. G.; Seasholtz, R. G.

    1985-01-01

    Simulations of laser Doppler anemometer (LDA) systems have focused primarily on noise studies or biasing errors. Another possible source of error is the choice of filter types and filter cutoff frequencies. Before it is applied to the counter portion of the signal processor, a Doppler burst is filtered to remove the pedestal and to reduce noise in the frequency bands outside the region in which the signal occurs. Filtering, however, introduces errors into the measurement of the frequency of the input signal which leads to inaccurate results. Errors caused by signal filtering in an LDA counter-processor data acquisition system are evaluated and filters for a specific application which will reduce these errors are chosen.

  4. Corneal topography measurement by means of radial shearing interference: Part III - measurement errors

    NASA Astrophysics Data System (ADS)

    Kowalik, Waldemar W.; Garncarz, Beata E.; Kasprzak, Henryk T.

    This work presents the results of computer simulations that define the requirements the measurement conditions must satisfy for the measurement results to remain within allowable errors. They specify the allowable measurement (interferogram scanning) errors and the conditions the computer programs must fulfill so that the errors introduced by the mathematical operations and the computer are minimized.

  5. Measurement uncertainty evaluation of conicity error inspected on CMM

    NASA Astrophysics Data System (ADS)

    Wang, Dongxia; Song, Aiguo; Wen, Xiulan; Xu, Youxiong; Qiao, Guifang

    2016-01-01

    The cone is widely used in mechanical design for rotation, centering, and fixing. Whether the conicity error can be measured and evaluated accurately will directly influence its assembly accuracy and working performance. According to the new generation geometrical product specification (GPS), the error and its measurement uncertainty should be evaluated together. The mathematical model of the minimum zone conicity error is established and an improved immune evolutionary algorithm (IIEA) is proposed to search for the conicity error. In the IIEA, initial antibodies are first generated by using quasi-random sequences and two kinds of affinities are calculated. Then, each antibody clone is generated and they are self-adaptively mutated so as to maintain diversity. Similar antibodies are suppressed and new random antibodies are generated. Because the mathematical model of conicity error is strongly nonlinear and the input quantities are not independent, it is difficult to use the Guide to the expression of uncertainty in measurement (GUM) method to evaluate measurement uncertainty. An adaptive Monte Carlo method (AMCM) is proposed to estimate measurement uncertainty, in which the number of Monte Carlo trials is selected adaptively and the quality of the numerical results is directly controlled. The cone part was machined on a CK6140 lathe and measured on a Miracle NC 454 Coordinate Measuring Machine (CMM). The experiment results confirm that the proposed method not only can search for the approximate solution of the minimum zone conicity error (MZCE) rapidly and precisely, but also can evaluate measurement uncertainty and give control variables with an expected numerical tolerance. The conicity errors computed by the proposed method are 20%-40% less than those computed by the NC454 CMM software, and the evaluation accuracy improves significantly.

  6. Laser tracker error determination using a network measurement

    NASA Astrophysics Data System (ADS)

    Hughes, Ben; Forbes, Alistair; Lewis, Andrew; Sun, Wenjuan; Veal, Dan; Nasr, Karim

    2011-04-01

    We report on a fast, easily implemented method to determine all the geometrical alignment errors of a laser tracker, to high precision. The technique requires no specialist equipment and can be performed in less than an hour. The technique is based on the determination of parameters of a geometric model of the laser tracker, using measurements of a set of fixed target locations, from multiple locations of the tracker. After fitting of the model parameters to the observed data, the model can be used to perform error correction of the raw laser tracker data or to derive correction parameters in the format of the tracker manufacturer's internal error map. In addition to determination of the model parameters, the method also determines the uncertainties and correlations associated with the parameters. We have tested the technique on a commercial laser tracker in the following way. We disabled the tracker's internal error compensation, and used a five-position, fifteen-target network to estimate all the geometric errors of the instrument. Using the error map generated from this network test, the tracker was able to pass a full performance validation test, conducted according to a recognized specification standard (ASME B89.4.19-2006). We conclude that the error correction determined from the network test is as effective as the manufacturer's own error correction methodologies.

  7. Stronger error disturbance relations for incompatible quantum measurements

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Chiranjib; Shukla, Namrata; Pati, Arun Kumar

    2016-03-01

    We formulate a new error-disturbance relation, which is free from explicit dependence upon variances in observables. This error-disturbance relation shows improvement over the one provided by the Branciard inequality and the Ozawa inequality for some initial states and for a particular class of joint measurements under consideration. We also prove a modified form of Ozawa's error-disturbance relation. The latter relation provides a tighter bound compared to the Ozawa and the Branciard inequalities for a small number of states.

  8. Determination of drill paths for percutaneous cochlear access accounting for target positioning error

    NASA Astrophysics Data System (ADS)

    Noble, Jack H.; Warren, Frank M.; Labadie, Robert F.; Dawant, Benoit; Fitzpatrick, J. Michael

    2007-03-01

    In cochlear implant surgery an electrode array is permanently implanted to stimulate the auditory nerve and allow deaf people to hear. Current surgical techniques require wide excavation of the mastoid region of the temporal bone and one to three hours time to avoid damage to vital structures. Recently a far less invasive approach has been proposed: percutaneous cochlear access, in which a single hole is drilled from skull surface to the cochlea. The drill path is determined by attaching a fiducial system to the patient's skull and then choosing, on a pre-operative CT, an entry point and a target point. The drill is advanced to the target, the electrodes placed through the hole, and a stimulator implanted at the surface of the skull. The major challenge is the determination of a safe and effective drill path, which with high probability avoids specific vital structures (the facial nerve, the ossicles, and the external ear canal) and arrives at the basal turn of the cochlea. These four features lie within a few millimeters of each other, the drill is one millimeter in diameter, and errors in the determination of the target position are on the order of 0.5 mm root mean square. Thus, path selection is both difficult and critical to the success of the surgery. This paper presents a method for finding optimally safe and effective paths while accounting for target positioning error.

  9. Beam induced vacuum measurement error in BEPC II

    NASA Astrophysics Data System (ADS)

    Huang, Tao; Xiao, Qiong; Peng, XiaoHua; Wang, HaiJing

    2011-12-01

    When the beam in the BEPCII storage ring aborts suddenly, the pressure measured by the cold cathode gauges and ion pumps drops suddenly and then decreases gradually to the base pressure. This shows that there is a beam-induced positive error in the pressure measurement during beam operation, the error being the difference between the measured and real pressures. Right after the beam aborts, the error disappears immediately and the measured pressure equals the real pressure. For one gauge, we can fit a non-linear pressure-time curve to its measured pressure data starting 20 seconds after a sudden beam abort. From this negative exponential pumping-down curve, the real pressure at the moment the beam starts aborting is extrapolated. With data from several sudden beam aborts, we obtained the errors of that gauge at different beam currents and found that the error is directly proportional to the beam current, as expected. A linear fit then gives the proportionality coefficient of the equation we derived to evaluate the real pressure whenever the beam, at varying currents, is on.
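
    The extrapolation step described above can be sketched as a simple curve fit, assuming SciPy is available; the pressure values and time constant below are illustrative and not taken from the paper.

        import numpy as np
        from scipy.optimize import curve_fit

        def pump_down(t, p_base, a, tau):
            """Negative-exponential pumping-down curve after a beam abort."""
            return p_base + a * np.exp(-t / tau)

        # Fit measured pressures recorded from 20 s after the abort onward,
        # then extrapolate back to t = 0 (abort time) for the real pressure.
        rng = np.random.default_rng(4)
        t = np.linspace(20, 300, 100)                       # seconds after abort
        p = 2e-7 + 5e-7 * np.exp(-t / 60) + rng.normal(0, 5e-9, t.size)
        popt, _ = curve_fit(pump_down, t, p, p0=(1e-7, 1e-7, 50.0))
        p_real_at_abort = pump_down(0.0, *popt)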

  10. 50 CFR 622.49 - Accountability measures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE CARIBBEAN, GULF, AND SOUTH ATLANTIC Management Measures.... (5) Black sea bass—(i) Commercial fishery. If commercial landings, as estimated by the SRD, reach or... the recreational ACL of 409,000 lb (185,519 kg), gutted weight, and black sea bass are...

  11. 50 CFR 648.233 - Spiny dogfish Accountability Measures (AMs).

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 50 Wildlife and Fisheries 10 2011-10-01 2011-10-01 false Spiny dogfish Accountability Measures... Management Measures for the Spiny Dogfish Fishery § 648.233 Spiny dogfish Accountability Measures (AMs). (a... dogfish on that date for the remainder of that semi-annual period by publishing notification in...

  12. 50 CFR 648.233 - Spiny dogfish Accountability Measures (AMs).

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Spiny dogfish Accountability Measures... Management Measures for the Spiny Dogfish Fishery § 648.233 Spiny dogfish Accountability Measures (AMs). (a... quota described in § 648.232 will be harvested and shall close the EEZ to fishing for spiny dogfish...

  13. 50 CFR 648.233 - Spiny dogfish Accountability Measures (AMs).

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Spiny dogfish Accountability Measures... Management Measures for the Spiny Dogfish Fishery § 648.233 Spiny dogfish Accountability Measures (AMs). (a... dogfish on that date for the remainder of that semi-annual period by publishing notification in...

  14. 50 CFR 648.233 - Spiny dogfish Accountability Measures (AMs).

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Spiny dogfish Accountability Measures... Management Measures for the Spiny Dogfish Fishery § 648.233 Spiny dogfish Accountability Measures (AMs). (a... dogfish on that date for the remainder of that semi-annual period by publishing notification in...

  15. Estimation of errors in diffraction data measured by CCD area detectors

    PubMed Central

    Waterman, David; Evans, Gwyndaf

    2010-01-01

    Current methods for diffraction-spot integration from CCD area detectors typically underestimate the errors in the measured intensities. In an attempt to understand fully and identify correctly the sources of all contributions to these errors, a simulation of a CCD-based area-detector module has been produced to address the problem of correct handling of data from such detectors. Using this simulation, it has been shown how, and by how much, measurement errors are underestimated. A model of the detector statistics is presented and an adapted summation integration routine that takes this into account is shown to result in more realistic error estimates. In addition, the effect of correlations between pixels on two-dimensional profile fitting is demonstrated and the problems surrounding improvements to profile-fitting algorithms are discussed. In practice, this requires knowledge of the expected correlation between pixels in the image. PMID:27006649

  16. Error Evaluation of Methyl Bromide Aerodynamic Flux Measurements

    USGS Publications Warehouse

    Majewski, M.S.

    1997-01-01

    Methyl bromide volatilization fluxes were calculated for a tarped and a nontarped field using 2 and 4 hour sampling periods. These field measurements were averaged in 8, 12, and 24 hour increments to simulate longer sampling periods. The daily flux profiles were progressively smoothed, and the cumulative volatility losses increased by 20 to 30% with each longer sampling period. Error associated with the original flux measurements was determined from linear regressions of measured wind speed and air concentration as a function of height, and averaged approximately 50%. The high errors stemmed from long application times, which produced a nonuniform source strength, and from variable tarp permeability, which is influenced by temperature, moisture, and thickness. The increase in cumulative volatilization losses that resulted from longer sampling periods was within the experimental error of the flux determination method.

  17. Selected error sources in resistance measurements on superconductors

    NASA Astrophysics Data System (ADS)

    García-Vázquez, Valentín; Pérez-Amaro, Neftalí; Canizo-Cabrera, A.; Cumplido-Espíndola, B.; Martínez-Hernández, R.; Abarca-Ramírez, M. A.

    2001-08-01

    In order to investigate the causes of some of the unwanted effects observed in resistance versus temperature profiles, a variety of sources of error in resistance measurements on superconductors, using a standard four-probe configuration, have been studied. A piece of superconducting Y1Ba2Cu3O7-x ceramic material was used as the test sample, and the resulting effects on both the accuracy and the precision of its temperature-dependent resistance are reported here. The measurement error sources studied include thermal EMFs, temperature sweep rates, Faraday currents, electrical-contact failures at the sample's surface, thermal contractions of mechanically attached instrument wires, external electromagnetic fields, and slow sampling rates during data acquisition. Details of the experimental setup and its measurement error function are also given.

  18. Spatial regression with covariate measurement error: A semiparametric approach.

    PubMed

    Huque, Md Hamidul; Bondell, Howard D; Carroll, Raymond J; Ryan, Louise M

    2016-09-01

    Spatial data have become increasingly common in epidemiology and public health research thanks to advances in GIS (Geographic Information Systems) technology. In health research, for example, it is common for epidemiologists to incorporate geographically indexed data into their studies. In practice, however, the spatially defined covariates are often measured with error. Naive estimators of regression coefficients are attenuated if measurement error is ignored. Moreover, the classical measurement error theory is inapplicable in the context of spatial modeling because of the presence of spatial correlation among the observations. We propose a semiparametric regression approach to obtain bias-corrected estimates of regression parameters and derive their large sample properties. We evaluate the performance of the proposed method through simulation studies and illustrate using data on Ischemic Heart Disease (IHD). Both simulation and practical application demonstrate that the proposed method can be effective in practice.

  19. A comparison between traditional and measurement-error growth models for weakfish Cynoscion regalis

    PubMed Central

    Jiao, Yan

    2016-01-01

    Inferring growth for aquatic species is dependent upon accurate descriptions of age-length relationships, which may be degraded by measurement error in observed ages. Ageing error arises from biased and/or imprecise age determinations as a consequence of misinterpretation by readers or inability of ageing structures to accurately reflect true age. A Bayesian errors-in-variables (EIV) approach (i.e., measurement-error modeling) can account for ageing uncertainty during nonlinear growth curve estimation by allowing observed ages to be parametrically modeled as random deviates. Information on the latent age composition then comes from the specified prior distribution, which represents the true age structure of the sampled fish population. In this study, weakfish growth was modeled by means of traditional and measurement-error von Bertalanffy growth curves using otolith- or scale-estimated ages. Age determinations were assumed to be log-normally distributed, thereby incorporating multiplicative error with respect to ageing uncertainty. The prior distribution for true age was assumed to be uniformly distributed between ±4 of the observed age (yr) for each individual. Measurement-error growth models described weakfish that reached larger sizes but at slower rates, with median length-at-age being overestimated by traditional growth curves for the observed age range. In addition, measurement-error models produced slightly narrower credible intervals for parameters of the von Bertalanffy growth function, which may be an artifact of the specified prior distributions. Subjectivity is always apparent in the ageing of fishes and it is recommended that measurement-error growth models be used in conjunction with otolith-estimated ages to accurately capture the age-length relationship that is subsequently used in fisheries stock assessment and management. PMID:27688963
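
    For reference, the traditional von Bertalanffy fit that the measurement-error models are compared against can be sketched as follows (SciPy assumed available; parameter values illustrative); the Bayesian EIV version would instead treat each true age as a latent parameter with the uniform prior described above.

        import numpy as np
        from scipy.optimize import curve_fit

        def vbgf(age, l_inf, k, t0):
            """von Bertalanffy growth function: length at age."""
            return l_inf * (1.0 - np.exp(-k * (age - t0)))

        # The traditional fit treats otolith- or scale-read ages as exact,
        # even though the reads carry multiplicative (log-normal) ageing error.
        rng = np.random.default_rng(8)
        true_age = rng.uniform(1, 12, 300)
        length = vbgf(true_age, 90.0, 0.25, -0.5) + rng.normal(0, 3.0, 300)
        read_age = np.round(true_age * np.exp(rng.normal(0, 0.15, 300)))
        popt, _ = curve_fit(vbgf, read_age, length, p0=(80.0, 0.2, 0.0))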

  20. Error-disturbance uncertainty relations in neutron spin measurements

    NASA Astrophysics Data System (ADS)

    Sponar, Stephan

    2016-05-01

    Heisenberg’s uncertainty principle in a formulation of uncertainties, intrinsic to any quantum system, is rigorously proven and demonstrated in various quantum systems. Nevertheless, Heisenberg’s original formulation of the uncertainty principle was given in terms of a reciprocal relation between the error of a position measurement and the thereby induced disturbance on a subsequent momentum measurement. However, a naive generalization of a Heisenberg-type error-disturbance relation for arbitrary observables is not valid. An alternative universally valid relation was derived by Ozawa in 2003. Though universally valid, Ozawa’s relation is not optimal. Recently, Branciard has derived a tight error-disturbance uncertainty relation (EDUR), describing the optimal trade-off between error and disturbance under certain conditions. Here, we report a neutron-optical experiment that records the error of a spin-component measurement, as well as the disturbance caused on another spin-component to test EDURs. We demonstrate that Heisenberg’s original EDUR is violated, and Ozawa’s and Branciard’s EDURs are valid in a wide range of experimental parameters, as well as the tightness of Branciard’s relation.

  1. Error compensation research on the focal plane attitude measurement instrument

    NASA Astrophysics Data System (ADS)

    Zhou, Hongfei; Zhang, Feifan; Zhai, Chao; Zhou, Zengxiang; Liu, Zhigang; Wang, Jianping

    2016-07-01

    The surface accuracy of an astronomical telescope's focal plate is a key indicator for precision stellar observation. Building on the six-DOF parallel focal plane attitude measurement instrument that had already been designed, the spatial attitude error compensation of the instrument was studied in order to measure the deformation and surface shape of the focal plane accurately in different spatial attitudes.

  2. Cumulative Measurement Errors for Dynamic Testing of Space Flight Hardware

    NASA Technical Reports Server (NTRS)

    Winnitoy, Susan

    2012-01-01

    measurements during hardware motion and contact. While performing dynamic testing of an active docking system, researchers found that the data from the motion platform, test hardware and two external measurement systems exhibited frame offsets and rotational errors. While the errors were relatively small when considering the motion scale overall, they substantially exceeded the individual accuracies for each component. After evaluating both the static and dynamic measurements, researchers found that the static measurements introduced significantly more error into the system than the dynamic measurements even though, in theory, the static measurement errors should be smaller than the dynamic. In several cases, the magnitude of the errors varied widely for the static measurements. Upon further investigation, researchers found the larger errors to be a consequence of hardware alignment issues, frame location and measurement technique whereas the smaller errors were dependent on the number of measurement points. This paper details and quantifies the individual and cumulative errors of the docking system and describes methods for reducing the overall measurement error. The overall quality of the dynamic docking tests for flight hardware verification was improved by implementing these error reductions.

  3. Performance-Based Measurement: Action for Organizations and HPT Accountability

    ERIC Educational Resources Information Center

    Larbi-Apau, Josephine A.; Moseley, James L.

    2010-01-01

    Basic measurements and applications of six selected general but critical operational performance-based indicators--effectiveness, efficiency, productivity, profitability, return on investment, and benefit-cost ratio--are presented. With each measurement, goals and potential impact are explored. Errors, risks, limitations to measurements, and a…

  4. The effect of measurement error on surveillance metrics

    SciTech Connect

    Weaver, Brian Phillip; Hamada, Michael S.

    2012-04-24

    The purpose of this manuscript is to describe different simulation studies that CCS-6 has performed to understand the effects of measurement error on the surveillance metrics. We assume that the measured items come from a larger population of items. We denote the random variable associated with an item's value of an attribute of interest as X, with X ~ N(μ, σ²). This distribution represents the variability in the population of interest, and we wish to make inference on the parameters μ and σ or on some function of these parameters. When an item X is selected from the larger population, a measurement is made on some attribute of it. This measurement is made with error, and the true value of X is not observed. The rest of this section presents simulation results for the different measurement cases encountered.

  5. Effects of holding time and measurement error on culturing Legionella in environmental water samples.

    PubMed

    Flanders, W Dana; Kirkland, Kimberly H; Shelton, Brian G

    2014-10-01

    Outbreaks of Legionnaires' disease require environmental testing of water samples from potentially implicated building water systems to identify the source of exposure. A previous study reports a large impact on Legionella sample results due to shipping and delays in sample processing. Specifically, this same study, without accounting for measurement error, reports more than half of shipped samples tested had Legionella levels that arbitrarily changed up or down by one or more logs, and the authors attribute this result to shipping time. Accordingly, we conducted a study to determine the effects of sample holding/shipping time on Legionella sample results while taking into account measurement error, which has previously not been addressed. We analyzed 159 samples, each split into 16 aliquots, of which one-half (8) were processed promptly after collection. The remaining half (8) were processed the following day to assess impact of holding/shipping time. A total of 2544 samples were analyzed including replicates. After accounting for inherent measurement error, we found that the effect of holding time on observed Legionella counts was small and should have no practical impact on interpretation of results. Holding samples increased the root mean squared error by only about 3-8%. Notably, for only one of 159 samples did the average of the 8 replicate counts change by 1 log. Thus, our findings do not support the hypothesis of frequent, significant (≥1 log10 unit) Legionella colony count changes due to holding.

  6. Phase error analysis and compensation considering ambient light for phase measuring profilometry

    NASA Astrophysics Data System (ADS)

    Zhou, Ping; Liu, Xinran; He, Yi; Zhu, Tongjing

    2014-04-01

    The accuracy of a phase measuring profilometry (PMP) system based on the phase-shifting method is inevitably susceptible to gamma non-linearity of the projector-camera pair and to uncertain ambient light. Although much research on gamma models and phase error compensation methods has been carried out, the effect of ambient light has remained unclear. In this paper, we perform theoretical analysis and experiments on phase error compensation taking into account both gamma non-linearity and uncertain ambient light. First, a mathematical phase error model is proposed to explain in detail how the phase error arises. We show that the phase error is related not only to the gamma non-linearity of the projector-camera pair, but also to the ratio of intensity modulation to average intensity in the fringe patterns captured by the camera, which is affected by the ambient light. Subsequently, an accurate phase error compensation algorithm is proposed based on the mathematical model, in which the relationship between phase error and ambient light is made explicit. Experimental results with a four-step phase-shifting PMP system show that the proposed algorithm can alleviate the phase error effectively even in the presence of ambient light.
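
    A minimal numerical sketch of the mechanism (the model form, gamma value, and the placement of ambient light inside the camera nonlinearity are assumptions for illustration, not the authors' exact model): four-step phase-shifted fringes are distorted by a gamma nonlinearity, and adding ambient light changes the modulation-to-average ratio and hence the size of the phase error ripple:

      import numpy as np

      phi = np.linspace(-np.pi, np.pi, 1000, endpoint=False)  # true phase values
      A, B, gamma = 0.5, 0.3, 2.2       # average, modulation, gamma (assumed)
      shifts = np.arange(4) * np.pi / 2

      for ambient in (0.0, 0.2):        # ambient light level (assumed)
          # nonlinearity applied to the total incident light (an assumption)
          I = [(A + ambient + B * np.cos(phi + s)) ** gamma for s in shifts]
          phi_hat = np.arctan2(I[3] - I[1], I[0] - I[2])
          err = np.angle(np.exp(1j * (phi_hat - phi)))   # wrap to [-pi, pi]
          print("ambient %.1f: max |phase error| = %.4f rad"
                % (ambient, np.abs(err).max()))

    Raising the ambient level lowers the ratio B/(A + ambient) seen by the nonlinearity, which is exactly the dependence described in the abstract.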

  7. 50 CFR 660.509 - Accountability measures (season closures).

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 13 2013-10-01 2013-10-01 false Accountability measures (season closures... Coastal Pelagics Fisheries § 660.509 Accountability measures (season closures). (a) General rule. When the... until the beginning of the next fishing period or season. Regional Administrator shall announce in...

  8. 50 CFR 660.509 - Accountability measures (season closures).

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 50 Wildlife and Fisheries 13 2012-10-01 2012-10-01 false Accountability measures (season closures... Coastal Pelagics Fisheries § 660.509 Accountability measures (season closures). (a) General rule. When the... until the beginning of the next fishing period or season. Regional Administrator shall announce in...

  9. Correcting for Measurement Error in Time-Varying Covariates in Marginal Structural Models.

    PubMed

    Kyle, Ryan P; Moodie, Erica E M; Klein, Marina B; Abrahamowicz, Michał

    2016-08-01

    Unbiased estimation of causal parameters from marginal structural models (MSMs) requires a fundamental assumption of no unmeasured confounding. Unfortunately, the time-varying covariates used to obtain inverse probability weights are often error-prone. Although substantial measurement error in important confounders is known to undermine control of confounders in conventional unweighted regression models, this issue has received comparatively limited attention in the MSM literature. Here we propose a novel application of the simulation-extrapolation (SIMEX) procedure to address measurement error in time-varying covariates, and we compare 2 approaches. The direct approach to SIMEX-based correction targets outcome model parameters, while the indirect approach corrects the weights estimated using the exposure model. We assess the performance of the proposed methods in simulations under different clinically plausible assumptions. The simulations demonstrate that measurement errors in time-dependent covariates may induce substantial bias in MSM estimators of causal effects of time-varying exposures, and that both proposed SIMEX approaches yield practically unbiased estimates in scenarios featuring low-to-moderate degrees of error. We illustrate the proposed approach in a simple analysis of the relationship between sustained virological response and liver fibrosis progression among persons infected with hepatitis C virus, while accounting for measurement error in γ-glutamyltransferase, using data collected in the Canadian Co-infection Cohort Study from 2003 to 2014.
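
    The SIMEX idea itself is easy to demonstrate. The sketch below (a linear-regression slope with classical measurement error, not the authors' MSM implementation; all values are assumed) re-estimates the parameter at inflated noise levels λ and extrapolates the trend back to λ = -1, where no measurement error remains; quadratic extrapolation removes most, though not all, of the attenuation:

      import numpy as np

      rng = np.random.default_rng(3)
      n, beta, sd_u = 2000, 1.0, 0.8
      x = rng.normal(0, 1, n)                  # true covariate
      w = x + rng.normal(0, sd_u, n)           # error-prone measurement
      y = beta * x + rng.normal(0, 1, n)

      def slope(wv):
          return np.cov(wv, y, bias=True)[0, 1] / wv.var()

      lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
      B = 200                                  # simulated data sets per lambda
      est = [slope(w)]
      for lam in lambdas[1:]:
          sims = [slope(w + np.sqrt(lam) * sd_u * rng.standard_normal(n))
                  for _ in range(B)]
          est.append(np.mean(sims))

      coef = np.polyfit(lambdas, est, 2)       # quadratic trend in lambda
      print("naive slope %.3f" % est[0])
      print("SIMEX slope %.3f (true %.3f)" % (np.polyval(coef, -1.0), beta))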

  10. Nonparametric Item Response Curve Estimation with Correction for Measurement Error

    ERIC Educational Resources Information Center

    Guo, Hongwen; Sinharay, Sandip

    2011-01-01

    Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally.…

  11. GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS

    EPA Science Inventory

    Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...

  12. Three Approximations of Standard Error of Measurement: An Empirical Approach.

    ERIC Educational Resources Information Center

    Garvin, Alfred D.

    Three successively simpler formulas for approximating the standard error of measurement were derived by applying successively more simplifying assumptions to the standard formula based on the standard deviation and the Kuder-Richardson formula 20 estimate of reliability. The accuracy of each of these three formulas, with respect to the standard…

  13. Putting reward in art: A tentative prediction error account of visual art

    PubMed Central

    Van de Cruys, Sander; Wagemans, Johan

    2011-01-01

    The predictive coding model is increasingly and fruitfully used to explain a wide range of findings in perception. Here we discuss the potential of this model in explaining the mechanisms underlying aesthetic experiences. Traditionally art appreciation has been associated with concepts such as harmony, perceptual fluency, and the so-called good Gestalt. We observe that more often than not great artworks blatantly violate these characteristics. Using the concept of prediction error from the predictive coding approach, we attempt to resolve this contradiction. We argue that artists often destroy predictions that they have first carefully built up in their viewers, and thus highlight the importance of negative affect in aesthetic experience. However, the viewer often succeeds in recovering the predictable pattern, sometimes on a different level. The ensuing rewarding effect is derived from this transition from a state of uncertainty to a state of increased predictability. We illustrate our account with several example paintings and with a discussion of art movements and individual differences in preference. On a more fundamental level, our theorizing leads us to consider the affective implications of prediction confirmation and violation. We compare our proposal to other influential theories on aesthetics and explore its advantages and limitations. PMID:23145260

  14. Comparing measurement errors for formants in synthetic and natural vowels

    PubMed Central

    Shadle, Christine H.; Nam, Hosung; Whalen, D. H.

    2016-01-01

    The measurement of formant frequencies of vowels is among the most common measurements in speech studies, but measurements are known to be biased by the particular fundamental frequency (F0) exciting the formants. Approaches to reducing the errors were assessed in two experiments. In the first, synthetic vowels were constructed with five different first formant (F1) values and nine different F0 values; formant bandwidths, and higher formant frequencies, were constant. Input formant values were compared to manual measurements and automatic measures using the linear prediction coding-Burg algorithm, linear prediction closed-phase covariance, the weighted linear prediction-attenuated main excitation (WLP-AME) algorithm [Alku, Pohjalainen, Vainio, Laukkanen, and Story (2013). J. Acoust. Soc. Am. 134(2), 1295–1313], spectra smoothed cepstrally and by averaging repeated discrete Fourier transforms. Formants were also measured manually from pruned reassigned spectrograms (RSs) [Fulop (2011). Speech Spectrum Analysis (Springer, Berlin)]. All but WLP-AME and RS had large errors in the direction of the strongest harmonic; the smallest errors occur with WLP-AME and RS. In the second experiment, these methods were used on vowels in isolated words spoken by four speakers. Results for the natural speech show that F0 bias affects all automatic methods, including WLP-AME; only the formants measured manually from RS appeared to be accurate. In addition, RS coped better with weaker formants and glottal fry. PMID:26936555

  15. Simultaneous inference and bias analysis for longitudinal data with covariate measurement error and missing responses.

    PubMed

    Yi, G Y; Liu, W; Wu, Lang

    2011-03-01

    Longitudinal data arise frequently in medical studies and it is common practice to analyze such data with generalized linear mixed models. Such models enable us to account for various types of heterogeneity, including between-subject and within-subject variation. Inferential procedures become dramatically more complicated when missing observations or measurement error arise. In the literature, there has been considerable interest in accommodating either incompleteness or covariate measurement error under random effects models. However, there is relatively little work concerning both features simultaneously. There is a need to fill this gap, as longitudinal data often have both characteristics. In this article, our objectives are to study the simultaneous impact of missingness and covariate measurement error on inferential procedures and to develop a method that is both computationally feasible and theoretically valid. Simulation studies are conducted to assess the performance of the proposed method, and a real example is analyzed with the proposed method.

  16. Error Correction for Foot Clearance in Real-Time Measurement

    NASA Astrophysics Data System (ADS)

    Wahab, Y.; Bakar, N. A.; Mazalan, M.

    2014-04-01

    Mobility performance level, fall-related injuries, undiagnosed disease and aging stage can be detected through examination of the gait pattern. The gait pattern is directly related to lower-limb performance, among other significant factors, which makes the foot the most important body segment for an in-situ gait measurement system. This paper reviews the development of an ultrasonic system with error correction using an inertial measurement unit for real-life measurement of foot clearance in gait analysis. The paper begins with the related literature, where the necessity of the measurement is introduced, followed by the methodology, the problem and its solution. Next, the paper explains the experimental setup for the error correction using the proposed instrumentation, and presents results and discussion. Finally, the paper outlines the planned future work.

  17. Semiparametric maximum likelihood for nonlinear regression with measurement errors.

    PubMed

    Suh, Eun-Young; Schafer, Daniel W

    2002-06-01

    This article demonstrates semiparametric maximum likelihood estimation of a nonlinear growth model for fish lengths using imprecisely measured ages. Data on the species corvina reina, found in the Gulf of Nicoya, Costa Rica, consist of lengths and imprecise ages for 168 fish and precise ages for a subset of 16 fish. The statistical problem may therefore be classified as nonlinear errors-in-variables regression with internal validation data. Inferential techniques are based on ideas extracted from several previous works on semiparametric maximum likelihood for errors-in-variables problems. The example clarifies practical aspects of the associated computational, inferential, and data analytic techniques.

  18. The Role of Measurement Error in Familiar Statistics

    DTIC Science & Technology

    2006-06-01

    Organizational Research Methods, Volume 9, Number 1, January 2006, 99-112. The Role of Measurement Error. Sage Publications, 10.1177... Cited references include: Educational and Psychological Measurement, 62, 254-263; Carretta, T. R. (1997), group differences on U.S. Air Force pilot selection; an analysis of the statistical and ethical implications of various definitions of "test bias," Psychological Bulletin, 83, 1053-1071; Hunter, J. E.

  19. Simultaneous Treatment of Missing Data and Measurement Error in HIV Research using Multiple Overimputation

    PubMed Central

    Schomaker, Michael; Hogger, Sara; Johnson, Leigh F.; Hoffmann, Christopher J.; Bärnighausen, Till; Heumann, Christian

    2015-01-01

    Background Both CD4 count and viral load in HIV infected persons are measured with error. There is no clear guidance on how to deal with this measurement error in the presence of missing data. Methods We used multiple overimputation, a method recently developed in the political sciences, to account for both measurement error and missing data in CD4 count and viral load measurements from four South African cohorts of a Southern African HIV cohort collaboration. Our knowledge about the measurement error of lnCD4 and log10 viral load is part of an imputation model that imputes both missing and mismeasured data. In an illustrative example we estimate the association of CD4 count and viral load with the hazard of death among patients on highly active antiretroviral therapy by means of a Cox model. Simulation studies evaluate the extent to which multiple overimputation is able to reduce bias in survival analyses. Results Multiple overimputation emphasizes more strongly the influence of having a high baseline CD4 count compared to a complete case analysis and multiple imputation (hazard ratio for >200 cells/mm3 vs. <25 cells/mm3: 0.21 [95%CI: 0.18;0.24] vs. 0.38 [0.29;0.48] and 0.29 [0.25;0.34] respectively). Similar results are obtained when varying assumptions about the measurement error, when using p-splines, and when evaluating time-updated CD4 count in a longitudinal analysis. The estimates of the association with viral load are slightly more attenuated when using multiple imputation instead of multiple overimputation. Our simulation studies suggest that multiple overimputation is able to reduce bias and mean squared error in survival analyses. Conclusions Multiple overimputation, which can be used with existing software, offers a convenient approach to account for both missing and mismeasured data in HIV research. PMID:26214336
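
    A minimal sketch of the overimputation idea, with a linear model standing in for the Cox model and a known error SD (all names and values are illustrative, and the simple posterior-draw scheme below is an assumption, not the authors' algorithm): each mismeasured or missing covariate value is repeatedly drawn from its posterior given the observation and the outcome, the analysis model is fit to each completed data set, and the results are combined with Rubin's rules:

      import numpy as np

      rng = np.random.default_rng(4)
      n, beta_true, sd_me = 4000, 0.5, 0.5
      x = rng.normal(0, 1, n)                  # true covariate (e.g., scaled lnCD4)
      w = x + rng.normal(0, sd_me, n)          # mismeasured version
      w[rng.random(n) < 0.2] = np.nan          # 20% missing
      y = beta_true * x + rng.normal(0, 1, n)

      obs, su2 = ~np.isnan(w), sd_me ** 2

      def draws(b):
          # posterior draw of x given (w, y), with x ~ N(0,1) prior and
          # residual SD assumed known (both simplifying assumptions)
          prec = 1.0 + b ** 2 + np.where(obs, 1.0 / su2, 0.0)
          mean = (np.where(obs, w / su2, 0.0) + b * y) / prec
          return rng.normal(mean, 1.0 / np.sqrt(prec))

      naive = np.cov(w[obs], y[obs], bias=True)[0, 1] / np.var(w[obs])
      b_hat = naive
      for _ in range(10):                      # crude EM-style refinement of the slope
          xi = draws(b_hat)
          b_hat = np.cov(xi, y, bias=True)[0, 1] / np.var(xi)

      M, est, vs = 20, [], []
      for _ in range(M):                       # overimputation + Rubin's rules
          xi = draws(b_hat)
          b = np.cov(xi, y, bias=True)[0, 1] / np.var(xi)
          resid = y - y.mean() - b * (xi - xi.mean())
          est.append(b)
          vs.append(resid.var(ddof=2) / (n * np.var(xi)))

      qbar, W, B = np.mean(est), np.mean(vs), np.var(est, ddof=1)
      print("naive slope       %.3f" % naive)
      print("overimputed slope %.3f, SE %.3f (true %.3f)"
            % (qbar, np.sqrt(W + (1 + 1 / M) * B), beta_true))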

  20. Position determination and measurement error analysis for the spherical proof mass with optical shadow sensing

    NASA Astrophysics Data System (ADS)

    Hou, Zhendong; Wang, Zhaokui; Zhang, Yulin

    2016-09-01

    To meet the very demanding requirements for space gravity detection, the gravitational reference sensor (GRS) as the key payload needs to offer the relative position of the proof mass with extraordinarily high precision and low disturbance. The position determination and error analysis for the GRS with a spherical proof mass is addressed. Firstly the concept of measuring the freely falling proof mass with optical shadow sensors is presented. Then, based on the optical signal model, the general formula for position determination is derived. Two types of measurement system are proposed, for which the analytical solution to the three-dimensional position can be attained. Thirdly, with the assumption of Gaussian beams, the error propagation models for the variation of spot size and optical power, the effect of beam divergence, the chattering of beam center, and the deviation of beam direction are given respectively. Finally, the numerical simulations taken into account of the model uncertainty of beam divergence, spherical edge and beam diffraction are carried out to validate the performance of the error propagation models. The results show that these models can be used to estimate the effect of error source with an acceptable accuracy which is better than 20%. Moreover, the simulation for the three-dimensional position determination with one of the proposed measurement system shows that the position error is just comparable to the error of the output of each sensor.

  1. Error analysis for NMR polymer microstructure measurement without calibration standards.

    PubMed

    Qiu, XiaoHua; Zhou, Zhe; Gobbi, Gian; Redwine, Oscar D

    2009-10-15

    We report an error analysis method for primary analytical methods in the absence of calibration standards. Quantitative (13)C NMR analysis of ethylene/1-octene (E/O) copolymers is given as an example. Because the method is based on a self-calibration scheme established by counting, it is a measure of accuracy rather than precision. We demonstrate that it is self-consistent and neither underestimates nor excessively overestimates the experimental errors. We also show that the method identified previously unknown systematic biases in an NMR instrument. The method can eliminate unnecessary data averaging to save valuable NMR resources. The accuracy estimate proposed is not unique to (13)C NMR spectroscopy of E/O but should be applicable to all other measurement systems where the accuracy of a subset of the measured responses can be established.

  2. Confounding and exposure measurement error in air pollution epidemiology.

    PubMed

    Sheppard, Lianne; Burnett, Richard T; Szpiro, Adam A; Kim, Sun-Young; Jerrett, Michael; Pope, C Arden; Brunekreef, Bert

    2012-06-01

    Studies in air pollution epidemiology may suffer from some specific forms of confounding and exposure measurement error. This contribution discusses these, mostly in the framework of cohort studies. Evaluation of potential confounding is critical in studies of the health effects of air pollution. The association between long-term exposure to ambient air pollution and mortality has been investigated using cohort studies in which subjects are followed over time with respect to their vital status. In such studies, control for individual-level confounders such as smoking is important, as is control for area-level confounders such as neighborhood socio-economic status. In addition, there may be spatial dependencies in the survival data that need to be addressed. These issues are illustrated using the American Cancer Society Cancer Prevention II cohort. Exposure measurement error is a challenge in epidemiology because inference about health effects can be incorrect when the measured or predicted exposure used in the analysis is different from the underlying true exposure. Air pollution epidemiology rarely if ever uses personal measurements of exposure for reasons of cost and feasibility. Exposure measurement error in air pollution epidemiology comes in various dominant forms, which are different for time-series and cohort studies. The challenges are reviewed and a number of suggested solutions are discussed for both study domains.

  3. Error and uncertainty in Raman thermal conductivity measurements

    DOE PAGES

    Thomas Edwin Beechem; Yates, Luke; Graham, Samuel

    2015-04-22

    We investigated error and uncertainty in Raman thermal conductivity measurements via finite element based numerical simulation of two geometries often employed -- Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter -- termed the Raman stress factor -- is derived to identify when stress effects will induce large levels of error. Together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.

  4. Error and uncertainty in Raman thermal conductivity measurements

    SciTech Connect

    Thomas Edwin Beechem; Yates, Luke; Graham, Samuel

    2015-04-22

    We investigated error and uncertainty in Raman thermal conductivity measurements via finite element based numerical simulation of two geometries often employed -- Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter -- termed the Raman stress factor -- is derived to identify when stress effects will induce large levels of error. Together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.

  5. Error reduction in retrievals of atmospheric species from symmetrically measured lidar sounding absorption spectra.

    PubMed

    Chen, Jeffrey R; Numata, Kenji; Wu, Stewart T

    2014-10-20

    We report new methods for retrieving atmospheric constituents from symmetrically-measured lidar-sounding absorption spectra. The forward model accounts for laser line-center frequency noise and broadened line-shape, and is essentially linearized by linking estimated optical-depths to the mixing ratios. Errors from the spectral distortion and laser frequency drift are substantially reduced by averaging optical-depths at each pair of symmetric wavelength channels. Retrieval errors from measurement noise and model bias are analyzed parametrically and numerically for multiple atmospheric layers, to provide deeper insight. Errors from surface height and reflectance variations are reduced to tolerable levels by "averaging before log" with pulse-by-pulse ranging knowledge incorporated.
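
    The "averaging before log" point can be illustrated with a toy calculation (signal levels and noise are assumed, and the geometry is reduced to a single on/off channel pair): because the log is nonlinear, averaging noisy per-pulse log-ratios biases the retrieved optical depth, while averaging the pulse energies first does not:

      import numpy as np

      rng = np.random.default_rng(5)
      tau = 0.6                                # true differential optical depth
      n = 200000
      on_true, off_true = np.exp(-2 * tau), 1.0   # two-way attenuated on-line return
      sd = 0.05                                # additive detection noise SD (assumed)
      on = on_true + sd * rng.standard_normal(n)
      off = off_true + sd * rng.standard_normal(n)

      tau_log_first = -0.5 * np.mean(np.log(on / off))       # log each pulse, then average
      tau_avg_first = -0.5 * np.log(on.mean() / off.mean())  # average pulses, then log
      print("true tau          %.4f" % tau)
      print("log-then-average  %.4f" % tau_log_first)
      print("average-then-log  %.4f" % tau_avg_first)

    The weaker on-line channel has lower signal-to-noise, so its per-pulse log is biased low and the log-first optical depth comes out systematically high; pulse-to-pulse surface reflectance variation compounds this in practice.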

  6. New time-domain three-point error separation methods for measurement roundness and spindle error motion

    NASA Astrophysics Data System (ADS)

    Liu, Wenwen; Tao, Tingting; Zeng, Hao

    2016-10-01

    Error separation is a key technology for on-line measurement of spindle radial error motion or artifact form error, such as roundness and cylindricity. Three time-domain three-point error separation methods are proposed, based on the minimum norm solution of a system of linear equations. Three laser displacement sensors collect a set of discrete measurements, from which a group of linear measurement equations is derived according to the criterion of prior separation of form (PSF), prior separation of spindle error motion (PSM), or synchronous separation of both form and spindle error motion (SSFM). The work discusses the correlations between the angles of the three sensors in the measuring system, the rank of the coefficient matrix of the measurement equations, and harmonic distortions in the separation results; reveals the regularities of the first-order harmonic distortion; and recommends the applicable situation for each method. Theoretical analysis and extensive simulations show that SSFM is the most precise method because of its lower distortion.
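
    As a concrete illustration of the linear-equations view (the sensor angles, harmonics, and profile are invented for the demo, and this is a generic minimum-norm solution, not the authors' PSF/PSM/SSFM formulations): three sensors at angles 0, a1, a2 each see the rotating form plus the projected spindle translation, the stacked equations m = A [r; x; y] are solved with numpy's minimum-norm least squares, and the form is recovered up to the structurally unobservable eccentricity (first harmonic):

      import numpy as np

      N, p1, p2 = 180, 17, 43                    # samples/rev, sensor offsets (assumed)
      t = np.arange(N)
      theta = 2 * np.pi * t / N
      a = 2 * np.pi * np.array([0, p1, p2]) / N  # sensor angles

      # synthetic roundness profile and spindle error motion (invented)
      r = 0.8 * np.cos(3 * theta) + 0.3 * np.sin(7 * theta + 0.4)
      x = 0.5 * np.sin(theta) + 0.2 * np.cos(5 * theta)
      y = 0.4 * np.cos(theta + 0.3)

      def meas(i, off):
          # sensor i sees the form rotated under it plus projected spindle motion
          return np.roll(r, -off) + x * np.cos(a[i]) + y * np.sin(a[i])

      m = np.concatenate([meas(0, 0), meas(1, p1), meas(2, p2)])

      S = lambda k: np.eye(N)[(t + k) % N]       # shift operator: (S(k) @ r)[i] = r[i+k]
      I = np.eye(N)
      A = np.block([[S(0),  np.cos(a[0]) * I, np.sin(a[0]) * I],
                    [S(p1), np.cos(a[1]) * I, np.sin(a[1]) * I],
                    [S(p2), np.cos(a[2]) * I, np.sin(a[2]) * I]])

      sol, *_ = np.linalg.lstsq(A, m, rcond=None)  # minimum-norm solution
      diff = np.fft.rfft(sol[:N] - r)
      diff[:2] = 0           # drop DC and the structurally unobservable 1st harmonic
      print("roundness RMS error (excl. eccentricity): %.1e"
            % np.sqrt(np.mean(np.fft.irfft(diff, N) ** 2)))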

  7. Effects of cosine error in irradiance measurements from field ocean color radiometers.

    PubMed

    Zibordi, Giuseppe; Bulgarelli, Barbara

    2007-08-01

    The cosine error of in situ seven-channel radiometers designed to measure the in-air downward irradiance for ocean color applications was investigated in the 412-683 nm spectral range with a sample of three instruments. The interchannel variability of cosine errors showed values generally lower than +/-3% below 50 degrees incidence angle with extreme values of approximately 4-20% (absolute) at 50-80 degrees for the channels at 412 and 443 nm. The intrachannel variability, estimated from the standard deviation of the cosine errors of different sensors for each center wavelength, displayed values generally lower than 2% for incidence angles up to 50 degrees and occasionally increasing up to 6% at 80 degrees. Simulations of total downward irradiance measurements, accounting for average angular responses of the investigated radiometers, were made with an accurate radiative transfer code. The estimated errors showed a significant dependence on wavelength, sun zenith, and aerosol optical thickness. For a clear sky maritime atmosphere, these errors displayed values spectrally varying and generally within +/-3%, with extreme values of approximately 4-10% (absolute) at 40-80 degrees sun zenith for the channels at 412 and 443 nm. Schemes for minimizing the cosine errors have also been proposed and discussed.

  8. Error in total ozone measurements arising from aerosol attenuation

    NASA Technical Reports Server (NTRS)

    Thomas, R. W. L.; Basher, R. E.

    1979-01-01

    A generalized least squares method for deducing both total ozone and aerosol extinction spectrum parameters from Dobson spectrophotometer measurements was developed. An error analysis applied to this system indicates that there is little advantage to additional measurements once a sufficient number of line pairs have been employed to solve for the selected detail in the attenuation model. It is shown that when there is a predominance of small particles (less than about 0.35 microns in diameter) the total ozone from the standard AD system is too high by about one percent. When larger particles are present the derived total ozone may be an overestimate or an underestimate but serious errors occur only for narrow polydispersions.

  9. Efficient measurement error correction with spatially misaligned data

    PubMed Central

    Szpiro, Adam A.; Sheppard, Lianne; Lumley, Thomas

    2011-01-01

    Association studies in environmental statistics often involve exposure and outcome data that are misaligned in space. A common strategy is to employ a spatial model such as universal kriging to predict exposures at locations with outcome data and then estimate a regression parameter of interest using the predicted exposures. This results in measurement error because the predicted exposures do not correspond exactly to the true values. We characterize the measurement error by decomposing it into Berkson-like and classical-like components. One correction approach is the parametric bootstrap, which is effective but computationally intensive since it requires solving a nonlinear optimization problem for the exposure model parameters in each bootstrap sample. We propose a less computationally intensive alternative termed the “parameter bootstrap” that only requires solving one nonlinear optimization problem, and we also compare bootstrap methods to other recently proposed methods. We illustrate our methodology in simulations and with publicly available data from the Environmental Protection Agency. PMID:21252080

  10. Validity and systematic error in measuring carotenoid consumption with dietary self-report instruments.

    PubMed

    Natarajan, Loki; Flatt, Shirley W; Sun, Xiaoying; Gamst, Anthony C; Major, Jacqueline M; Rock, Cheryl L; Al-Delaimy, Wael; Thomson, Cynthia A; Newman, Vicky A; Pierce, John P

    2006-04-15

    Vegetables and fruits are rich in carotenoids, a group of compounds thought to protect against cancer. Studies of diet-disease associations need valid and reliable instruments for measuring dietary intake. The authors present a measurement error model to estimate the validity (defined as correlation between self-reported intake and "true" intake), systematic error, and reliability of two self-report dietary assessment methods. Carotenoid exposure is measured by repeated 24-hour recalls, a food frequency questionnaire (FFQ), and a plasma marker. The model is applied to 1,013 participants assigned between 1995 and 2000 to the nonintervention arm of the Women's Healthy Eating and Living Study, a randomized trial assessing the impact of a low-fat, high-vegetable/fruit/fiber diet on preventing new breast cancer events. Diagnostics including graphs are used to assess the goodness of fit. The validity of the instruments was 0.44 for the 24-hour recalls and 0.39 for the FFQ. Systematic error accounted for over 22% and 50% of measurement error variance for the 24-hour recalls and FFQ, respectively. The use of either self-report method alone in diet-disease studies could lead to substantial bias and error. Multiple methods of dietary assessment may provide more accurate estimates of true dietary intake.
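
    The triangulation behind such validity estimates can be sketched with the method of triads (a standard technique for three error-prone measures with independent errors; the coefficients below are invented to mimic the reported validities, and the study's systematic-error structure is deliberately omitted):

      import numpy as np

      rng = np.random.default_rng(7)
      n = 1013
      T = rng.normal(0, 1, n)                   # "true" intake (standardized)
      recall = 0.50 * T + rng.normal(0, 1, n)   # 24-hour recalls
      ffq = 0.45 * T + rng.normal(0, 1, n)      # food frequency questionnaire
      marker = 0.60 * T + rng.normal(0, 1, n)   # plasma biomarker

      r = np.corrcoef([recall, ffq, marker])
      rho_recall = np.sqrt(r[0, 1] * r[0, 2] / r[1, 2])   # validity of the recalls
      rho_ffq = np.sqrt(r[0, 1] * r[1, 2] / r[0, 2])      # validity of the FFQ
      print("estimated validity: recall %.2f, FFQ %.2f" % (rho_recall, rho_ffq))
      print("true validity:      recall %.2f, FFQ %.2f"
            % (0.50 / np.hypot(0.50, 1), 0.45 / np.hypot(0.45, 1)))

    When the two self-reports share person-specific systematic error, as the study found, the independence assumption breaks down and triad-style estimates become optimistic.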

  11. Comparing Different Accounts of Inversion Errors in Children's Non-Subject Wh-Questions: "What Experimental Data Can Tell Us?"

    ERIC Educational Resources Information Center

    Ambridge, Ben; Rowland, Caroline F.; Theakston, Anna L.; Tomasello, Michael

    2006-01-01

    This study investigated different accounts of children's acquisition of non-subject wh-questions. Questions using each of 4 wh-words ("what," "who," "how" and "why"), and 3 auxiliaries (BE, DO and CAN) in 3sg and 3pl form were elicited from 28 children aged 3;6-4;6. Rates of non-inversion error ("Who…

  12. Detecting correlated errors in state-preparation-and-measurement tomography

    NASA Astrophysics Data System (ADS)

    Jackson, Christopher; van Enk, S. J.

    2015-10-01

    Whereas in standard quantum-state tomography one estimates an unknown state by performing various measurements with known devices, and whereas in detector tomography one estimates the positive-operator-valued-measurement elements of a measurement device by subjecting to it various known states, we consider here the case of SPAM (state preparation and measurement) tomography where neither the states nor the measurement device are assumed known. For d-dimensional systems measured by d-outcome detectors, we find there are at most d²(d² − 1) "gauge" parameters that can never be determined by any such experiment, irrespective of the number of unknown states and unknown devices. For the case d = 2 we find gauge-invariant quantities that can be accessed directly experimentally and that can be used to detect and describe SPAM errors. In particular, we identify conditions whose violations detect the presence of correlations between SPAM errors. From the perspective of SPAM tomography, standard quantum-state tomography and detector tomography are protocols that fix the gauge parameters through the assumption that some set of fiducial measurements is known or that some set of fiducial states is known, respectively.

  13. Taking the Error Term of the Factor Model into Account: The Factor Score Predictor Interval

    ERIC Educational Resources Information Center

    Beauducel, Andre

    2013-01-01

    The problem of factor score indeterminacy implies that the factor and the error scores cannot be completely disentangled in the factor model. It is therefore proposed to compute Harman's factor score predictor that contains an additive combination of factor and error variance. This additive combination is discussed in the framework of classical…

  14. PROCESSING AND ANALYSIS OF THE MEASURED ALIGNMENT ERRORS FOR RHIC.

    SciTech Connect

    PILAT,F.; HEMMER,M.; PTITSIN,V.; TEPIKIAN,S.; TRBOJEVIC,D.

    1999-03-29

    All elements of the Relativistic Heavy Ion Collider (RHIC) have been installed in ideal survey locations, which are defined as the optimum locations of the fiducials with respect to the positions generated by the design. The alignment process included the presurvey of all elements which could affect the beams. During this procedure special attention was paid to the precise determination of the quadrupole centers as well as the roll angles of the quadrupoles and dipoles. After installation the machine was surveyed, and the resulting as-built measured positions of the fiducials were stored and structured in the survey database. We describe how the alignment errors, inferred by comparison of ideal and as-built data, have been processed and analyzed by including them in the RHIC modeling software. The RHIC model, which also includes individual measured errors for all magnets in the machine and is automatically generated from databases, allows the study of the impact of the measured alignment errors on the machine.

  15. Effects of vibration measurement error on remote sensing image restoration

    NASA Astrophysics Data System (ADS)

    Sun, Xuan; Wei, Zhang; Zhi, Xiyang

    2016-10-01

    Satellite vibrations lead to image motion blur. Since vibration isolators cannot fully suppress the influence of vibrations, image restoration methods are usually adopted, and the vibration characteristics of the imaging system are usually required as algorithm inputs for better restoration results, making the final outcome strongly dependent on the vibration measurement error. If the measurement error surpasses a certain range, the restoration may fail. It is therefore important to test the applicable scope of restoration algorithms and keep the vibrations within that range; on the other hand, if the algorithm is robust, the requirements for both the vibration isolator and the vibration detector can be relaxed, reducing financial cost. In this paper, vibration-induced degradation is first analyzed, and on that basis the effects of measurement error on image restoration are examined. The vibration-induced degradation is simulated using high-resolution satellite images, and the applicable working conditions of typical restoration algorithms are then tested in simulation experiments. The research carried out in this paper provides a valuable reference for future satellite designs that plan to implement restoration algorithms.

  16. Accounting for systematic errors in bioluminescence imaging to improve quantitative accuracy

    NASA Astrophysics Data System (ADS)

    Taylor, Shelley L.; Perry, Tracey A.; Styles, Iain B.; Cobbold, Mark; Dehghani, Hamid

    2015-07-01

    Bioluminescence imaging (BLI) is a widely used pre-clinical imaging technique, but there are a number of limitations to its quantitative accuracy. This work uses an animal model to demonstrate some significant limitations of BLI and presents processing methods and algorithms which overcome these limitations, increasing the quantitative accuracy of the technique. The position of the imaging subject and source depth are both shown to affect the measured luminescence intensity. Free Space Modelling is used to eliminate the systematic error due to the camera/subject geometry, removing the dependence of luminescence intensity on animal position. Bioluminescence tomography (BLT) is then used to provide additional information about the depth and intensity of the source. A substantial limitation in the number of sources identified using BLI is also presented. It is shown that when a given source is at a significant depth, it can appear as multiple sources when imaged using BLI, while the use of BLT recovers the true number of sources present.

  17. A Discriminant Function Approach to Adjust for Processing and Measurement Error When a Biomarker is Assayed in Pooled Samples.

    PubMed

    Lyles, Robert H; Van Domelen, Dane; Mitchell, Emily M; Schisterman, Enrique F

    2015-11-18

    Pooling biological specimens prior to performing expensive laboratory assays has been shown to be a cost effective approach for estimating parameters of interest. In addition to requiring specialized statistical techniques, however, the pooling of samples can introduce assay errors due to processing, possibly in addition to measurement error that may be present when the assay is applied to individual samples. Failure to account for these sources of error can result in biased parameter estimates and ultimately faulty inference. Prior research addressing biomarker mean and variance estimation advocates hybrid designs consisting of individual as well as pooled samples to account for measurement and processing (or pooling) error. We consider adapting this approach to the problem of estimating a covariate-adjusted odds ratio (OR) relating a binary outcome to a continuous exposure or biomarker level assessed in pools. In particular, we explore the applicability of a discriminant function-based analysis that assumes normal residual, processing, and measurement errors. A potential advantage of this method is that maximum likelihood estimation of the desired adjusted log OR is straightforward and computationally convenient. Moreover, in the absence of measurement and processing error, the method yields an efficient unbiased estimator for the parameter of interest assuming normal residual errors. We illustrate the approach using real data from an ancillary study of the Collaborative Perinatal Project, and we use simulations to demonstrate the ability of the proposed estimators to alleviate bias due to measurement and processing error.

  18. Accounting for spatial correlations of the observation errors with Ensemble Kalman filters

    NASA Astrophysics Data System (ADS)

    Cosme, Emmanuel; Jean-Michel, Brankart; Clément, Ubelmann; Jacques, Verron; Pierre, Brasseur

    2013-04-01

    The standard Kalman filter observational update requires the inversion of the innovation error covariance matrix, which is often impractical. Most implementations of the Ensemble Kalman filter circumvent this difficulty by assuming a diagonal observation error covariance matrix, which makes the analysis calculation numerically tractable. However, when observation errors are actually spatially correlated, this assumption leads to an inappropriate use of the observations. Experiments show that the analysis state error variances yielded by the Ensemble Kalman filter can be severely underestimated. In this presentation, we describe a parameterization of the observation error covariance matrix which preserves its diagonal shape but represents a simple first-order autoregressive correlation structure of the observation errors. This parameterization is based upon an augmentation of the observation vector with gradients of observations. Numerical applications to ocean altimetry show the detrimental effects of specifying a diagonal matrix when observation errors are correlated, and how the new parameterization not only removes the detrimental effects of correlations, but also makes use of these correlations to improve the data assimilation products.
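
    The algebra that makes this work can be shown in a few lines (a generic AR(1) illustration, not the presenters' exact augmentation): for a first-order autoregressive error covariance, replacing each observation by its residual against the previous one, which is what augmenting with local gradients effectively achieves, yields a transformed error covariance that is exactly diagonal:

      import numpy as np

      p, a, s2 = 8, 0.7, 1.0                     # obs count, AR(1) correlation, variance
      i, j = np.indices((p, p))
      R = s2 * a ** np.abs(i - j)                # correlated observation-error covariance

      T = np.eye(p)
      T[np.arange(1, p), np.arange(p - 1)] = -a  # y_k -> y_k - a * y_{k-1}

      D = T @ R @ T.T
      print("diagonal:", np.round(np.diag(D), 6))        # s2, then s2*(1 - a^2), ...
      print("max |off-diagonal|: %.1e" % np.max(np.abs(D - np.diag(np.diag(D)))))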

  19. Motion measurement errors and autofocus in bistatic SAR.

    PubMed

    Rigling, Brian D; Moses, Randolph L

    2006-04-01

    This paper discusses the effect of motion measurement errors (MMEs) on measured bistatic synthetic aperture radar (SAR) phase history data that has been motion compensated to the scene origin. We characterize the effect of low-frequency MMEs on bistatic SAR images, and, based on this characterization, we derive limits on the allowable MMEs to be used as system specifications. Finally, we demonstrate that proper orientation of a bistatic SAR image during the image formation process allows application of monostatic SAR autofocus algorithms in postprocessing to mitigate image defocus.

  20. Error reduction techniques for measuring long synchrotron mirrors

    SciTech Connect

    Irick, S.

    1998-07-01

    Many instruments and techniques are used for measuring long mirror surfaces. A Fizeau interferometer may be used to measure mirrors much longer than the interferometer aperture size by using grazing incidence at the mirror surface and analyzing the light reflected from a flat end mirror. Advantages of this technique are data acquisition speed and use of a common instrument. Disadvantages are reduced sampling interval, uncertainty of tangential position, and sagittal/tangential aspect ratio other than unity. Also, deep aspheric surfaces cannot be measured on a Fizeau interferometer without a specially made fringe nulling holographic plate. Other scanning instruments have been developed for measuring height, slope, or curvature profiles of the surface, but lack accuracy for very long scans required for X-ray synchrotron mirrors. The Long Trace Profiler (LTP) was developed specifically for long x-ray mirror measurement, and still outperforms other instruments, especially for aspheres. Thus, this paper focuses on error reduction techniques for the LTP.

  1. More systematic errors in the measurement of power spectral density

    NASA Astrophysics Data System (ADS)

    Mack, Chris A.

    2015-07-01

    Power spectral density (PSD) analysis is an important part of understanding line-edge and linewidth roughness in lithography. But uncertainty in the measured PSD, both random and systematic, complicates interpretation. It is essential to understand and quantify the sources of the measured PSD's uncertainty and to develop mitigation strategies. Both analytical derivations and simulations of rough features are used to evaluate data window functions for reducing spectral leakage and to understand the impact of data detrending on biases in PSD, autocovariance function (ACF), and height-to-height covariance function measurement. A generalized Welch window was found to be best among the windows tested. Linear detrending for line-edge roughness measurement results in underestimation of the low-frequency PSD and errors in the ACF and height-to-height covariance function. Measuring multiple edges per scanning electron microscope image reduces this detrending bias.
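
    A short numerical sketch of both effects (synthetic edge with an invented correlation length and drift; PSD normalization conventions vary): a residual linear drift leaks into the low-frequency bins under a rectangular window, a Welch-type window suppresses much of the leakage, and linear detrending pushes the lowest bins down further, the low-frequency underestimation the abstract warns about:

      import numpy as np

      rng = np.random.default_rng(8)
      N, dx = 512, 1.0
      kernel = np.exp(-np.arange(N) / 20.0)
      kernel /= np.linalg.norm(kernel)
      rough = np.convolve(rng.standard_normal(N), kernel, mode="same")
      edge = rough + 0.01 * np.arange(N)        # correlated roughness + linear drift

      def psd(z, window):
          w = window(len(z))
          U = np.mean(w ** 2)                   # window power normalization
          Z = np.fft.rfft((z - z.mean()) * w)
          return np.abs(Z) ** 2 * dx / (len(z) * U)

      welch = lambda n: 1 - ((np.arange(n) - (n - 1) / 2) / ((n - 1) / 2)) ** 2
      detr = edge - np.polyval(np.polyfit(np.arange(N), edge, 1), np.arange(N))

      for name, p in [("rectangular", psd(edge, np.ones)),
                      ("Welch window", psd(edge, welch)),
                      ("Welch + detrend", psd(detr, welch))]:
          print("%-16s low-f bins: %s" % (name, ["%.3g" % v for v in p[1:4]]))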

  2. Quantifying soil CO2 respiration measurement error across instruments

    NASA Astrophysics Data System (ADS)

    Creelman, C. A.; Nickerson, N. R.; Risk, D. A.

    2010-12-01

    A variety of instrumental methodologies have been developed in an attempt to accurately measure the rate of soil CO2 respiration. Among the most commonly used are the static and dynamic chamber systems. The degree to which these methods misread or perturb the soil CO2 signal, however, is poorly understood. One source of error in particular is the introduction of lateral diffusion due to the disturbance of the steady-state CO2 concentrations. The addition of soil collars to the chamber system attempts to address this perturbation, but may induce additional errors from the increased physical disturbance. Using a numerical 3D soil-atmosphere diffusion model, we are undertaking a comprehensive comparative study of existing static and dynamic chambers, as well as a solid-state CTFD probe. Specifically, we are examining the 3D diffusion errors associated with each method and opportunities for correction. In this study, the impact of collar length, chamber geometry, chamber mixing and diffusion parameters on the magnitude of lateral diffusion around the instrument are quantified in order to provide insight into obtaining more accurate soil respiration estimates. Results suggest that while each method can approximate the true flux rate under idealized conditions, the associated errors can be of a high magnitude and may vary substantially in their sensitivity to these parameters. In some cases, factors such as the collar length and chamber exchange rate used are coupled in their effect on accuracy. Due to the widespread use of these instruments, it is critical that the nature of their biases and inaccuracies be understood in order to inform future development, ensure the accuracy of current measurements and to facilitate inter-comparison between existing datasets.

  3. Propagation of radiosonde pressure sensor errors to ozonesonde measurements

    NASA Astrophysics Data System (ADS)

    Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.

    2014-01-01

    Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.6 hPa in the free troposphere, with nearly a third > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~ 5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~ 30 km), can approach greater than ±10% (> 25% of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when the O3 profile is
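
    The scale of these numbers follows directly from the definition of the mixing ratio as partial pressure over total pressure: a fixed pressure offset matters little at tropospheric pressures but becomes a several-percent error near 20 hPa. A quick check (the ozone partial pressure below is illustrative):

      import numpy as np

      p_o3 = 2.0e-3                # ozonesonde partial pressure, hPa (illustrative)
      for P_true, dP in [(700.0, 0.6), (20.0, 1.0)]:   # troposphere vs ~26 km
          o3_true = p_o3 / P_true
          o3_meas = p_o3 / (P_true + dP)
          print("P=%6.1f hPa, offset %.1f hPa -> O3MR error %+5.1f%%"
                % (P_true, dP, 100 * (o3_meas - o3_true) / o3_true))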

  4. Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements

    NASA Technical Reports Server (NTRS)

    Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.

    2014-01-01

    Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde-ozonesonde launches from the Southern Hemisphere subtropics to Northern mid-latitudes are considered, with launches between 2005 - 2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI-Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.6 hPa in the free troposphere, with nearly a third > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~5 percent of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96 percent of launches lie within ±5 percent O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~30 km) can approach greater than ±10 percent (> 25 percent of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when

  5. Data Reconciliation and Gross Error Detection: A Filtered Measurement Test

    SciTech Connect

    Himour, Y.

    2008-06-12

    Measured process data commonly contain inaccuracies because the measurements are obtained using imperfect instruments. As well as random errors, one can expect systematic bias caused by miscalibrated instruments, or outliers caused by process peaks such as sudden power fluctuations. Data reconciliation is the adjustment of a set of process data based on a model of the process so that the derived estimates conform to natural laws. In this paper, we explore a predictor-corrector filter based on data reconciliation, and then combine a modified version of the measurement test with the studied filter to detect probable outliers that can affect process measurements. The strategy presented is tested using dynamic simulation of an inverted pendulum.
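
    The core reconciliation step is a constrained least-squares projection, and the measurement test standardizes the resulting adjustments. A minimal sketch for a steady-state flow balance (the network, variances, and injected bias are invented; this is the textbook linear case, not the paper's predictor-corrector filter):

      import numpy as np

      # flow network: stream 1 splits into 2 and 3; stream 3 splits into 4 and 5
      A = np.array([[1, -1, -1,  0,  0],
                    [0,  0,  1, -1, -1]], dtype=float)   # A @ x = 0 at steady state
      true = np.array([100.0, 60.0, 40.0, 25.0, 15.0])
      sd = np.array([2.0, 1.5, 1.5, 1.0, 1.0])
      rng = np.random.default_rng(9)
      y = true + sd * rng.standard_normal(5)
      y[3] += 6.0                                        # inject a gross error (bias)

      V = np.diag(sd ** 2)
      W = A @ V @ A.T
      d = V @ A.T @ np.linalg.solve(W, A @ y)            # reconciliation adjustment
      x_hat = y - d                                      # reconciled, A @ x_hat = 0

      # measurement test: standardized adjustments ~ N(0,1) if no gross error
      Cd = V @ A.T @ np.linalg.solve(W, A @ V)
      z = d / np.sqrt(np.diag(Cd))
      print("reconciled:", np.round(x_hat, 2))
      print("balance residuals:", np.round(A @ x_hat, 10))
      print("measurement-test stats:", np.round(z, 2), "(|z| > 1.96 flags a suspect)")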

  6. Moving Beyond "Good/Bad" Student Accountability Measures: Multiple Perspectives of Accountability.

    ERIC Educational Resources Information Center

    Capper, Colleen A.; Hafner, Madeline M.; Keyes, Maureen W.

    2001-01-01

    Examines three student accountability measures (standardized tests, performance-based assessment, and structural assessment) through two different theoretical perspectives: structural functionalism and feminist poststructuralism. Educators can use various kinds of assessments in ways that maintain the status quo or support equity and justice for…

  7. A Measurement Control Program for Nuclear Material Accounting

    SciTech Connect

    Brouns, R. J.; Roberts, F. P.; Merrill, J. A.; Brown, W. B.

    1980-06-01

    A measurement control program for nuclear material accounting monitors and controls the quality of the measurements of special nuclear material that are involved in material balances. The quality is monitored by collecting data from which the current precision and accuracy of measurements can be evaluated. The quality is controlled by evaluations, reviews, and other administrative measures for control of the selection or design of facilities, equipment, and measurement methods, and the training and qualification of personnel who perform SNM measurements. This report describes the most important elements of a program by which management can monitor and control measurement quality.

  8. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I-Model Development.

    PubMed

    Calvo, Roque; D'Amato, Roberto; Gómez, Emilio; Domingo, Rosario

    2016-09-29

    The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of the accuracy and uncertainty of measurement results difficult. Therefore, error compensation is not standardized, unlike for other, simpler instruments. Detailed coordinate error compensation models generally treat the CMM as a rigid body, which requires a detailed mapping of the CMM's behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of the length errors by axis and integrates it into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included.
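
    A minimal sketch of the vectorial composition idea (the parameter values are illustrative, not from the paper): per-axis length errors combine through the direction of the measured displacement, and a first-order formula reproduces the exact composition closely:

      import numpy as np

      e = np.array([8e-6, 5e-6, 12e-6])     # axis scale errors (m/m), assumed
      d = np.array([300.0, 150.0, 80.0])    # displacement components, mm

      L = np.linalg.norm(d)
      L_meas = np.linalg.norm(d * (1 + e))  # each axis reading scaled by its error
      print("nominal length %.4f mm, length error %.2f um" % (L, 1e3 * (L_meas - L)))
      # first-order check: dL ~ sum(e_i * d_i^2) / L
      print("first-order error %.2f um" % (1e3 * np.dot(e, d ** 2) / L))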

  9. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    PubMed Central

    Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario

    2016-01-01

    The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of the accuracy and uncertainty of measurement results difficult. Therefore, error compensation is not standardized, unlike for other, simpler instruments. Detailed coordinate error compensation models generally treat the CMM as a rigid body, which requires a detailed mapping of the CMM’s behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of the length errors by axis and integrates it into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included. PMID:27690052

  10. Accounting for People: Can Business Measure Human Value?

    ERIC Educational Resources Information Center

    Workforce Economics, 1997

    1997-01-01

    Traditional business practice undervalues human capital, and most conventional accounting models reflect this inclination. The argument for more explicit measurements of human resources is simple: Improved measurement of human resources will lead to more rational and productive choices about managing human resources. The business community is…

  11. Personal Accountability in Education: Measure Development and Validation

    ERIC Educational Resources Information Center

    Rosenblatt, Zehava

    2017-01-01

    Purpose: The purpose of this paper, three-study research project, is to establish and validate a two-dimensional scale to measure teachers' and school administrators' accountability disposition. Design/methodology/approach: The scale items were developed in focus groups, and the final measure was tested on various samples of Israeli teachers and…

  12. 50 CFR 648.24 - Fishery closures and accountability measures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Management Measures for the Atlantic Mackerel, Squid, and Butterfish Fisheries § 648.24 Fishery closures and accountability measures. (a) Fishery closure procedures—(1) Longfin squid. NMFS shall close the directed fishery in the EEZ for longfin squid when the Regional Administrator projects that 90 percent of the...

  13. 50 CFR 648.24 - Fishery closures and accountability measures.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Management Measures for the Atlantic Mackerel, Squid, and Butterfish Fisheries § 648.24 Fishery closures and accountability measures. (a) Fishery closure procedures—(1) Longfin squid. NMFS shall close the directed fishery in the EEZ for longfin squid when the Regional Administrator projects that 90 percent of the...

  14. Validation and Error Characterization for the Global Precipitation Measurement

    NASA Technical Reports Server (NTRS)

    Bidwell, Steven W.; Adams, W. J.; Everett, D. F.; Smith, E. A.; Yuter, S. E.

    2003-01-01

    The Global Precipitation Measurement (GPM) is an international effort to increase scientific knowledge on the global water cycle with specific goals of improving the understanding and the predictions of climate, weather, and hydrology. These goals will be achieved through several satellites specifically dedicated to GPM along with the integration of numerous meteorological satellite data streams from international and domestic partners. The GPM effort is led by the National Aeronautics and Space Administration (NASA) of the United States and the National Space Development Agency (NASDA) of Japan. In addition to the spaceborne assets, international and domestic partners will provide ground-based resources for validating the satellite observations and retrievals. This paper describes the validation effort of Global Precipitation Measurement to provide quantitative estimates on the errors of the GPM satellite retrievals. The GPM validation approach will build upon the research experience of the Tropical Rainfall Measuring Mission (TRMM) retrieval comparisons and its validation program. The GPM ground validation program will employ instrumentation, physical infrastructure, and research capabilities at Supersites located in important meteorological regimes of the globe. NASA will provide two Supersites, one in a tropical oceanic and the other in a mid-latitude continental regime. GPM international partners will provide Supersites for other important regimes. Those objectives or regimes not addressed by Supersites will be covered through focused field experiments. This paper describes the specific errors that GPM ground validation will address, quantify, and relate to the GPM satellite physical retrievals. GPM will attempt to identify the source of errors within retrievals including those of instrument calibration, retrieval physical assumptions, and algorithm applicability. With the identification of error sources, improvements will be made to the respective calibration

  15. Lidar Uncertainty Measurement Experiment (LUMEX) - Understanding Sampling Errors

    NASA Astrophysics Data System (ADS)

    Choukulkar, A.; Brewer, W. A.; Banta, R. M.; Hardesty, M.; Pichugina, Y.; Senff, Christoph; Sandberg, S.; Weickmann, A.; Carroll, B.; Delgado, R.; Muschinski, A.

    2016-06-01

    Coherent Doppler LIDAR (Light Detection and Ranging) has been widely used to provide measurements of several boundary layer parameters such as profiles of wind speed, wind direction, vertical velocity statistics, mixing layer heights and turbulent kinetic energy (TKE). An important aspect of providing this wide range of meteorological data is to properly characterize the uncertainty associated with these measurements. With the above intent in mind, the Lidar Uncertainty Measurement Experiment (LUMEX) was conducted at Erie, Colorado during the period June 23rd to July 13th, 2014. The major goals of this experiment were the following: (1) characterize sampling error for vertical velocity statistics; (2) analyze sensitivities of different Doppler lidar systems; (3) compare various single and dual Doppler retrieval techniques; (4) characterize the error of spatial representativeness for separation distances up to 3 km; and (5) validate turbulence analysis techniques and retrievals from Doppler lidars. This experiment brought together 5 Doppler lidars, both commercial and research grade, for a period of three weeks for a comprehensive intercomparison study. The Doppler lidars were deployed at the Boulder Atmospheric Observatory (BAO) site in Erie, site of a 300 m meteorological tower. This tower was instrumented with six sonic anemometers at levels from 50 m to 300 m with 50 m vertical spacing. A brief overview of the experiment outline and deployment will be presented. Results from the sampling error analysis and its implications on scanning strategy will be discussed.

  16. Propagation of radiosonde pressure sensor errors to ozonesonde measurements

    NASA Astrophysics Data System (ADS)

    Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.

    2013-08-01

    Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this, a total of 501 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2006 and 2013 from both historical and campaign-based intensive stations. Three types of electrochemical concentration cell (ECC) ozonesondes from two manufacturers (Science Pump Corporation, SPC; ENSCI/Droplet Measurement Technologies, DMT) and five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S; Vaisala: RS80 and RS92) are analyzed to determine the magnitude of the pressure offset and the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.7 hPa in the free troposphere, with nearly a quarter > ±1.0 hPa at 26 km, where a 1.0 hPa error represents ~5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (98% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors in the 7-15 hPa layer (29-32 km), a region critical for detection of long-term O3 trends, can exceed ±10% (>25% of launches that reach 30 km exceed this threshold). Comparisons of total column O3 yield average differences of +1.6 DU (-1.1 to +4.9 DU, 10th to 90th percentiles) when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of +0.1 DU (-1.1 to +2.2 DU) when the O3 profile is integrated to 10 hPa with subsequent addition of the O3 climatology above 10 hPa. The RS92 radiosondes are clearly distinguishable
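
    The leverage of a pressure-sensor offset on the mixing ratio follows directly from O3MR being the ratio of O3 partial pressure to ambient pressure: to first order, the fractional O3MR error equals the fractional pressure error. A back-of-the-envelope check consistent with the numbers quoted above (the 20 hPa value for 26 km is an approximation):

```python
def o3mr_rel_error(pressure_hpa: float, offset_hpa: float) -> float:
    """First-order fractional mixing-ratio error from a pressure offset.

    O3MR = pO3 / p, so |dO3MR / O3MR| ~= |dp / p| to first order.
    """
    return offset_hpa / pressure_hpa

# A 1.0 hPa offset is ~5% of ambient pressure near 26 km (~20 hPa) ...
print(o3mr_rel_error(20.0, 1.0))   # 0.05 -> ~5%
# ... and ~10% near 10 hPa, in the 7-15 hPa layer critical for O3 trends.
print(o3mr_rel_error(10.0, 1.0))   # 0.10 -> ~10%
```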

  17. Simulation of error in optical radar range measurements.

    PubMed

    Der, S; Redman, B; Chellappa, R

    1997-09-20

    We describe a computer simulation of atmospheric and target effects on the accuracy of range measurements using pulsed laser radars with p-i-n or avalanche photodiodes for direct detection. The computer simulation produces simulated images as a function of a wide variety of atmospheric, target, and sensor parameters for laser radars with range accuracies smaller than the pulse width. The simulation allows arbitrary target geometries and simulates speckle, turbulence, and near-field and far-field effects. We compare simulation results with actual range error data collected in field tests.

  18. Examples of Detecting Measurement Errors with the QCRad VAP

    SciTech Connect

    Shi, Yan; Long, Charles N.

    2005-07-30

    The QCRad VAP is being developed to assess the data quality for the ARM radiation data collected at the Extended and ARCS facilities. In this study, we processed one year of radiation data, chosen at random, for each of the twenty SGP Extended Facilities to aid in determining the user configurable limits for the SGP sites. By examining yearly summary plots of the radiation data and the various test limits, we can show that the QCRad VAP is effective in identifying and detecting many different types of measurement errors. Examples of the analysis results will be shown in this poster presentation.

  19. Examiner error in curriculum-based measurement of oral reading.

    PubMed

    Cummings, Kelli D; Biancarosa, Gina; Schaper, Andrew; Reed, Deborah K

    2014-08-01

    Although curriculum-based measures of oral reading (CBM-R) have strong technical adequacy, there is still reason to believe that student performance may be influenced by factors of the testing situation, such as errors examiners make in administering and scoring the test. This study examined the construct-irrelevant variance introduced by examiners using a cross-classified multilevel model. We sought to determine the extent of variance in student CBM-R scores attributable to examiners and, if present, the extent to which it was moderated by students' grade level and English learner (EL) status. Fit indices indicated that a cross-classified random effects model (CCREM) best fits the data with measures nested within students, students nested within schools, and examiners crossing schools. Intraclass correlations of the CCREM revealed that roughly 16% of the variance in student CBM-R scores was associated with examiners. The remaining variance was associated with the measurement level, 3.59%; between students, 75.23%; and between schools, 5.21%. Results were moderated by grade level but not by EL status. The discussion addresses the implications of this error for low-stakes and high-stakes decisions about students, teacher evaluation systems, and hypothesis testing in reading intervention research.
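
    The variance partition reported in this abstract is simple arithmetic on the fitted variance components: each component's intraclass correlation is its share of the total variance. A minimal sketch using the percentages quoted above:

```python
# Variance shares reported for the cross-classified model (percent).
components = {
    "examiners": 16.0,      # rounded in the abstract ("roughly 16%")
    "measurement": 3.59,
    "students": 75.23,
    "schools": 5.21,
}

total = sum(components.values())
for name, share in components.items():
    # ICC for a component = its variance / total variance.
    print(f"{name:12s} ICC = {share / total:.3f}")
```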

  20. A Bayesian Measurment Error Model for Misaligned Radiographic Data

    SciTech Connect

    Lennox, Kristin P.; Glascoe, Lee G.

    2013-09-06

    An understanding of the inherent variability in micro-computed tomography (micro-CT) data is essential to tasks such as statistical process control and the validation of radiographic simulation tools. The data present unique challenges to variability analysis due to the relatively low resolution of radiographs, and also due to minor variations from run to run which can result in misalignment or magnification changes between repeated measurements of a sample. Positioning changes artificially inflate the variability of the data in ways that mask true physical phenomena. We present a novel Bayesian nonparametric regression model that incorporates both additive and multiplicative measurement error in addition to heteroscedasticity to address this problem. We also use this model to assess the effects of sample thickness and sample position on measurement variability for an aluminum specimen. Supplementary materials for this article are available online.

  1. Measurements of Aperture Averaging on Bit-Error-Rate

    NASA Technical Reports Server (NTRS)

    Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert

    2005-01-01

    We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.

  2. A Bayesian Measurment Error Model for Misaligned Radiographic Data

    DOE PAGES

    Lennox, Kristin P.; Glascoe, Lee G.

    2013-09-06

    An understanding of the inherent variability in micro-computed tomography (micro-CT) data is essential to tasks such as statistical process control and the validation of radiographic simulation tools. The data present unique challenges to variability analysis due to the relatively low resolution of radiographs, and also due to minor variations from run to run which can result in misalignment or magnification changes between repeated measurements of a sample. Positioning changes artificially inflate the variability of the data in ways that mask true physical phenomena. We present a novel Bayesian nonparametric regression model that incorporates both additive and multiplicative measurement error in addition to heteroscedasticity to address this problem. We also use this model to assess the effects of sample thickness and sample position on measurement variability for an aluminum specimen. Supplementary materials for this article are available online.

  3. Plain film measurement error in acute displaced midshaft clavicle fractures

    PubMed Central

    Archer, Lori Anne; Hunt, Stephen; Squire, Daniel; Moores, Carl; Stone, Craig; O’Dea, Frank; Furey, Andrew

    2016-01-01

    Background Clavicle fractures are common and optimal treatment remains controversial. Recent literature suggests operative fixation of acute displaced mid-shaft clavicle fractures (DMCFs) shortened more than 2 cm improves outcomes. We aimed to identify correlation between plain film and computed tomography (CT) measurement of displacement and the inter- and intraobserver reliability of repeated radiographic measurements. Methods We obtained radiographs and CT scans of patients with acute DMCFs. Three orthopedic staff and 3 residents measured radiographic displacement at time zero and 2 weeks later. The CT measurements identified absolute shortening in 3 dimensions (by subtracting the length of the fractured from the intact clavicle). We then compared shortening measured on radiographs and shortening measured in 3 dimensions on CT. Interobserver and intraobserver reliability were calculated. Results We reviewed the fractures of 22 patients. Bland–Altman repeatability coefficient calculations indicated that radiograph and CT measurements of shortening could not be correlated owing to an unacceptable amount of measurement error (6 cm). Interobserver reliability for plain radiograph measurements was excellent (Cronbach α = 0.90). Likewise, intraobserver reliabilities for plain radiograph measurements as calculated with paired t tests indicated excellent correlation (p > 0.05 in all but 1 observer [p = 0.04]). Conclusion To establish shortening as an indication for DMCF fixation, reliable measurement tools are required. The low correlation between plain film and CT measurements we observed suggests further research is necessary to establish what imaging modality reliably predicts shortening. Our results indicate weak correlation between radiograph and CT measurement of acute DMCF shortening. PMID:27438054
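
    For readers unfamiliar with the Bland–Altman statistics used here: by one common convention, the bias is the mean of the paired differences and the agreement limits span 1.96 standard deviations of those differences on either side, so two methods whose differences spread widely get a large repeatability coefficient. A minimal sketch with hypothetical paired shortening measurements (not the study's data):

```python
import numpy as np

# Hypothetical paired shortening measurements (cm): radiograph vs. CT.
xray = np.array([1.8, 2.4, 0.9, 2.1, 1.5, 2.8, 1.2, 2.0])
ct   = np.array([1.1, 2.9, 1.6, 1.4, 2.3, 2.1, 0.5, 2.6])

diff = xray - ct
bias = diff.mean()                # systematic offset between the methods
sd = diff.std(ddof=1)             # SD of the paired differences
coeff = 1.96 * sd                 # half-width of the limits of agreement
print(f"bias = {bias:.2f} cm, limits of agreement = "
      f"{bias - coeff:.2f} to {bias + coeff:.2f} cm")
```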

  4. Measurement of hyoid and laryngeal displacement in video fluoroscopic swallowing studies: variability, reliability, and measurement error.

    PubMed

    Sia, Isaac; Carvajal, Pamela; Carnaby-Mann, Giselle D; Crary, Michael A

    2012-06-01

    Video fluoroscopy is commonly used in the study of swallowing kinematics. However, various procedures used in linear measurements obtained from video fluoroscopy may contribute to increased variability or measurement error. This study evaluated the influence of calibration referent and image rotation on measurement variability for hyoid and laryngeal displacement during swallowing. Inter- and intrarater reliabilities were also estimated for hyoid and laryngeal displacement measurements across conditions. The use of different calibration referents did not contribute significantly to variability in measures of hyoid and laryngeal displacement but image rotation affected horizontal measures for both structures. Inter- and intrarater reliabilities were high. Using the 95% confidence interval as the error index, measurement error was estimated to range from 2.48 to 3.06 mm. These results address procedural decisions for measuring hyoid and laryngeal displacement in video fluoroscopic swallowing studies.

  5. Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements

    NASA Technical Reports Server (NTRS)

    Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.

    2014-01-01

    This presentation discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 2x4 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from CubeSats, unmanned air vehicles (UAV), and commercial aircraft.

  6. Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements

    NASA Technical Reports Server (NTRS)

    Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.

    2014-01-01

    This paper discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 4x2 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from cubesats, unmanned air vehicles (UAV), and commercial aircraft.
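
    A minimal sketch of how an RMS error vector magnitude is computed from received constellation points, for the QPSK case named above; the synthetic symbols and noise level below are hypothetical stand-ins, not the measured link data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ideal QPSK reference symbols and a noisy "received" copy.
bits = rng.integers(0, 4, size=10_000)
ref = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))   # unit-power QPSK
rx = ref + (rng.normal(0, 0.05, ref.shape)
            + 1j * rng.normal(0, 0.05, ref.shape))

# RMS EVM: error-vector power relative to reference power, in percent.
evm_rms = np.sqrt(np.mean(np.abs(rx - ref) ** 2) / np.mean(np.abs(ref) ** 2))
print(f"EVM = {100 * evm_rms:.1f}%")
```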

  7. Sampling errors in the measurement of rain and hail parameters

    NASA Technical Reports Server (NTRS)

    Gertzman, H. S.; Atlas, D.

    1977-01-01

    Attention is given to a general derivation of the fractional standard deviation (FSD) of any integrated property X such that X(D) = cD^n. This work extends that of Joss and Waldvogel (1969). The equation is applicable to measuring integrated properties of cloud, rain, or hail populations (such as water content, precipitation rate, kinetic energy, or radar reflectivity), which are subject to statistical sampling errors due to the Poisson-distributed fluctuations of the number of particles sampled in each size interval, with the associated variances contributing in proportion to each interval's share of the integral parameter being measured. Universal curves are presented which are applicable to the exponential size distribution, permitting FSD estimation of any parameter from n = 0 to n = 6. The equations and curves also permit corrections for finite upper limits in the size spectrum and a realistic fall speed law.
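
    Under an exponential size distribution and Poisson sampling, the FSD of X(D) = cD^n reduces to the closed form FSD = sqrt(gamma(2n+1)) / gamma(n+1) / sqrt(N_T), with N_T the expected number of particles sampled. A sketch of that relation, idealized in that the paper's corrections for a finite upper size limit are omitted:

```python
import math

def fsd(n: float, n_total: float) -> float:
    """Fractional standard deviation of X = sum(c * D_i**n) over a Poisson
    sample of n_total particles with exponentially distributed sizes."""
    return (math.sqrt(math.gamma(2 * n + 1))
            / math.gamma(n + 1) / math.sqrt(n_total))

# n = 0 (count), n = 3 (water content), n = 6 (radar reflectivity); N_T = 1000.
for n in (0, 3, 6):
    print(f"n = {n}: FSD = {fsd(n, 1000):.3f}")
```

    Note how steeply the sampling error grows with n: for the same thousand particles, a radar-reflectivity-like moment (n = 6) is roughly thirty times noisier than a simple count.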

  8. Error correction for Moiré based creep measurement system

    NASA Astrophysics Data System (ADS)

    Liao, Yi; Harding, Kevin G.; Nieters, Edward J.; Tait, Robert W.; Hasz, Wayne C.; Piche, Nicole

    2014-05-01

    Due to the high temperatures and stresses present in the high-pressure section of a gas turbine, the airfoils experience creep, or radial stretching. Manufacturers are now putting in place condition-based maintenance programs in which the condition of individual components is assessed to determine their remaining lives. To accurately track this creep effect and predict the impact on part life, the ability to accurately assess creep has become an important engineering challenge. One approach to measuring creep is moiré imaging: using pad-print technology, a grating pattern can be printed directly on a turbine bucket and compared against a reference pattern built into the creep measurement system to create a moiré interference pattern. The authors assembled a creep measurement prototype for this application. By measuring the frequency change of the moiré fringes, it is then possible to determine the local creep distribution. However, since the sensitivity requirement for the creep measurement is very stringent (0.1 micron), the measurement result can easily be offset by optical system aberrations, tilts, and magnification errors. In this paper, a mechanical specimen subjected to a tensile test to induce plastic deformation up to 4% in the gage was used to evaluate the system. The results show some offset compared with the readings from a strain gage and an extensometer. By using a new grating pattern with two subset patterns, it was possible to correct these offset errors.

  9. The SIMEX approach to measurement error correction in meta-analysis with baseline risk as covariate.

    PubMed

    Guolo, A

    2014-05-30

    This paper investigates the use of SIMEX, a simulation-based measurement error correction technique, for meta-analysis of studies involving the baseline risk of subjects in the control group as an explanatory variable. The approach accounts for the measurement error affecting the information about either the outcome in the treatment group or the baseline risk available from each study, while requiring no assumption about the distribution of the true unobserved baseline risk. This robustness property, together with the feasibility of computation, makes SIMEX very attractive. The approach is suggested as an alternative to the usual likelihood analysis, which can provide misleading inferential results when the commonly assumed normal distribution for the baseline risk is violated. The performance of SIMEX is compared to the likelihood method and to the moment-based correction through an extensive simulation study and the analysis of two datasets from the medical literature.
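
    For readers new to SIMEX, the core loop is: deliberately add extra measurement noise at several levels lambda, track how the naive estimate degrades, and extrapolate back to lambda = -1 (no error). A generic sketch for a simple linear model with known error variance — not the paper's meta-analysis setting, and the quadratic extrapolant is one common choice among several:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta, sigma_u = 2000, 1.0, 0.5

x = rng.normal(0, 1, n)               # true covariate
w = x + rng.normal(0, sigma_u, n)     # observed with classical error
y = beta * x + rng.normal(0, 0.2, n)

def naive_slope(w, y):
    """Slope from regressing y on the error-prone covariate."""
    return np.polyfit(w, y, 1)[0]

# Simulation step: add extra error at levels lambda, average over replicates.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
est = [np.mean([naive_slope(w + np.sqrt(lam) * sigma_u * rng.normal(0, 1, n), y)
                for _ in range(200)])
       for lam in lambdas]

# Extrapolation step: fit a quadratic in lambda, evaluate at lambda = -1.
coefs = np.polyfit(lambdas, est, 2)
print("naive:", naive_slope(w, y), " SIMEX:", np.polyval(coefs, -1.0))
```

    The naive slope is attenuated toward zero by the error in w; the extrapolated SIMEX estimate recovers a value close to the true slope.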

  10. Influence of video compression on the measurement error of the television system

    NASA Astrophysics Data System (ADS)

    Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.

    2015-05-01

    Video data require a very large memory capacity, so choosing an encoding method with an optimal quality-to-volume ratio is a pressing problem given the need to transfer large amounts of video over various networks. Digital TV signal compression reduces the amount of data used to represent the video stream, effectively reducing the stream required for transmission and storage. When television measuring systems are used, it is important to take into account the uncertainties caused by compression of the video signal. Many digital compression methods exist. The aim of the proposed work is to study the influence of video compression on the measurement error of television systems. The measurement error of an object parameter is the main characteristic of a television measuring system; accuracy characterizes the difference between the measured value and the actual parameter value. Errors introduced by the optical system are one source of error in television-system measurements, and the method used to process the received video signal is another. In compression with a constant data stream rate, the presence of errors leads to large distortions; in compression with constant quality, errors increase the amount of data required to transmit or record an image frame. The purpose of intra-coding is to reduce the spatial redundancy within a frame (or field) of the television image, redundancy caused by the strong correlation between image elements. If a suitable orthogonal transformation can be found, an array of image samples can be converted into a matrix of coefficients that are uncorrelated with each other; entropy coding can then be applied to these uncorrelated coefficients to reduce the digital stream. For typical images, a transformation can be selected such that most of the matrix coefficients are almost zero. Excluding these zero coefficients also

  11. The effect of systematic errors on the hybridization of optical critical dimension measurements

    NASA Astrophysics Data System (ADS)

    Henn, Mark-Alexander; Barnes, Bryan M.; Zhang, Nien Fan; Zhou, Hui; Silver, Richard M.

    2015-06-01

    In hybrid metrology two or more measurements of the same measurand are combined to provide a more reliable result that ideally incorporates the individual strengths of each of the measurement methods. While these multiple measurements may come from dissimilar metrology methods such as optical critical dimension microscopy (OCD) and scanning electron microscopy (SEM), we investigated the hybridization of similar OCD methods featuring a focus-resolved simulation study of systematic errors performed at orthogonal polarizations. Specifically, errors due to line edge and line width roughness (LER, LWR) and their superposition (LEWR) are known to contribute a systematic bias with inherent correlated errors. In order to investigate the sensitivity of the measurement to LEWR, we follow a modeling approach proposed by Kato et al. who studied the effect of LEWR on extreme ultraviolet (EUV) and deep ultraviolet (DUV) scatterometry. Similar to their findings, we have observed that LEWR leads to a systematic bias in the simulated data. Since the critical dimensions (CDs) are determined by fitting the respective model data to the measurement data by minimizing the difference measure or chi square function, a proper description of the systematic bias is crucial to obtaining reliable results and to successful hybridization. In scatterometry, an analytical expression for the influence of LEWR on the measured orders can be derived, and accounting for this effect leads to a modification of the model function that not only depends on the critical dimensions but also on the magnitude of the roughness. For finite arrayed structures however, such an analytical expression cannot be derived. We demonstrate how to account for the systematic bias and that, if certain conditions are met, a significant improvement of the reliability of hybrid metrology for combining both dissimilar and similar measurement tools can be achieved.

  12. Assessing and accounting for the effects of model error in Bayesian solutions to hydrogeophysical inverse problems

    NASA Astrophysics Data System (ADS)

    Koepke, C.; Irving, J.; Roubinet, D.

    2014-12-01

    Geophysical methods have gained much interest in hydrology over the past two decades because of their ability to provide estimates of the spatial distribution of subsurface properties at a scale that is often relevant to key hydrological processes. Because of an increased desire to quantify uncertainty in hydrological predictions, many hydrogeophysical inverse problems have recently been posed within a Bayesian framework, such that estimates of hydrological properties and their corresponding uncertainties can be obtained. With the Bayesian approach, it is often necessary to make significant approximations to the associated hydrological and geophysical forward models such that stochastic sampling from the posterior distribution, for example using Markov-chain-Monte-Carlo (MCMC) methods, is computationally feasible. These approximations lead to model structural errors, which, so far, have not been properly treated in hydrogeophysical inverse problems. Here, we study the inverse problem of estimating unsaturated hydraulic properties, namely the van Genuchten-Mualem (VGM) parameters, in a layered subsurface from time-lapse, zero-offset-profile (ZOP) ground penetrating radar (GPR) data, collected over the course of an infiltration experiment. In particular, we investigate the effects of assumptions made for computational tractability of the stochastic inversion on model prediction errors as a function of depth and time. These assumptions are that (i) infiltration is purely vertical and can be modeled by the 1D Richards equation, and (ii) the petrophysical relationship between water content and relative dielectric permittivity is known. Results indicate that model errors for this problem are far from Gaussian and independently identically distributed, which has been the common assumption in previous efforts in this domain. In order to develop a more appropriate likelihood formulation, we use (i) a stochastic description of the model error that is obtained through

  13. Measurement and the Professions: Lessons from Accounting, Law, and Medicine.

    ERIC Educational Resources Information Center

    Nowakowski, Jeri; And Others

    1983-01-01

    This detailed analysis of the role of measurement across the three professions of law, medicine, and accounting offers insights into entry-level and performance barriers in occupations that rely on certification, licensing, and regulation to influence performance, ethics, and training. (Author/PN)

  14. 50 CFR 648.24 - Fishery closures and accountability measures.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Fishery closures and accountability measures. 648.24 Section 648.24 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE NORTHEASTERN UNITED...

  15. Measurement Issues in the Design of State Accountability Systems.

    ERIC Educational Resources Information Center

    Stevens, Joseph; Estrada, Susan; Parkes, Jay

    The practices, policies, and procedures used in all 50 states for evaluating school and school district effectiveness were examined, with a focus on the study of methodological and measurement issues in the collection, analysis, and reporting of information for accountability purposes. Data were collected through computerized literature and Web…

  16. Adapting Accountability Systems to the Limitations of Educational Measurement

    ERIC Educational Resources Information Center

    Kane, Michael

    2015-01-01

    Michael Kane writes in this article that he is in more or less complete agreement with Professor Koretz's characterization of the problem outlined in the paper published in this issue of "Measurement." Kane agrees that current testing practices are not adequate for test-based accountability (TBA) systems, but he writes that he is far…

  17. 50 CFR 660.509 - Accountability measures (season closures).

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 50 Wildlife and Fisheries 13 2014-10-01 2014-10-01 false Accountability measures (season closures). 660.509 Section 660.509 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE (CONTINUED) FISHERIES OFF WEST COAST...

  18. Regional distribution of measurement error in diffusion tensor imaging.

    PubMed

    Marenco, Stefano; Rawlings, Robert; Rohde, Gustavo K; Barnett, Alan S; Honea, Robyn A; Pierpaoli, Carlo; Weinberger, Daniel R

    2006-06-30

    The characterization of measurement error is critical in assessing the significance of diffusion tensor imaging (DTI) findings in longitudinal and cohort studies of psychiatric disorders. We studied 20 healthy volunteers, each one scanned twice (average interval between scans of 51 ± 46.8 days) with a single-shot echo-planar DTI technique. Intersession variability for fractional anisotropy (FA) and Trace (D) was represented as absolute variation (standard deviation within subjects: SDw), percent coefficient of variation (CV), and intra-class correlation coefficient (ICC). The values from the two sessions were compared for statistical significance with repeated-measures analysis of variance or a non-parametric equivalent of a paired t-test. The results showed good reproducibility for both FA and Trace (CVs below 10% and ICCs at or above 0.70 in most regions of interest) and evidence of systematic global changes in Trace between scans. The regional distribution of reproducibility described here has implications for the interpretation of regional findings and for rigorous pre-processing. The regional distribution of reproducibility measures was different for SDw, CV and ICC. Each one of these measures reveals complementary information that needs to be taken into consideration when performing statistical operations on groups of DT images.
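
    The three reproducibility statistics reported here are straightforward for a two-scan design. A minimal sketch, assuming one FA value per subject per session (synthetic data, with a one-way random-effects ICC as one common variant):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic FA values: 20 subjects x 2 sessions for one region of interest.
truth = rng.normal(0.45, 0.05, 20)
fa = np.column_stack([truth + rng.normal(0, 0.01, 20) for _ in range(2)])

d = fa[:, 1] - fa[:, 0]
sdw = np.sqrt(np.sum(d ** 2) / (2 * len(d)))   # within-subject SD
cv = 100 * sdw / fa.mean()                     # percent coefficient of variation

# One-way random-effects ICC from between/within mean squares (k = 2).
msb = 2 * fa.mean(axis=1).var(ddof=1)          # between-subject mean square
msw = sdw ** 2                                 # within-subject mean square
icc = (msb - msw) / (msb + msw)
print(f"SDw = {sdw:.4f}, CV = {cv:.1f}%, ICC = {icc:.2f}")
```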

  19. Materials accounting in a fast-breeder-reactor fuels-reprocessing facility: optimal allocation of measurement uncertainties

    SciTech Connect

    Dayem, H.A.; Ostenak, C.A.; Gutmacher, R.G.; Kern, E.A.; Markin, J.T.; Martinez, D.P.; Thomas, C.C. Jr.

    1982-07-01

    This report describes the conceptual design of a materials accounting system for the feed preparation and chemical separations processes of a fast breeder reactor spent-fuel reprocessing facility. For the proposed accounting system, optimization techniques are used to calculate instrument measurement uncertainties that meet four different accounting performance goals while minimizing the total development cost of instrument systems. We identify instruments that require development to meet performance goals and measurement uncertainty components that dominate the materials balance variance. Materials accounting in the feed preparation process is complicated by large in-process inventories and spent-fuel assembly inputs that are difficult to measure. To meet 8 kg of plutonium abrupt and 40 kg of plutonium protracted loss-detection goals, materials accounting in the chemical separations process requires: process tank volume and concentration measurements having a precision less than or equal to 1%; accountability and plutonium sample tank volume measurements having a precision less than or equal to 0.3%, a short-term correlated error less than or equal to 0.04%, and a long-term correlated error less than or equal to 0.04%; and accountability and plutonium sample tank concentration measurements having a precision less than or equal to 0.4%, a short-term correlated error less than or equal to 0.1%, and a long-term correlated error less than or equal to 0.05%. The effects of process design on materials accounting are identified. Major areas of concern include the voloxidizer, the continuous dissolver, and the accountability tank.
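
    The reason the correlated-error specifications above are so much tighter than the precisions is that systematic errors do not average out over repeated transfers. A toy propagation illustrating this, with hypothetical batch values rather than the report's process data:

```python
import math

# Hypothetical accounting period: 100 transfers of 1 kg Pu each.
n_batches, batch_kg = 100, 1.0
precision = 0.004        # 0.4% random error per measurement
correlated = 0.001       # 0.1% correlated (systematic) error

# Random errors add in quadrature across batches...
var_random = n_batches * (precision * batch_kg) ** 2
# ...but a correlated error biases every batch the same way.
var_correlated = (n_batches * correlated * batch_kg) ** 2

sigma = math.sqrt(var_random + var_correlated)
print(f"sigma(materials balance) = {sigma:.3f} kg "
      f"(random {math.sqrt(var_random):.3f}, "
      f"correlated {math.sqrt(var_correlated):.3f})")
```

    Even though the correlated component is four times smaller per measurement, it dominates the materials balance variance once a hundred transfers accumulate.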

  20. Accountability.

    ERIC Educational Resources Information Center

    Lashway, Larry

    1999-01-01

    This issue reviews publications that provide a starting point for principals looking for a way through the accountability maze. Each publication views accountability differently, but collectively these readings argue that even in an era of state-mandated assessment, principals can pursue proactive strategies that serve students' needs. James A.…

  1. Horizon sensor errors calculated by computer models compared with errors measured in orbit

    NASA Technical Reports Server (NTRS)

    Ward, K. A.; Hogan, R.; Andary, J.

    1982-01-01

    Using a computer program to model the earth's horizon and to duplicate the signal processing procedure employed by the ESA (Earth Sensor Assembly), errors due to radiance variation have been computed for a particular time of the year. Errors actually occurring in flight at the same time of year are inferred from integrated rate gyro data for a satellite of the TIROS series of NASA weather satellites (NOAA-A). The predicted performance is compared with actual flight history.

  2. Performance measure of image and video quality assessment algorithms: subjective root-mean-square error

    NASA Astrophysics Data System (ADS)

    Nuutinen, Mikko; Virtanen, Toni; Häkkinen, Jukka

    2016-03-01

    Evaluating algorithms used to assess image and video quality requires performance measures. Traditional performance measures (e.g., Pearson's linear correlation coefficient, Spearman's rank-order correlation coefficient, and root mean square error) compare quality predictions of algorithms to subjective mean opinion scores (mean opinion score/differential mean opinion score). We propose a subjective root-mean-square error (SRMSE) performance measure for evaluating the accuracy of algorithms used to assess image and video quality. The SRMSE performance measure takes into account dispersion between observers. The other important property of the SRMSE performance measure is its measurement scale, which is calibrated to units of the number of average observers. The results of the SRMSE performance measure indicate the extent to which the algorithm can replace the subjective experiment (as the number of observers). Furthermore, we have presented the concept of target values, which define the performance level of the ideal algorithm. We have calculated the target values for all sample sets of the CID2013, CVD2014, and LIVE multiply distorted image quality databases. The target values and MATLAB implementation of the SRMSE performance measure are available on the project page of this study.

  3. Error separation technique for measuring aspheric surface based on dual probes

    NASA Astrophysics Data System (ADS)

    Wei, Zhong-wei; Jing, Hong-wei; Kuang, Long; Wu, Shi-bin

    2013-09-01

    In this paper, we present an error separation method based on dual probes for the swing arm profilometer (SAP) to calibrate the rotary table errors. The two probes and the rotation axis of the swinging arm lie in a plane, and the scanning tracks cross each other as both probes scan the mirror edge to edge. Since the surface heights should ideally be the same at these scanning crossings, the crossing height information can be used to calibrate the rotary table errors. However, the crossing height information also contains the swing arm air bearing errors and the measurement errors of the probes, which seriously affect the accuracy of the rotary table error correction. Because the swing arm air bearing errors and probe measurement errors are randomly distributed, we use the least squares method to remove them. We present the geometry of the dual probe swing arm profilometer system and the profiling pattern made by both probes, and we analyze the influence the probe separation has on the measurement results. The algorithm for stitching the scans together into a surface is also presented. The difference of the surface heights at the crossings of adjacent scans is used to find a transformation that describes the rotary table errors and then to correct for them. To show that the error separation method based on dual probes can successfully calibrate the rotary table errors, we establish a SAP error model and simulate the effect of the method on calibrating the rotary table errors.

  4. Exploring Measurement Error with Cookies: A Real and Virtual Approach via Interactive Excel

    ERIC Educational Resources Information Center

    Sinex, Scott A; Gage, Barbara A.; Beck, Peggy J.

    2007-01-01

    A simple, guided-inquiry investigation using stacked sandwich cookies is employed to develop a simple linear mathematical model and to explore measurement error by incorporating errors as part of the investigation. Both random and systematic errors are presented. The model and errors are then investigated further by engaging with an interactive…

  5. Stochastic thermodynamics based on incomplete information: generalized Jarzynski equality with measurement errors with or without feedback

    NASA Astrophysics Data System (ADS)

    Wächtler, Christopher W.; Strasberg, Philipp; Brandes, Tobias

    2016-11-01

    In the derivation of fluctuation relations, and in stochastic thermodynamics in general, it is tacitly assumed that we can measure the system perfectly, i.e., without measurement errors. We here demonstrate for a driven system immersed in a single heat bath, for which the classic Jarzynski equality ⟨e^(-β(W-ΔF))⟩ = 1 holds, how to relax this assumption. Based on a general measurement model akin to Bayesian inference we derive a general expression for the fluctuation relation of the measured work and we study the case of an overdamped Brownian particle and of a two-level system in particular. We then generalize our results further and incorporate feedback in our description. We show and argue that, if measurement errors are fully taken into account by the agent who controls and observes the system, the standard Jarzynski-Sagawa-Ueda relation should be formulated differently. We again explicitly demonstrate this for an overdamped Brownian particle and a two-level system where the fluctuation relation of the measured work differs significantly from the efficacy parameter introduced by Sagawa and Ueda. Instead, the generalized fluctuation relation under feedback control, ⟨e^(-β(W-ΔF)-I)⟩ = 1, holds only for a superobserver having perfect access to both the system and detector degrees of freedom, independently of whether or not the detector yields a noisy measurement record and whether or not we perform feedback.
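
    A quick numerical illustration of why measurement error breaks the classic equality: for Gaussian work with variance σ², the Jarzynski equality pins ΔF = ⟨W⟩ - βσ²/2, but adding independent readout noise to W inflates the exponential average above 1. A sketch under those Gaussian assumptions (the specific means and noise level are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
beta, sigma, n = 1.0, 1.0, 1_000_000

w = rng.normal(2.0, sigma, n)              # true work values
dF = 2.0 - beta * sigma ** 2 / 2           # Gaussian case: <exp(-b(W-dF))> = 1

print(np.mean(np.exp(-beta * (w - dF))))   # ~1.0: Jarzynski holds

w_meas = w + rng.normal(0, 0.5, n)         # noisy measurement of the work
print(np.mean(np.exp(-beta * (w_meas - dF))))  # > 1: naive relation fails
```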

  6. Multiple reflections in a photoelastic modulator: errors in polarization measurement

    NASA Astrophysics Data System (ADS)

    Gemeiner, P.; Yang, D.; Canit, J. C.

    1996-09-01

    The use of a coherent light source (laser) can lead to significant errors when measurements of optical activity, magneto-optical Kerr rotation, dichroism, or ellipsometric parameters are made with a photoelastic modulator. In particular, a phenomenon of interference occurs between beams arising from multiple reflections in the modulator. These interferences give rise to parasitic effects which depend on the one hand on the characteristics of the modulator and on the other hand on the wavelength of the light. A variation of temperature causes a modification of these artefacts. They have been observed experimentally, and their amplitude is in good agreement with theoretical predictions based on a calculation of the interferences. The amplitude of an artefact may reach one degree of angle in the case of optical activity and five thousandths in the case of dichroism measurement. We have shown experimentally that these effects can be cancelled by inclining the modulator with respect to the axis of the light beam or by using a new modulator with a trapezoidal section.

  7. Error analysis and corrections to pupil diameter measurements with Langley Research Center's oculometer

    NASA Technical Reports Server (NTRS)

    Fulton, C. L.; Harris, R. L., Jr.

    1980-01-01

    Factors that can affect oculometer measurements of pupil diameter are: horizontal (azimuth) and vertical (elevation) viewing angle of the pilot; refraction of the eye and cornea; changes in distance from the eye to the camera; illumination intensity of light on the eye; and the counting sensitivity of the scan lines and output voltage used to measure diameter. To estimate the accuracy of the measurements, an artificial eye was designed and a series of runs performed with the oculometer system. When refraction effects are included, results show that pupil diameter is a parabolic function of the azimuth angle, similar to the cosine function predicted by theory; this error can be accounted for by using a correction equation, reducing the error from 6% to 1.5% of the actual diameter. Elevation angle and illumination effects were found to be negligible. The effects of counting sensitivity and output voltage can be calculated directly from system documentation. The overall accuracy of the unmodified system is about 6%; after correcting for the azimuth angle errors, the overall accuracy is approximately 2%.
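
    A sketch of the kind of correction implied above — dividing the measured diameter by a function of azimuth. The cosine form used here is a placeholder motivated by the theory mentioned in the abstract; the paper's actual equation was fit empirically to artificial-eye calibration runs:

```python
import math

def corrected_diameter(measured_mm: float, azimuth_deg: float) -> float:
    """Undo the apparent foreshortening of the pupil at off-axis viewing angles.

    Stand-in correction: theory predicts a roughly cosine dependence on
    azimuth; the published correction equation was derived from calibration.
    """
    return measured_mm / math.cos(math.radians(azimuth_deg))

print(corrected_diameter(4.7, 20.0))  # ~5.0 mm at a 20-degree azimuth
```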

  8. #2 - An Empirical Assessment of Exposure Measurement Error and Effect Attenuation in Bi-Pollutant Epidemiologic Models

    EPA Science Inventory

    Background: • Differing degrees of exposure error across pollutants • Previous focus on quantifying and accounting for exposure error in single-pollutant models • Examine exposure errors for multiple pollutants and provide insights on the potential for bias and attenuation...

  9. Quantifying the sources of error in measurements of urine activity

    SciTech Connect

    Mozley, P.D.; Kim, H.J.; McElgin, W.

    1994-05-01

    Accurate scintigraphic measurements of radioactivity in the bladder and voided urine specimens can be limited by scatter, attenuation, and variations in the volume of urine that a given dose is distributed in. The purpose of this study was to quantify some of the errors that these problems can introduce. Transmission scans and 41 conjugate images of the bladder were sequentially acquired on a dual-headed camera over 24 hours in 6 subjects after the intravenous administration of 100-150 MBq (2.7-3.6 mCi) of a novel I-123 labeled benzamide. Renal excretion fractions were calculated by measuring the counts in conjugate images of 41 sequentially voided urine samples. A correction for scatter was estimated by comparing the count rates in images that were acquired with the photopeak centered on 159 keV and images that were made simultaneously with the photopeak centered on 126 keV. The decay- and attenuation-corrected geometric mean activities were compared to images of the net dose injected. Checks of the results were performed by measuring the total volume of each voided urine specimen and determining the activity in a 20 ml aliquot of it with a dose calibrator. Modeling verified the experimental results, which showed that 34% of the counts were attenuated when the bladder had been expanded to a volume of 300 ml. Corrections for attenuation that were based solely on the transmission scans were limited by the volume of non-radioactive urine in the bladder before the activity was administered. The attenuation of activity in images of the voided urine samples was dependent on the geometry of the specimen container. The images of urine in standard, 300 ml laboratory specimen cups had 39 ± 5% fewer counts than images of the same samples laid out in 3 liter bedpans. Scatter through the carbon fiber table substantially increased the number of counts in the images by an average of 14%.

  10. On the reliability and standard errors of measurement of contrast measures from the D-KEFS.

    PubMed

    Crawford, John R; Sutherland, David; Garthwaite, Paul H

    2008-11-01

    A formula for the reliability of difference scores was used to estimate the reliability of Delis-Kaplan Executive Function System (D-KEFS; Delis et al., 2001) contrast measures from the reliabilities and correlations of their components. In turn these reliabilities were used to calculate standard errors of measurement. The majority of contrast measures had low reliabilities: of the 51 reliability coefficients calculated in the present study, none exceeded 0.7 and hence all failed to meet any of the criteria for acceptable reliability proposed by various experts in psychological measurement. The mean reliability of the contrast scores was 0.27, the median reliability was 0.30. The standard errors of measurement were large and, in many cases, equaled or were only marginally smaller than the contrast scores' standard deviations. The results suggest that, at present, D-KEFS contrast measures should not be used in neuropsychological decision making.
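
    The two formulas behind these results are standard psychometrics: the reliability of a difference (contrast) score from its components, and the standard error of measurement from that reliability. A minimal sketch, with illustrative component values rather than the D-KEFS figures:

```python
import math

def diff_reliability(r_xx: float, r_yy: float, r_xy: float) -> float:
    """Reliability of a difference score X - Y, assuming equal component
    variances: r_D = (0.5*(r_xx + r_yy) - r_xy) / (1 - r_xy)."""
    return (0.5 * (r_xx + r_yy) - r_xy) / (1.0 - r_xy)

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1.0 - reliability)

# Illustrative values: two reliable subtests that correlate substantially.
r_d = diff_reliability(r_xx=0.85, r_yy=0.80, r_xy=0.65)
print(f"contrast reliability = {r_d:.2f}")           # ~0.50
print(f"SEM = {sem(sd=3.0, reliability=r_d):.2f}")   # large relative to the SD
```

    This illustrates the paper's central point: even when both components are individually reliable, their difference can be unreliable whenever the components are highly correlated.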

  11. Evidence, exaggeration, and error in historical accounts of chaparral wildfires in California.

    PubMed

    Goforth, Brett R; Minnich, Richard A

    2007-04-01

    For more than half a century, ecologists and historians have been integrating the contemporary study of ecosystems with data gathered from historical sources to evaluate change over broad temporal and spatial scales. This approach is especially useful where ecosystems were altered before formal study as a result of natural resources management, land development, environmental pollution, and climate change. Yet, in many places, historical documents do not provide precise information, and pre-historical evidence is unavailable or has ambiguous interpretation. There are similar challenges in evaluating how the fire regime of chaparral in California has changed as a result of fire suppression management initiated at the beginning of the 20th century. Although the firestorm of October 2003 was the largest officially recorded in California (approximately 300,000 ha), historical accounts of pre-suppression wildfires have been cited as evidence that such a scale of burning was not unprecedented, suggesting the fire regime and patch mosaic in chaparral have not substantially changed. We find that the data do not support pre-suppression megafires, and that the impression of large historical wildfires is a result of imprecision and inaccuracy in the original reports, as well as a parlance that is beset with hyperbole. We underscore themes of importance for critically analyzing historical documents to evaluate ecological change. A putative 100 mile long by 10 mile wide (160 x 16 km) wildfire reported in 1889 was reconstructed to an area of chaparral approximately 40 times smaller by linking local accounts to property tax records, voter registration rolls, claimed insurance, and place names mapped with a geographical information system (GIS) which includes data from historical vegetation surveys. We also show that historical sources cited as evidence of other large chaparral wildfires are either demonstrably inaccurate or provide anecdotal information that is immaterial in the

  12. Water Accounting Plus (WA+) - a water accounting procedure for complex river basins based on satellite measurements

    NASA Astrophysics Data System (ADS)

    Karimi, P.; Bastiaanssen, W. G. M.; Molden, D.

    2012-11-01

    Coping with the issue of water scarcity and growing competition for water among different sectors requires proper water management strategies and decision processes. A prerequisite is a clear understanding of the basin hydrological processes, manageable and unmanageable water flows, the interaction with land use and opportunities to mitigate the negative effects and increase the benefits of water depletion on society. Currently, water professionals do not have a common framework that links hydrological flows to user groups of water and their benefits. The absence of a standard hydrological and water management summary is causing confusion and wrong decisions. The non-availability of water flow data is one of the underpinning reasons for not having operational water accounting systems for river basins in place. In this paper we introduce Water Accounting Plus (WA+), which is a new framework designed to provide explicit spatial information on water depletion and net withdrawal processes in complex river basins. The influence of land use on the water cycle is described explicitly by defining land use groups with common characteristics. Analogous to financial accounting, WA+ presents four sheets including (i) a resource base sheet, (ii) a consumption sheet, (iii) a productivity sheet, and (iv) a withdrawal sheet. Every sheet encompasses a set of indicators that summarize the overall water resources situation. The impact of external (e.g. climate change) and internal influences (e.g. infrastructure building) can be estimated by studying the changes in these WA+ indicators. Satellite measurements can be used for 3 of the 4 sheets, but they are not a precondition for implementing the WA+ framework. Data from hydrological models and water allocation models can also be used as inputs to WA+.

  13. On error sources during airborne measurements of the ambient electric field

    NASA Technical Reports Server (NTRS)

    Evteev, B. F.

    1991-01-01

    The principal sources of errors during airborne measurements of the ambient electric field and charge are addressed. Results of their analysis are presented for critical survey. It is demonstrated that the volume electric charge has to be accounted for during such measurements, that charge being generated at the airframe and wing surface by droplets of clouds and precipitation colliding with the aircraft. The local effect of that space charge depends on the flight regime (air speed, altitude, particle size, and cloud elevation). Such a dependence is displayed in the relation between the collector conductivity of the aircraft discharging circuit, on one hand, and the sum of all the residual conductivities contributing to aircraft discharge, on the other. Arguments are given in favor of variability in the aircraft electric capacitance. Techniques are suggested for measuring form factors to describe the aircraft charge.

  14. Error analysis of Raman differential absorption lidar ozone measurements in ice clouds.

    PubMed

    Reichardt, J

    2000-11-20

    A formalism for the error treatment of lidar ozone measurements with the Raman differential absorption lidar technique is presented. In the presence of clouds, wavelength-dependent multiple scattering and cloud-particle extinction are the main sources of systematic errors in ozone measurements and necessitate a correction of the measured ozone profiles. Model calculations are performed to describe the influence of cirrus and polar stratospheric clouds on the ozone. It is found that it is sufficient to account for cloud-particle scattering and Rayleigh scattering in and above the cloud; boundary-layer aerosols and the atmospheric column below the cloud can be neglected for the ozone correction. Furthermore, if the extinction coefficient of the cloud is ≤0.1 km⁻¹, the effect in the cloud is proportional to the effective particle extinction and to a particle correction function determined in the limit of negligible molecular scattering. The particle correction function depends on the scattering behavior of the cloud particles, the cloud geometric structure, and the lidar system parameters. Because of the differential extinction of light that has undergone one or more small-angle scattering processes within the cloud, the cloud effect on ozone extends to altitudes above the cloud. The various influencing parameters imply that the particle-related ozone correction has to be calculated for each individual measurement. Examples of ozone measurements in cirrus clouds are discussed.

  15. Implications of Three Causal Models for the Measurement of Halo Error.

    ERIC Educational Resources Information Center

    Fisicaro, Sebastiano A.; Lance, Charles E.

    1990-01-01

    Three conceptual definitions of halo error are reviewed in the context of causal models of halo error. A corrected correlational measurement of halo error is derived, and the traditional and corrected measures are compared empirically for a 1986 study of 52 undergraduate students' ratings of a lecturer's performance. (SLD)

  16. Properties of a Proposed Approximation to the Standard Error of Measurement.

    ERIC Educational Resources Information Center

    Nitko, Anthony J.

    An approximation formula for the standard error of measurement was recently proposed by Garvin. The properties of this approximation to the standard error of measurement are described in this paper and illustrated with hypothetical data. It is concluded that the approximation is a systematic overestimate of the standard error of measurement…

  17. Methods for estimation of radiation risk in epidemiological studies accounting for classical and Berkson errors in doses.

    PubMed

    Kukush, Alexander; Shklyar, Sergiy; Masiuk, Sergii; Likhtarov, Illya; Kovgan, Lina; Carroll, Raymond J; Bouville, Andre

    2011-02-16

    With a binary response Y, the dose-response model under consideration is logistic in flavor with pr(Y=1 | D) = R(1+R)^(-1), R = λ_0 + EAR·D, where λ_0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^mes = f_i Q_i^mes / M_i^mes. Here, Q_i^mes is the measured content of radioiodine in the thyroid gland of person i at time t^mes, M_i^mes is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^mes = Q_i^tr V_i^Q (a classical measurement error model) and M_i^tr = M_i^mes V_i^M (a Berkson measurement error model). Here, Q_i^tr is the true content of radioactivity in the thyroid gland, and M_i^tr is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^mes, M_i^mes) and is ignored in the analysis. By means of parametric full maximum likelihood and regression calibration (under the assumption that the data set of true doses has a lognormal distribution), nonparametric full maximum likelihood, nonparametric regression calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ_0 and EAR. A simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were taken from earlier epidemiological studies, and the binary response was then simulated according to the dose-response model.
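
    The distinction between the two error types in the dose formula can be seen in a few lines of simulation: classical error makes the observed activity scatter around its true value, while Berkson error makes the true mass scatter around the recorded estimate. A sketch under lognormal assumptions, with all numerical values hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)
n, f = 10_000, 1.0                          # f: normalizing multiplier

# Classical error: observed activity scatters around the true value.
q_tr = rng.lognormal(mean=3.0, sigma=0.5, size=n)
q_mes = q_tr * rng.lognormal(0.0, 0.3, n)   # Q_mes = Q_tr * V_Q

# Berkson error: the true mass scatters around the recorded estimate.
m_mes = np.full(n, 15.0)                    # e.g. a standard assigned mass
m_tr = m_mes * rng.lognormal(0.0, 0.2, n)   # M_tr = M_mes * V_M

d_mes = f * q_mes / m_mes                   # calculated dose
d_tr = f * q_tr / m_tr                      # true dose
print("corr(D_mes, D_tr) =", np.corrcoef(d_mes, d_tr)[0, 1])
```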

  18. Branch-Based Model for the Diameters of the Pulmonary Airways: Accounting for Departures From Self-Consistency and Registration Errors

    SciTech Connect

    Neradilek, Moni B.; Polissar, Nayak L.; Einstein, Daniel R.; Glenny, Robb W.; Minard, Kevin R.; Carson, James P.; Jiao, Xiangmin; Jacob, Richard E.; Cox, Timothy C.; Postlethwait, Edward M.; Corley, Richard A.

    2012-04-24

    We examine a previously published branch-based approach to modeling airway diameters that is predicated on the assumption of self-consistency across all levels of the tree. We mathematically formulate this assumption, propose a method to test it, and develop a more general model to be used when the assumption is violated. We discuss the effect of measurement error on the estimated models and propose methods that account for it. The methods are illustrated on data from MRI and CT images of silicone casts of two rats, two normal monkeys, and one ozone-exposed monkey. Our results showed substantial departures from self-consistency in all five subjects. When departures from self-consistency exist, we do not recommend using the self-consistency model, even as an approximation, as we have shown that it is likely to lead to an incorrect representation of the diameter geometry. Measurement error has an important impact on the estimated morphometry models and needs to be accounted for in the analysis.

  19. Branch-based model for the diameters of the pulmonary airways: accounting for departures from self-consistency and registration errors.

    PubMed

    Neradilek, Moni B; Polissar, Nayak L; Einstein, Daniel R; Glenny, Robb W; Minard, Kevin R; Carson, James P; Jiao, Xiangmin; Jacob, Richard E; Cox, Timothy C; Postlethwait, Edward M; Corley, Richard A

    2012-06-01

    We examine a previously published branch-based approach for modeling airway diameters that is predicated on the assumption of self-consistency across all levels of the tree. We mathematically formulate this assumption, propose a method to test it and develop a more general model to be used when the assumption is violated. We discuss the effect of measurement error on the estimated models and propose methods that take account of error. The methods are illustrated on data from MRI and CT images of silicone casts of two rats, two normal monkeys, and one ozone-exposed monkey. Our results showed substantial departures from self-consistency in all five subjects. When departures from self-consistency exist, we do not recommend using the self-consistency model, even as an approximation, as we have shown that it is likely to lead to an incorrect representation of the diameter geometry. The new variance model can be used instead. Measurement error has an important impact on the estimated morphometry models and needs to be addressed in the analysis.

  20. Error Ellipsoid Analysis for the Diameter Measurement of Cylindroid Components Using a Laser Radar Measurement System

    PubMed Central

    Du, Zhengchun; Wu, Zhaoyong; Yang, Jianguo

    2016-01-01

    The use of three-dimensional (3D) data in the industrial measurement field is becoming increasingly popular because of the rapid development of laser scanning techniques based on the time-of-flight principle. However, the accuracy and uncertainty of these types of measurement methods are seldom investigated. In this study, a mathematical uncertainty evaluation model for the diameter measurement of standard cylindroid components has been proposed and applied to a 3D laser radar measurement system (LRMS). First, a single-point error ellipsoid analysis for the LRMS was established. An error ellipsoid model and algorithm for diameter measurement of cylindroid components was then proposed based on the single-point error ellipsoid. Finally, four experiments were conducted using the LRMS to measure the diameter of a standard cylinder in the laboratory. The experimental results of the uncertainty evaluation consistently matched well with the predictions. The proposed uncertainty evaluation model for cylindrical diameters can provide a reliable method for actual measurements and support further accuracy improvement of the LRMS. PMID:27213385

  1. Error Ellipsoid Analysis for the Diameter Measurement of Cylindroid Components Using a Laser Radar Measurement System.

    PubMed

    Du, Zhengchun; Wu, Zhaoyong; Yang, Jianguo

    2016-05-19

    The use of three-dimensional (3D) data in the industrial measurement field is becoming increasingly popular because of the rapid development of laser scanning techniques based on the time-of-flight principle. However, the accuracy and uncertainty of these types of measurement methods are seldom investigated. In this study, a mathematical uncertainty evaluation model for the diameter measurement of standard cylindroid components has been proposed and applied to a 3D laser radar measurement system (LRMS). First, a single-point error ellipsoid analysis for the LRMS was established. An error ellipsoid model and algorithm for diameter measurement of cylindroid components was then proposed based on the single-point error ellipsoid. Finally, four experiments were conducted using the LRMS to measure the diameter of a standard cylinder in the laboratory. The experimental results of the uncertainty evaluation consistently matched well with the predictions. The proposed uncertainty evaluation model for cylindrical diameters can provide a reliable method for actual measurements and support further accuracy improvement of the LRMS.
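
    A minimal Python sketch of the single-point error ellipsoid idea described in the two records above: propagate range and angle uncertainties of a spherical (range, azimuth, elevation) measurement into a Cartesian covariance via the Jacobian, then read the ellipsoid semi-axes off its eigenvalues; all numbers are illustrative assumptions, not the LRMS specifications:

        import numpy as np

        def point_covariance(r, az, el, sig_r, sig_az, sig_el):
            # Jacobian of (r, az, el) -> (x, y, z) with
            # x = r*cos(el)*cos(az), y = r*cos(el)*sin(az), z = r*sin(el)
            ca, sa = np.cos(az), np.sin(az)
            ce, se = np.cos(el), np.sin(el)
            J = np.array([
                [ce * ca, -r * ce * sa, -r * se * ca],
                [ce * sa,  r * ce * ca, -r * se * sa],
                [se,       0.0,          r * ce],
            ])
            S = np.diag([sig_r**2, sig_az**2, sig_el**2])
            return J @ S @ J.T

        # Illustrative: 10 m range, 50 um range noise, 10 urad angle noise.
        C = point_covariance(10.0, np.radians(30), np.radians(10),
                             50e-6, 10e-6, 10e-6)
        evals, _ = np.linalg.eigh(C)
        print("error ellipsoid semi-axes (1-sigma, m):", np.sqrt(evals))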

  2. A heteroscedastic measurement error model for method comparison data with replicate measurements.

    PubMed

    Nawarathna, Lakshika S; Choudhary, Pankaj K

    2015-03-30

    Measurement error models offer a flexible framework for modeling data collected in studies comparing methods of quantitative measurement. These models generally make two simplifying assumptions: (i) the measurements are homoscedastic, and (ii) the unobservable true values of the methods are linearly related. One or both of these assumptions may be violated in practice. In particular, error variabilities of the methods may depend on the magnitude of measurement, or the true values may be nonlinearly related. Data with these features call for a heteroscedastic measurement error model that allows nonlinear relationships in the true values. We present such a model for the case when the measurements are replicated, discuss its fitting, and explain how to evaluate similarity of measurement methods and agreement between them, which are two common goals of data analysis, under this model. Model fitting involves dealing with the lack of a closed form for the likelihood function. We consider estimation methods that approximate either the likelihood or the model to yield approximate maximum likelihood estimates. The fitting methods are evaluated in a simulation study. The proposed methodology is used to analyze a cholesterol dataset.

  3. Measurement error in epidemiologic studies of air pollution based on land-use regression models.

    PubMed

    Basagaña, Xavier; Aguilera, Inmaculada; Rivera, Marcela; Agis, David; Foraster, Maria; Marrugat, Jaume; Elosua, Roberto; Künzli, Nino

    2013-10-15

    Land-use regression (LUR) models are increasingly used to estimate air pollution exposure in epidemiologic studies. These models use air pollution measurements taken at a small set of locations and modeling based on geographical covariates for which data are available at all study participant locations. The process of LUR model development commonly includes a variable selection procedure. When LUR model predictions are used as explanatory variables in a model for a health outcome, measurement error can lead to bias of the regression coefficients and to inflation of their variance. In previous studies dealing with spatial predictions of air pollution, bias was shown to be small while most of the effect of measurement error was on the variance. In this study, we show that in realistic cases where LUR models are applied to health data, bias in health-effect estimates can be substantial. This bias depends on the number of air pollution measurement sites, the number of available predictors for model selection, and the amount of explainable variability in the true exposure. These results should be taken into account when interpreting health effects from studies that used LUR models.
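
    A minimal Python simulation in the spirit of the record above (the study's actual design differs, and every number here is an illustrative assumption): a LUR model is fitted by forward selection on a small set of monitoring sites, its predictions are used as the exposure in a health model, and the health-effect estimate is biased relative to the truth; the size of the bias depends on the number of sites, the number of candidate predictors, and the explainable exposure variability:

        import numpy as np

        rng = np.random.default_rng(0)
        n_sites, n_subjects, n_candidates = 25, 2000, 40

        # True exposure depends on 3 of 40 candidate geographic covariates.
        beta_true = np.zeros(n_candidates)
        beta_true[:3] = [1.0, -0.8, 0.5]

        def exposure(Z):
            return Z @ beta_true + rng.normal(0, 1.5, len(Z))  # unexplained part

        Z_sites = rng.normal(size=(n_sites, n_candidates))
        Z_subj = rng.normal(size=(n_subjects, n_candidates))
        x_sites, x_subj = exposure(Z_sites), exposure(Z_subj)

        # Forward selection of 6 LUR predictors on the monitoring sites.
        selected, resid = [], x_sites.copy()
        for _ in range(6):
            scores = [abs(np.corrcoef(Z_sites[:, j], resid)[0, 1])
                      for j in range(n_candidates)]
            selected.append(int(np.argmax(scores)))
            coef, *_ = np.linalg.lstsq(Z_sites[:, selected], x_sites, rcond=None)
            resid = x_sites - Z_sites[:, selected] @ coef

        x_hat = Z_subj[:, selected] @ coef       # LUR-predicted exposure

        # Health outcome generated from the *true* exposure.
        y = 0.3 * x_subj + rng.normal(0, 1.0, n_subjects)
        print("true effect 0.3, estimate using LUR exposure:",
              round(np.polyfit(x_hat, y, 1)[0], 3))   # noticeably biased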

  4. Accounting for location error in Kalman filters: integrating animal borne sensor data into assimilation schemes.

    PubMed

    Sengupta, Aritra; Foster, Scott D; Patterson, Toby A; Bravington, Mark

    2012-01-01

    Data assimilation is a crucial aspect of modern oceanography. It allows forecasting and backward smoothing of the ocean state from noisy observations. Statistical methods are employed to perform these tasks and are often based on, or related to, the Kalman filter. Typically, Kalman filters assume that the locations associated with observations are known with certainty. This is reasonable for traditional oceanographic measurement methods. Recently, however, an alternative and abundant source of data has emerged: ocean sensors deployed on marine animals. This source of data has some attractive properties: unlike traditional oceanographic collection platforms, it is relatively cheap to collect, plentiful, has multiple scientific uses and users, and samples areas of the ocean that are often difficult or costly to sample. However, inherent uncertainty in the location of the observations is a barrier to full utilisation of animal-borne sensor data in data-assimilation schemes. In this article we examine this issue and suggest a simple approximation to explicitly incorporate the location uncertainty, while staying within the scope of Kalman-filter-like methods. The approximation stems from a Taylor-series approximation to elements of the updating equation.
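
    One simple way to realize the Taylor-series idea sketched above (the article's exact formulation may differ): expanding the observed field around the nominal location turns location uncertainty into extra observation-error variance, so the Kalman update simply down-weights poorly located observations. A scalar Python sketch with illustrative numbers:

        import numpy as np

        def update_with_location_error(x_prior, P_prior, y, R, grad, sigma_loc):
            # y ~ T(s0) + T'(s0)*(s - s0) + v  =>  R_eff = R + T'(s0)^2 * var(s)
            R_eff = R + (grad ** 2) * (sigma_loc ** 2)
            K = P_prior / (P_prior + R_eff)          # Kalman gain
            return x_prior + K * (y - x_prior), (1.0 - K) * P_prior

        # Prior temperature 10.0 +/- 0.5 degC; tag reports 10.6 degC with
        # sensor noise sd 0.2 degC; local gradient 0.05 degC/km; location
        # uncertainty sd 20 km (all values illustrative).
        x, P = update_with_location_error(10.0, 0.5**2, 10.6, 0.2**2, 0.05, 20.0)
        print(x, P)   # the observation is strongly down-weighted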

  5. Tilt error in cryospheric surface radiation measurements at high latitudes: a model study

    NASA Astrophysics Data System (ADS)

    Bogren, W. S.; Burkhart, J. F.; Kylling, A.

    2015-08-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in-situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response foreoptic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can respectively introduce up to 2.6, 7.7, and 12.8 % error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo.
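
    The dominance of the direct component follows from simple geometry: a sensor whose normal is tilted by an angle β toward an azimuth offset Δφ from the sun sees the direct beam at a modified incidence angle, so the relative error is cos(incidence)/cos(SZA) − 1. A short Python check (direct beam only, which is why it slightly exceeds the paper's totals, which are diluted by the diffuse component):

        import numpy as np

        def direct_tilt_error(sza_deg, tilt_deg, dazi_deg):
            # Relative error of the direct-beam component for a tilted sensor.
            sza, tilt, dazi = np.radians([sza_deg, tilt_deg, dazi_deg])
            cos_inc = (np.cos(sza) * np.cos(tilt)
                       + np.sin(sza) * np.sin(tilt) * np.cos(dazi))
            return cos_inc / np.cos(sza) - 1.0

        # Worst case (tilted toward the sun) at 60 degree solar zenith angle:
        for tilt in (1, 3, 5):
            print(tilt, f"{direct_tilt_error(60, tilt, 0):+.1%}")
        # ~ +3.0%, +8.9%, +14.7%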

  6. Manifest variable path analysis: potentially serious and misleading consequences due to uncorrected measurement error.

    PubMed

    Cole, David A; Preacher, Kristopher J

    2014-06-01

    Despite clear evidence that manifest variable path analysis requires highly reliable measures, path analyses with fallible measures are commonplace even in premier journals. Using fallible measures in path analysis can cause several serious problems: (a) As measurement error pervades a given data set, many path coefficients may be either over- or underestimated. (b) Extensive measurement error diminishes power and can prevent invalid models from being rejected. (c) Even a little measurement error can cause valid models to appear invalid. (d) Differential measurement error in various parts of a model can change the substantive conclusions that derive from path analysis. (e) All of these problems become increasingly serious and intractable as models become more complex. Methods to prevent and correct these problems are reviewed. The conclusion is that researchers should use more reliable measures (or correct for measurement error in the measures they do use), obtain multiple measures for use in latent variable modeling, and test simpler models containing fewer variables.

  7. Source and magnitude of error in an inexpensive image-based water level measurement system

    NASA Astrophysics Data System (ADS)

    Gilmore, Troy E.; Birgand, François; Chapman, Kenneth W.

    2013-07-01

    Recent technological advances have opened the possibility of using webcams and images as part of the environmental monitoring arsenal. The potential sources and magnitudes of uncertainty inherent to an image-based water level measurement system are evaluated in a laboratory experimental design. Sources of error investigated include image resolution, lighting effects, perspective, lens distortion, and the water meniscus. Image resolution and the meniscus were found to contribute most to the overall uncertainty of the system. Image distortion, although largely accounted for by the software developed, may also add significantly to uncertainty. Results suggest that "flat" images with little distortion are preferable. After correction for the water meniscus, images captured with a camera (12 mm or 16 mm focal length) positioned 4-7 m from the water's edge have the potential to yield water level measurements within ±3 mm when using this technique.

  8. Determination of the resonant harmonics of the error field from dynamic magnetic measurements in a tokamak

    SciTech Connect

    Pustovitov, V. D.

    2008-01-15

    The possibility is discussed of determining the amplitude and phase of a static resonant error field in a tokamak by means of dynamic magnetic measurements. The method proposed assumes measuring the plasma response to a varying external helical magnetic field with a small (a few gauss) amplitude. The case is considered in which the plasma is probed by square pulses with a duration much longer than the time of the transition process. The plasma response is assumed to be linear, with a proportionality coefficient being dependent on the plasma state. The analysis is carried out in a standard cylindrical approximation. The model is based on Maxwell's equations and Ohm's law and is thus capable of accounting for the interaction of large-scale modes with the conducting wall of the vacuum chamber. The method can be applied to existing tokamaks.

  9. Estimating Measurement Error of the Patient Activation Measure for Respondents with Partially Missing Data.

    PubMed

    Linden, Ariel

    2015-01-01

    The patient activation measure (PAM) is an increasingly popular instrument used as the basis for interventions to improve patient engagement and as an outcome measure to assess intervention effect. However, a PAM score may be calculated when there are missing responses, which could lead to substantial measurement error. In this paper, measurement error is systematically estimated across the full possible range of missing items (one to twelve), using simulation in which populated items were randomly replaced with missing data for each of 1,138 complete surveys obtained in a randomized controlled trial. The PAM score was then calculated, followed by comparisons of overall simulated average mean, minimum, and maximum PAM scores to the true PAM score in order to assess the absolute percentage error (APE) for each comparison. With only one missing item, the average APE was 2.5% comparing the true PAM score to the simulated minimum score and 4.3% compared to the simulated maximum score. APEs increased with additional missing items, such that surveys with 12 missing items had average APEs of 29.7% (minimum) and 44.4% (maximum). Several suggestions and alternative approaches are offered that could be pursued to improve measurement accuracy when responses are missing.

  10. False Positives in Multiple Regression: Unanticipated Consequences of Measurement Error in the Predictor Variables

    ERIC Educational Resources Information Center

    Shear, Benjamin R.; Zumbo, Bruno D.

    2013-01-01

    Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
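
    A minimal Python simulation of the mechanism behind this record (design and numbers are illustrative): the outcome depends only on X1; X2 is a correlated null predictor; because the error-prone measurement of X1 controls for it only imperfectly, X2 picks up residual confounding and tests as "significant" far more often than the nominal 5%:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        def one_trial(n=200, rho=0.7, reliability=0.6):
            x1 = rng.normal(size=n)
            x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.normal(size=n)
            y = 0.5 * x1 + rng.normal(size=n)
            # X1 observed with classical error at the given reliability.
            w1 = x1 + rng.normal(0, np.sqrt(1 / reliability - 1), size=n)
            X = np.column_stack([np.ones(n), w1, x2])
            beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
            df = n - 3
            se = np.sqrt(rss[0] / df * np.linalg.inv(X.T @ X)[2, 2])
            pval = 2 * stats.t.sf(abs(beta[2] / se), df)
            return pval < 0.05                      # false positive on X2?

        print(np.mean([one_trial() for _ in range(2000)]))  # well above 0.05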

  11. Measurement accuracy of articulated arm CMMs with circular grating eccentricity errors

    NASA Astrophysics Data System (ADS)

    Zheng, Dateng; Yin, Sanfeng; Luo, Zhiyang; Zhang, Jing; Zhou, Taiping

    2016-11-01

    The 6 circular grating eccentricity errors model attempts to improve the measurement accuracy of an articulated arm coordinate measuring machine (AACMM) without increasing the corresponding hardware cost. We analyzed the AACMM’s circular grating eccentricity and obtained the 6 joints’ circular grating eccentricity error model parameters by conducting circular grating eccentricity error experiments. We completed the calibration operations for the measurement models by using home-made standard bar components. Our results show that the measurement errors from the AACMM’s measurement model without and with circular grating eccentricity errors are 0.0834 mm and 0.0462 mm, respectively. Significantly, we determined that measurement accuracy increased by about 44.6% when the circular grating eccentricity errors were corrected. This study is significant because it promotes wider applications of AACMMs both in theory and in practice.

  12. Effects of measurement error on the strength of concentration-response relationships in aquatic toxicology.

    PubMed

    Sonderegger, Derek L; Wang, Haonan; Huang, Yao; Clements, William H

    2009-10-01

    The effect that measurement error in predictor variables has on regression inference is well known in the statistical literature. However, the influence of measurement error on the ability to quantify relationships between chemical stressors and biological responses has received little attention in ecotoxicology. We present a common data-collection scenario and demonstrate that the relationship between explanatory and response variables is consistently underestimated when measurement error is ignored. A straightforward extension of the regression calibration method is to smooth the predictor variable nonparametrically with respect to another covariate (e.g., time) and then to use the smoothed predictor to model the response variable. We conducted a simulation study to compare the effectiveness of the proposed method with that of the naive analysis that ignores measurement error. We conclude that the method satisfactorily addresses the problem when measurement error is moderate to large, and does not result in a noticeable loss of power when measurement error is absent.
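
    A minimal Python sketch of the smoothing idea described above, using a LOWESS smooth of the mismeasured predictor over time before fitting the concentration-response slope (the simulated data and tuning are illustrative assumptions):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        t = np.linspace(0, 10, 300)
        x_true = np.sin(t) + 0.1 * t                   # slowly varying stressor
        w = x_true + rng.normal(0, 0.5, t.size)        # mismeasured concentration
        y = 2.0 * x_true + rng.normal(0, 0.5, t.size)  # biological response

        naive = np.polyfit(w, y, 1)[0]                 # attenuated slope
        # Smooth the predictor with respect to time, then regress on it.
        x_smooth = sm.nonparametric.lowess(w, t, frac=0.2, return_sorted=False)
        calibrated = np.polyfit(x_smooth, y, 1)[0]
        print(f"true slope 2.0 | naive {naive:.2f} | smoothed {calibrated:.2f}")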

  13. Automated suppression of errors in LTP-II slope measurements with x-ray optics. Part 1: Review of LTP errors and methods for the error reduction

    SciTech Connect

    Ali, Zulfiqar; Yashchuk, Valeriy V.

    2011-05-11

    Systematic error and instrumental drift are the major limiting factors of sub-microradian slope metrology with state-of-the-art x-ray optics. Significant suppression of the errors can be achieved by using an optimal measurement strategy suggested in [Rev. Sci. Instrum. 80, 115101 (2009)]. With this series of LSBL Notes, we report on the development of an automated, kinematic, rotational system that provides fully controlled flipping, tilting, and shifting of a surface under test. The system is integrated into the Advanced Light Source long trace profiler, LTP-II, allowing for complete realization of the advantages of the optimal measurement strategy method. We provide details of the system's design, operational control, and data acquisition. The high performance of the system is demonstrated via the results of high-precision measurements with a spherical test mirror.

  14. Measurements of Intrahost Viral Diversity Are Extremely Sensitive to Systematic Errors in Variant Calling

    PubMed Central

    McCrone, John T.

    2016-01-01

    With next-generation sequencing technologies, it is now feasible to efficiently sequence patient-derived virus populations at a depth of coverage sufficient to detect rare variants. However, each sequencing platform has characteristic error profiles, and sample collection, target amplification, and library preparation are additional processes whereby errors are introduced and propagated. Many studies account for these errors by using ad hoc quality thresholds and/or previously published statistical algorithms. Despite common usage, the majority of these approaches have not been validated under conditions that characterize many studies of intrahost diversity. Here, we use defined populations of influenza virus to mimic the diversity and titer typically found in patient-derived samples. We identified single-nucleotide variants using two commonly employed variant callers, DeepSNV and LoFreq. We found that the accuracy of these variant callers was lower than expected and exquisitely sensitive to the input titer. Small reductions in specificity had a significant impact on the number of minority variants identified and subsequent measures of diversity. We were able to increase the specificity of DeepSNV to >99.95% by applying an empirically validated set of quality thresholds. When applied to a set of influenza virus samples from a household-based cohort study, these changes resulted in a 10-fold reduction in measurements of viral diversity. We have made our sequence data and analysis code available so that others may improve on our work and use our data set to benchmark their own bioinformatics pipelines. Our work demonstrates that inadequate quality control and validation can lead to significant overestimation of intrahost diversity. IMPORTANCE Advances in sequencing technology have made it feasible to sequence patient-derived viral samples at a level sufficient for detection of rare mutations. These high-throughput, cost-effective methods are revolutionizing
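
    The sensitivity to specificity is easy to see with back-of-the-envelope arithmetic (the genome size here is an illustrative figure, not taken from the paper): the expected number of false-positive variants per sample is roughly the number of positions tested times one minus the specificity.

        # ~13,000 positions tested per influenza sample (illustrative)
        sites = 13_000
        for spec in (0.999, 0.9995, 0.99995):
            print(f"specificity {spec:.5f}: ~{sites * (1 - spec):.1f} false variants")
        # 0.99900: ~13.0 | 0.99950: ~6.5 | 0.99995: ~0.7 per sample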

  15. Computational Fluid Dynamics Analysis on Radiation Error of Surface Air Temperature Measurement

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Liu, Qing-Quan; Ding, Ren-Hui

    2017-01-01

    Due to solar radiation effect, current air temperature sensors inside a naturally ventilated radiation shield may produce a measurement error that is 0.8 K or higher. To improve air temperature observation accuracy and correct historical temperature of weather stations, a radiation error correction method is proposed. The correction method is based on a computational fluid dynamics (CFD) method and a genetic algorithm (GA) method. The CFD method is implemented to obtain the radiation error of the naturally ventilated radiation shield under various environmental conditions. Then, a radiation error correction equation is obtained by fitting the CFD results using the GA method. To verify the performance of the correction equation, the naturally ventilated radiation shield and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated temperature measurement platform serves as an air temperature reference. The mean radiation error given by the intercomparison experiments is 0.23 K, and the mean radiation error given by the correction equation is 0.2 K. This radiation error correction method allows the radiation error to be reduced by approximately 87 %. The mean absolute error and the root mean square error between the radiation errors given by the correction equation and the radiation errors given by the experiments are 0.036 K and 0.045 K, respectively.

  16. Errors in scatterometer-radiometer wind measurement due to rain

    NASA Technical Reports Server (NTRS)

    Moore, R. K.; Chaudhry, A. H.; Birrer, I. J.

    1983-01-01

    The behavior of radiometer corrections for the scatterometer is investigated by simulating simple situations using footprint sizes comparable with those used in the SEASAT-1 experiment and also actual footprints and rain rates from a hurricane observed by the SEASAT-1 system. The effects on correction due to attenuation and wind speed gradients are examined independently and jointly. It is shown that the error in the wind-speed estimate can be as large as 200% at higher wind speeds. The worst error occurs when the scatterometer footprint overlaps two or more radiometer footprints and the attenuation in the scatterometer footprint differs greatly from those in parts of the radiometer footprints. This problem could be overcome by using a true radiometer-scatterometer system having identical coincident footprints comparable in size with typical rain cells.

  17. Solving Inverse Radiation Transport Problems with Multi-Sensor Data in the Presence of Correlated Measurement and Modeling Errors

    SciTech Connect

    Thomas, Edward V.; Stork, Christopher L.; Mattingly, John K.

    2015-07-01

    Inverse radiation transport focuses on identifying the configuration of an unknown radiation source given its observed radiation signatures. The inverse problem is traditionally solved by finding the set of transport model parameter values that minimizes a weighted sum of the squared differences by channel between the observed signature and the signature predicted by the hypothesized model parameters. The weights are inversely proportional to the sum of the variances of the measurement and model errors at a given channel. The traditional implicit (often inaccurate) assumption is that the errors (differences between the modeled and observed radiation signatures) are independent across channels. Here, an alternative method that accounts for correlated errors between channels is described and illustrated using an inverse problem based on the combination of gamma and neutron multiplicity counting measurements.
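
    The alternative amounts to replacing the diagonal weight matrix with a full error covariance, i.e., generalized rather than weighted least squares. A minimal Python sketch with an AR(1)-style channel covariance (all specifics are illustrative assumptions, not the paper's model):

        import numpy as np

        def gls_fit(G, y, C):
            # Minimize (y - G b)' C^{-1} (y - G b); off-diagonal terms of C
            # carry the correlated measurement/model errors across channels.
            Ci = np.linalg.inv(C)
            return np.linalg.solve(G.T @ Ci @ G, G.T @ Ci @ y)

        rng = np.random.default_rng(0)
        n = 64                                        # spectral channels
        G = np.column_stack([np.ones(n), np.linspace(0, 1, n)])
        b_true = np.array([1.0, 2.0])

        rho, sig = 0.8, 0.1                           # AR(1) channel errors
        C = sig**2 * rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
        y = G @ b_true + np.linalg.cholesky(C) @ rng.normal(size=n)

        print("WLS (diagonal C):", gls_fit(G, y, np.diag(np.diag(C))))
        print("GLS (full C):    ", gls_fit(G, y, C))  # more efficient weights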

  18. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part II—Experimental Implementation

    PubMed Central

    Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario

    2016-01-01

    Coordinate measuring machines (CMM) are main instruments of measurement in laboratories and in industrial quality control. A compensation error model has been formulated (Part I). It integrates error and uncertainty in the feature measurement model. Experimental implementation for the verification of this model is carried out based on the direct testing on a moving bridge CMM. The regression results by axis are quantified and compared to CMM indication with respect to the assigned values of the measurand. Next, testing of selected measurements of length, flatness, dihedral angle, and roundness features are accomplished. The measurement of calibrated gauge blocks for length or angle, flatness verification of the CMM granite table and roundness of a precision glass hemisphere are presented under a setup of repeatability conditions. The results are analysed and compared with alternative methods of estimation. The overall performance of the model is endorsed through experimental verification, as well as the practical use and the model capability to contribute in the improvement of current standard CMM measuring capabilities. PMID:27754441

  19. Error analysis in the measurement of average power with application to switching controllers

    NASA Technical Reports Server (NTRS)

    Maisel, J. E.

    1980-01-01

    Power measurement errors due to the bandwidth of a power meter and the sampling of the input voltage and current of a power meter were investigated assuming sinusoidal excitation and periodic signals generated by a model of a simple chopper system. Errors incurred in measuring power using a microcomputer with limited data storage were also considered. The behavior of the power measurement error due to the frequency responses of first order transfer functions between the input sinusoidal voltage, input sinusoidal current, and the signal multiplier was studied. Results indicate that this power measurement error can be minimized if the frequency responses of the first order transfer functions are identical. The power error analysis was extended to include the power measurement error for a model of a simple chopper system with a power source and an ideal shunt motor acting as an electrical load for the chopper. The behavior of the power measurement error was determined as a function of the chopper's duty cycle and back EMF of the shunt motor. Results indicate that the error is large when the duty cycle or back EMF is small. Theoretical and experimental results indicate that the power measurement error due to sampling of sinusoidal voltages and currents becomes excessively large when the number of observation periods approaches one-half the size of the microcomputer data memory allocated to the storage of either the input sinusoidal voltage or current.

  20. Improving Oncology Quality Measurement in Accountable Care: Filling Gaps with Cross-Cutting Measures.

    PubMed

    Valuck, Tom; Blaisdell, David; Dugan, Donna P; Westrich, Kimberly; Dubois, Robert W; Miller, Robert S; McClellan, Mark

    2017-02-01

    Payment for health care services, including oncology services, is shifting from volume-based fee-for-service to value-based accountable care. The objective of accountable care is to support providers with flexibility and resources to reform care delivery, accompanied by accountability for maintaining or improving outcomes while lowering costs. These changes depend on health care payers, systems, physicians, and patients having meaningful measures to assess care delivery and outcomes and to balance financial incentives for lowering costs while providing greater value. Gaps in accountable care measure sets may cause missed signals of problems in care and missed opportunities for improvement. Measures to balance financial incentives may be particularly important for oncology, where high cost and increasingly targeted diagnostics and therapeutics intersect with the highly complex and heterogeneous needs and preferences of cancer patients. Moreover, the concept of value in cancer care, defined as the measure of outcomes achieved per costs incurred, is rarely incorporated into performance measurement. This article analyzes gaps in oncology measures in accountable care, discusses challenging measurement issues, and offers strategies for improving oncology measurement. Discern Health analyzed gaps in accountable care measure sets for 10 cancer conditions that were selected based on incidence and prevalence; impact on cost and mortality; a diverse range of high-cost diagnostic procedures and treatment modalities (e.g., genomic tumor testing, molecularly targeted therapies, and stereotactic radiotherapy); and disparities or performance gaps in patient care. We identified gaps by comparing accountable care set measures with high-priority measurement opportunities derived from practice guidelines developed by the National Comprehensive Cancer Network and other oncology specialty societies. We found significant gaps in accountable care measure sets across all 10 conditions. For

  1. Analyses of assumptions and errors in the calculation of stomatal conductance from sap flux measurements.

    PubMed

    Ewers, Brent E.; Oren, Ram

    2000-05-01

    We analyzed assumptions and measurement errors in estimating canopy transpiration (E_L) from sap flux (J_S) measured with Granier-type sensors, and in calculating canopy stomatal conductance (G_S) from E_L and vapor pressure deficit (D). The study was performed in 12-year-old Pinus taeda L. stands with a wide range in leaf area index (L) and growth rate. No systematic differences in J_S were found between the north and south sides of trees. However, J_S in xylem between 20 and 40 mm from the cambium was 50 and 39% of J_S in the outer 20-mm band of xylem in slow- and fast-growing trees, respectively. Sap flux measured in stems did not lag J_S measured in branches, and time and frequency domain analyses of time series indicated that variability in J_S in stems and branches is mostly explained by variation in D. Therefore, J_S was used to estimate transpiration, after accounting for radial patterns. There was no difference between D and the leaf-to-air vapor pressure gradient, and D did not have a vertical profile in stands of either low or high L, suggesting strong canopy-atmosphere coupling. Therefore, D estimated at one point in the canopy can be used to calculate G_S in such stands. Given the uncertainties in J_S, relative humidity, and temperature measurements, to keep errors in G_S estimates to less than 10%, estimates of G_S should be limited to conditions in which D ≥ 0.6 kPa.
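
    In the well-coupled case described above, G_S can be computed from E_L and a single D. A short Python sketch using the temperature-dependent conductance coefficient commonly applied with this approach (the constants 115.8 and 0.4236 are quoted from memory and should be checked against the paper before use):

        def canopy_conductance(E_L, D_kPa, T_celsius):
            """G_S (m/s) from E_L (kg m-2 s-1) and vapor pressure deficit D (kPa)."""
            K_G = 115.8 + 0.4236 * T_celsius           # kPa m3 kg-1, assumed form
            return K_G * E_L / D_kPa

        # Apply only when D >= 0.6 kPa, per the error analysis above.
        print(canopy_conductance(E_L=1.2e-4, D_kPa=1.0, T_celsius=25.0))  # ~0.015 m/s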

  2. Compensation method for the alignment angle error of a gear axis in profile deviation measurement

    NASA Astrophysics Data System (ADS)

    Fang, Suping; Liu, Yongsheng; Wang, Huiyi; Taguchi, Tetsuya; Takeda, Ryuhei

    2013-05-01

    In the precision measurement of involute helical gears, the alignment angle error of the gear axis, which is caused by the assembly error of the gear measuring machine, affects the measurement accuracy of the profile deviation. A model of the involute helical gear is established under the condition that an alignment angle error of the gear axis exists. Based on the measurement theory of profile deviation, and without changing the initial measurement method or data processing of the gear measuring machine, a compensation method is proposed for the alignment angle error of the gear axis that is included in the profile deviation measurement results. Using this method, the alignment angle error of the gear axis can be compensated for precisely. Experiments comparing the residual alignment angle error of the gear axis after compensation with the initial alignment angle error were performed to verify the accuracy and feasibility of this method. Experimental results show that the residual alignment angle error included in the profile deviation measurement results is decreased by more than 85% after compensation, and that the compensation method significantly improves the measurement accuracy of the profile deviation of involute helical gears.

  3. Analysis of measured data of human body based on error correcting frequency

    NASA Astrophysics Data System (ADS)

    Jin, Aiyan; Peipei, Gao; Shang, Xiaomei

    2014-04-01

    Anthropometry is the measurement of the human body surface; the measured data are the basis for analysis and study of the human body, for the establishment and modification of garment sizes, and for the design and operation of online clothing stores. In this paper, several groups of measured data are obtained, and the data error is analyzed by examining error frequency and by applying the analysis-of-variance method of mathematical statistics. The paper also covers determination of the accuracy of the measured data, the difficulty of measuring particular parts of the human body, further study of the causes of data errors, and a summary of the key points for minimizing errors. By analyzing measured data on the basis of error frequency, the paper provides reference material to support the development of the garment industry.

  4. Measure short separation for space debris based on radar angle error measurement information

    NASA Astrophysics Data System (ADS)

    Zhang, Yao; Wang, Qiao; Zhou, Lai-jian; Zhang, Zhuo; Li, Xiao-long

    2016-11-01

    With increasingly frequent human activity in space, the number of dead satellites and pieces of space debris has risen dramatically, bringing greater risk to operational spacecraft. However, the measuring equipment currently in widespread use for space targets has many problems, such as high development cost or restrictive conditions of use. To address this problem, radar angle-error measurements of multiple space targets are used and, combined with the geometric relationship between the targets and the radar station, a horizontal-distance decoding model is built. By increasing the signal quantization bit depth and improving timing synchronization and outlier processing, the measurement precision is improved to meet the requirements of multi-target short-separation measurement, and the efficiency of the method is analyzed. Validation tests confirm the feasibility and effectiveness of the proposed methods.

  5. Error analysis in the measurement of average power with application to switching controllers

    NASA Technical Reports Server (NTRS)

    Maisel, J. E.

    1979-01-01

    The behavior of the power measurement error due to the frequency responses of first order transfer functions between the input sinusoidal voltage, input sinusoidal current and the signal multiplier was studied. It was concluded that this measurement error can be minimized if the frequency responses of the first order transfer functions are identical.

  6. Exploring the Effectiveness of a Measurement Error Tutorial in Helping Teachers Understand Score Report Results

    ERIC Educational Resources Information Center

    Zapata-Rivera, Diego; Zwick, Rebecca; Vezzu, Margaret

    2016-01-01

    The goal of this study was to explore the effectiveness of a short web-based tutorial in helping teachers to better understand the portrayal of measurement error in test score reports. The short video tutorial included both verbal and graphical representations of measurement error. Results showed a significant difference in comprehension scores…

  7. Do Survey Data Estimate Earnings Inequality Correctly? Measurement Errors among Black and White Male Workers

    ERIC Educational Resources Information Center

    Kim, ChangHwan; Tamborini, Christopher R.

    2012-01-01

    Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…

  8. Comparing Graphical and Verbal Representations of Measurement Error in Test Score Reports

    ERIC Educational Resources Information Center

    Zwick, Rebecca; Zapata-Rivera, Diego; Hegarty, Mary

    2014-01-01

    Research has shown that many educators do not understand the terminology or displays used in test score reports and that measurement error is a particularly challenging concept. We investigated graphical and verbal methods of representing measurement error associated with individual student scores. We created four alternative score reports, each…

  9. Detecting bit-flip errors in a logical qubit using stabilizer measurements

    PubMed Central

    Ristè, D.; Poletto, S.; Huang, M.-Z.; Bruno, A.; Vesterinen, V.; Saira, O.-P.; DiCarlo, L.

    2015-01-01

    Quantum data are susceptible to decoherence induced by the environment and to errors in the hardware processing it. A future fault-tolerant quantum computer will use quantum error correction to actively protect against both. In the smallest error correction codes, the information in one logical qubit is encoded in a two-dimensional subspace of a larger Hilbert space of multiple physical qubits. For each code, a set of non-demolition multi-qubit measurements, termed stabilizers, can discretize and signal physical qubit errors without collapsing the encoded information. Here using a five-qubit superconducting processor, we realize the two parity measurements comprising the stabilizers of the three-qubit repetition code protecting one logical qubit from physical bit-flip errors. While increased physical qubit coherence times and shorter quantum error correction blocks are required to actively safeguard the quantum information, this demonstration is a critical step towards larger codes based on multiple parity measurements. PMID:25923318
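
    A purely classical Python analogue of the three-qubit repetition code (it captures only the bit-flip statistics, none of the quantum coherence): two parity checks play the role of the stabilizer measurements, and the logical error rate falls from p to roughly 3p²:

        import numpy as np

        rng = np.random.default_rng(0)

        def run_cycle(p_flip):
            bits = np.zeros(3, dtype=int)                # logical 0 as (0,0,0)
            bits ^= (rng.random(3) < p_flip).astype(int) # independent bit flips
            syndrome = (bits[0] ^ bits[1], bits[1] ^ bits[2])
            fix = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome]
            if fix is not None:
                bits[fix] ^= 1
            return int(bits.sum() >= 2)                  # logical error?

        p = 0.05
        print(np.mean([run_cycle(p) for _ in range(100_000)]))  # ~3p^2+p^3 ~ 0.007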

  10. Algorithm-supported visual error correction (AVEC) of heart rate measurements in dogs, Canis lupus familiaris.

    PubMed

    Schöberl, Iris; Kortekaas, Kim; Schöberl, Franz F; Kotrschal, Kurt

    2015-12-01

    Dog heart rate (HR) is characterized by a respiratory sinus arrhythmia, and therefore makes an automatic algorithm for error correction of HR measurements hard to apply. Here, we present a new method of error correction for HR data collected with the Polar system, including (1) visual inspection of the data, (2) a standardized way to decide with the aid of an algorithm whether or not a value is an outlier (i.e., "error"), and (3) the subsequent removal of this error from the data set. We applied our new error correction method to the HR data of 24 dogs and compared the uncorrected and corrected data, as well as the algorithm-supported visual error correction (AVEC) with the Polar error correction. The results showed that fewer values were identified as errors after AVEC than after the Polar error correction (p < .001). After AVEC, the HR standard deviation and variability (HRV; i.e., RMSSD, pNN50, and SDNN) were significantly greater than after correction by the Polar tool (all p < .001). Furthermore, the HR data strings with deleted values seemed to be closer to the original data than were those with inserted means. We concluded that our method of error correction is more suitable for dog HR and HR variability than is the customized Polar error correction, especially because AVEC decreases the likelihood of Type I errors, preserves the natural variability in HR, and does not lead to a time shift in the data.

  11. Significance of gauge line error in orifice measurement

    SciTech Connect

    Bowen, J.W.

    1995-12-01

    Pulsation-induced gauge line amplification can cause errors in the recorded differential signal used to calculate flow. Its presence may be detected using dual transmitters (one connected at the orifice taps, the other at the end of the gauge lines) and comparing the relative peak-to-peak amplitudes. Its effect on the recorded differential may be determined by averaging both signals with a PC-based data acquisition and analysis system. Remedial action is recommended in all cases where amplification is detected. Use of close-connect, full-opening manifolds is suggested to decouple the gauge lines' resonant frequency from that of the excitation, by positioning the recording device as close to the process signal's origin as possible.

  12. Quantifying error of lidar and sodar Doppler beam swinging measurements of wind turbine wakes using computational fluid dynamics

    SciTech Connect

    Lundquist, J. K.; Churchfield, M. J.; Lee, S.; Clifton, A.

    2015-02-23

    exceed the actual vertical velocity. By three rotor diameters downwind, DBS-based assessments of wake wind speed deficits based on the stream-wise velocity can be relied on even within the near wake within 1.0 m s-1 (or 15% of the hub-height inflow wind speed), and the cross-stream velocity error is reduced to 8% while vertical velocity estimates are compromised. Furthermore, measurements of inhomogeneous flow such as wind turbine wakes are susceptible to these errors, and interpretations of field observations should account for this uncertainty.

  13. Quantifying error of lidar and sodar Doppler beam swinging measurements of wind turbine wakes using computational fluid dynamics

    DOE PAGES

    Lundquist, J. K.; Churchfield, M. J.; Lee, S.; ...

    2015-02-23

    vertical velocity. By three rotor diameters downwind, DBS-based assessments of wake wind speed deficits based on the stream-wise velocity can be relied on even within the near wake within 1.0 m s-1 (or 15% of the hub-height inflow wind speed), and the cross-stream velocity error is reduced to 8% while vertical velocity estimates are compromised. Furthermore, measurements of inhomogeneous flow such as wind turbine wakes are susceptible to these errors, and interpretations of field observations should account for this uncertainty.

  14. Quantifying error of lidar and sodar Doppler beam swinging measurements of wind turbine wakes using computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Lundquist, J. K.; Churchfield, M. J.; Lee, S.; Clifton, A.

    2015-02-01

    . By three rotor diameters downwind, DBS-based assessments of wake wind speed deficits based on the stream-wise velocity can be relied on even within the near wake within 1.0 m s-1 (or 15% of the hub-height inflow wind speed), and the cross-stream velocity error is reduced to 8% while vertical velocity estimates are compromised. Measurements of inhomogeneous flow such as wind turbine wakes are susceptible to these errors, and interpretations of field observations should account for this uncertainty.
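
    The DBS retrieval underlying all three records solves a small linear system: each beam measures the projection of the wind vector onto its pointing direction, and the retrieval assumes the same (u, v, w) holds across all beams, which is exactly what fails inside a wake. A minimal Python sketch with an assumed five-beam geometry:

        import numpy as np

        def beam_matrix(az_deg, el_deg):
            az, el = np.radians(az_deg), np.radians(el_deg)
            return np.column_stack([np.cos(el) * np.sin(az),
                                    np.cos(el) * np.cos(az),
                                    np.sin(el)])

        def dbs_retrieve(radial, az_deg, el_deg):
            # Least-squares wind vector from the beam radial velocities.
            sol, *_ = np.linalg.lstsq(beam_matrix(az_deg, el_deg), radial,
                                      rcond=None)
            return sol                                   # u, v, w

        az = np.array([0.0, 90.0, 180.0, 270.0, 0.0])    # four tilted + vertical
        el = np.array([62.0, 62.0, 62.0, 62.0, 90.0])
        wind = np.array([8.0, 2.0, 0.1])
        vr = beam_matrix(az, el) @ wind                  # uniform-flow radials
        print(dbs_retrieve(vr, az, el))                  # recovers (8.0, 2.0, 0.1)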

  15. Compensation method for the alignment angle error in pitch deviation measurement

    NASA Astrophysics Data System (ADS)

    Liu, Yongsheng; Fang, Suping; Wang, Huiyi; Taguchi, Tetsuya; Takeda, Ryohei

    2016-05-01

    When measuring the tooth flank of an involute helical gear by gear measuring center (GMC), the alignment angle error of a gear axis, which was caused by the assembly error and manufacturing error of the GMC, will affect the measurement accuracy of pitch deviation of the gear tooth flank. Based on the model of the involute helical gear and the tooth flank measurement theory, a method is proposed to compensate the alignment angle error that is included in the measurement results of pitch deviation, without changing the initial measurement method of the GMC. Simulation experiments are done to verify the compensation method and the results show that after compensation, the alignment angle error of the gear axis included in measurement results of pitch deviation declines significantly, more than 90% of the alignment angle errors are compensated, and the residual alignment angle errors in pitch deviation measurement results are less than 0.1 μm. It shows that the proposed method can improve the measurement accuracy of the GMC when measuring the pitch deviation of involute helical gear.

  16. Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar

    DOEpatents

    Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas

    2008-06-24

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.

  17. Note: Periodic error measurement in heterodyne interferometers using a subpicometer accuracy Fabry-Perot interferometer.

    PubMed

    Zhu, Minhao; Wei, Haoyun; Wu, Xuejian; Li, Yan

    2014-08-01

    Periodic error is the major problem that limits the accuracy of heterodyne interferometry. A traceable system for periodic error measurement is developed based on a nonlinearity free Fabry-Perot (F-P) interferometer. The displacement accuracy of the F-P interferometer is 0.49 pm at 80 ms averaging time, with the measurement results referenced to an optical frequency comb. Experimental comparison between the F-P interferometer and a commercial heterodyne interferometer is carried out and it shows that the first harmonic periodic error dominates in the commercial heterodyne interferometer with an error amplitude of 4.64 nm.

  18. Analysis of the possible measurement errors for the PM10 concentration measurement at Gosan, Korea

    NASA Astrophysics Data System (ADS)

    Shin, S.; Kim, Y.; Jung, C.

    2010-12-01

    The reliability of measurements of ambient trace species is an important issue, especially in a background area such as Gosan on Jeju Island, Korea. In a previous episodic study at Gosan (NIER, 2006), it was found that the PM10 concentration measured by the β-ray absorption method (BAM) was higher than that measured by the gravimetric method (GMM), and the correlation between them was low. Based on previous studies (Chang et al., 2001; Katsuyuki et al., 2008), two probable reasons for the discrepancy are identified: (1) negative measurement error from the evaporation of volatile species such as nitrate, chloride, and ammonium from the filter in the GMM, and (2) positive error from the absorption of water vapor during measurement in the BAM. There was no heater at the inlet of the BAM at Gosan during the sampling period. In this study, we have quantitatively analyzed the negative and positive errors by using the gas/particle equilibrium model SCAPE (Simulating Composition of Atmospheric Particles at Equilibrium) on data collected between May 2001 and June 2008, together with the aerosol and gaseous composition data. We have estimated the degree of evaporation from the filter in the GMM by comparing the volatile ionic species concentrations calculated by SCAPE at thermodynamic equilibrium, under the meteorological conditions of the sampling period, with the mass concentrations measured by ion chromatography. Also, based on the aerosol water content calculated by SCAPE, we have quantitatively estimated the effect of ambient humidity on the BAM measurements. Multiple regression analyses are then applied to examine whether the remaining discrepancy can be explained by other factors. References: Chang, C. T., Tsai, C. J., Lee, C. T., Chang, S. Y., Cheng, M. T., Chein, H. M., 2001, Differences in PM10 concentrations measured by β-gauge monitor and hi-vol sampler, Atmospheric Environment, 35, 5741-5748; Katsuyuki, T. K., Hiroaki, M. R., and Kazuhiko, S. K., 2008, Examination of discrepancies between beta

  19. Quantifying Error in Survey Measures of School and Classroom Environments

    ERIC Educational Resources Information Center

    Schweig, Jonathan David

    2014-01-01

    Developing indicators that reflect important aspects of school and classroom environments has become central in a nationwide effort to develop comprehensive programs that measure teacher quality and effectiveness. Formulating teacher evaluation policy necessitates accurate and reliable methods for measuring these environmental variables. This…

  20. Quantization Error Reduction in the Measurement of Fourier Intensity for Phase Retrieval

    NASA Astrophysics Data System (ADS)

    Yang, Shiyuan; Takajo, Hiroaki

    2004-08-01

    The quantization error in the measurement of Fourier intensity for phase retrieval is discussed and a multispectra method is proposed to reduce this error. The Fourier modulus used for phase retrieval is usually obtained by measuring Fourier intensity with a digital device. Therefore, quantization error in the measurement of Fourier intensity leads to an error in the reconstructed object when iterative Fourier transform algorithms are used. The multispectra method uses several Fourier intensity distributions for a number of measurement ranges to generate a Fourier intensity distribution with a low quantization error. Simulations show that the multispectra method is effective in retrieving objects with real or complex distributions when the iterative hybrid input-output algorithm (HIO) is used.

  1. Measurement error analysis of Brillouin lidar system using F-P etalon and ICCD

    NASA Astrophysics Data System (ADS)

    Yao, Yuan; Niu, Qunjie; Liang, Kun

    2016-09-01

    A Brillouin lidar system using a Fabry-Pérot (F-P) etalon and an intensified charge-coupled device (ICCD) is capable of real-time remote measurement of seawater properties such as temperature. The measurement accuracy is determined by two key parameters, the Brillouin frequency shift and the Brillouin linewidth. Three major error sources, namely laser frequency instability, the calibration error of the F-P etalon, and random shot noise, are discussed. Theoretical analysis combined with simulation results shows that the laser and the F-P etalon cause about 4 MHz of error in both the Brillouin shift and the linewidth, and that random noise contributes more error to the linewidth than to the frequency shift. A comprehensive, comparative analysis of the overall errors under various conditions shows that colder ocean water (10 °C) is more accurately measured with the Brillouin linewidth, and warmer ocean water (30 °C) is better measured with the Brillouin shift.

  2. Moment Adjusted Imputation for Multivariate Measurement Error Data with Applications to Logistic Regression

    PubMed Central

    Thomas, Laine; Stefanski, Leonard A.; Davidian, Marie

    2013-01-01

    In clinical studies, covariates are often measured with error due to biological fluctuations, device error, and other sources. Summary statistics and regression models that are based on mismeasured data will differ from the corresponding analyses based on the "true" covariate. Statistical analysis can be adjusted for measurement error; however, the various methods exhibit a trade-off between convenience and performance. Moment Adjusted Imputation (MAI) is a method for measurement error in a scalar latent variable that is easy to implement and performs well in a variety of settings. In practice, multiple covariates may be similarly influenced by biological fluctuations, inducing correlated multivariate measurement error. The extension of MAI to the setting of multivariate latent variables involves unique challenges. Alternative strategies are described, including a computationally feasible option that is shown to perform well.
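
    A first-two-moments Python sketch of the adjustment idea (the published MAI method matches higher-order moments as well; the error size here is an illustrative assumption): rescale the error-prone values so their mean and variance match the moments implied for the true covariate under W = X + U with known error variance:

        import numpy as np

        def moment_adjusted(w, sigma_u):
            mu = w.mean()
            var_x = max(w.var(ddof=1) - sigma_u**2, 0.0)  # implied var of X
            return mu + np.sqrt(var_x / w.var(ddof=1)) * (w - mu)

        rng = np.random.default_rng(0)
        x = rng.normal(2.0, 1.0, 5000)
        w = x + rng.normal(0.0, 0.8, 5000)                # classical error
        xa = moment_adjusted(w, 0.8)
        print(w.var(ddof=1), xa.var(ddof=1))              # ~1.64 -> ~1.0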

  3. A semiparametric copula method for Cox models with covariate measurement error.

    PubMed

    Kim, Sehee; Li, Yi; Spiegelman, Donna

    2016-01-01

    We consider measurement error problem in the Cox model, where the underlying association between the true exposure and its surrogate is unknown, but can be estimated from a validation study. Under this framework, one can accommodate general distributional structures for the error-prone covariates, not restricted to a linear additive measurement error model or Gaussian measurement error. The proposed copula-based approach enables us to fit flexible measurement error models, and to be applicable with an internal or external validation study. Large sample properties are derived and finite sample properties are investigated through extensive simulation studies. The methods are applied to a study of physical activity in relation to breast cancer mortality in the Nurses' Health Study.

  4. Microprocessor instruments for measuring nonlinear distortions; algorithms for digital processing of the measurement signal and an estimate of the errors

    SciTech Connect

    Mints, M.Ya.; Chinkov, V.N.

    1995-09-01

    Rational algorithms are described for measuring the harmonic distortion coefficient in microprocessor instruments for measuring nonlinear distortions, based on digital processing of the codes of the instantaneous values of the signal under investigation, and estimates of the errors of such instruments are obtained.
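
    The harmonic (nonlinear distortion) coefficient is conventionally the rms of the harmonic amplitudes divided by the fundamental amplitude, computed here from sampled instantaneous values via an FFT; a Python sketch with an illustrative test signal (the paper's specific algorithms are not reproduced):

        import numpy as np

        def harmonic_coefficient(samples, fs, f0, n_harmonics=5):
            spec = np.abs(np.fft.rfft(samples * np.hanning(samples.size)))
            freqs = np.fft.rfftfreq(samples.size, 1.0 / fs)
            mag = lambda f: spec[np.argmin(np.abs(freqs - f))]
            harm = np.sqrt(sum(mag(k * f0) ** 2
                               for k in range(2, n_harmonics + 1)))
            return harm / mag(f0)

        fs, f0 = 51_200, 50.0
        t = np.arange(8192) / fs
        sig = np.sin(2 * np.pi * f0 * t) + 0.03 * np.sin(2 * np.pi * 3 * f0 * t)
        print(harmonic_coefficient(sig, fs, f0))   # ~0.03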

  5. The effect of proficiency level on measurement error of range of motion

    PubMed Central

    Akizuki, Kazunori; Yamaguchi, Kazuto; Morita, Yoshiyuki; Ohashi, Yukari

    2016-01-01

    [Purpose] The aims of this study were to evaluate the type and extent of error in the measurement of range of motion and to evaluate the effect of evaluators’ proficiency level on measurement error. [Subjects and Methods] The participants were 45 university students, in different years of their physical therapy education, and 21 physical therapists, with up to three years of clinical experience in a general hospital. Range of motion of right knee flexion was measured using a universal goniometer. An electrogoniometer attached to the right knee and hidden from the view of the participants was used as the criterion to evaluate error in measurement using the universal goniometer. The type and magnitude of error were evaluated using the Bland-Altman method. [Results] Measurements with the universal goniometer were not influenced by systematic bias. The extent of random error in measurement decreased as the level of proficiency and clinical experience increased. [Conclusion] Measurements of range of motion obtained using a universal goniometer are influenced by random errors, with the extent of error being a factor of proficiency. Therefore, increasing the amount of practice would be an effective strategy for improving the accuracy of range of motion measurements. PMID:27799712
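
    The Bland-Altman computation used above reduces to the mean difference (systematic bias) and 1.96 standard deviations of the differences (limits of agreement, i.e., random error); a Python sketch with made-up paired readings:

        import numpy as np

        def bland_altman(measured, criterion):
            d = np.asarray(measured, float) - np.asarray(criterion, float)
            bias = d.mean()                        # systematic error
            half = 1.96 * d.std(ddof=1)            # random error band
            return bias, (bias - half, bias + half)

        gonio = [118, 122, 115, 130, 124, 119, 127, 121]     # universal goniometer
        electro = [120, 121, 117, 128, 125, 117, 129, 120]   # hidden criterion
        bias, (lo, hi) = bland_altman(gonio, electro)
        print(f"bias {bias:+.1f} deg, limits of agreement ({lo:.1f}, {hi:.1f})")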

  6. Tilt error in cryospheric surface radiation measurements at high latitudes: a model study

    NASA Astrophysics Data System (ADS)

    Bogren, Wiley Steven; Faulkner Burkhart, John; Kylling, Arve

    2016-03-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can, respectively, introduce up to 2.7, 8.1, and 13.5% error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo. Simulations including a cloud layer demonstrate decreasing tilt error with increasing cloud optical depth.
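
    The direct-beam part of this tilt error follows from spherical geometry: a tilted cosine-response sensor sees the sun at the tilted incidence angle rather than at the true solar zenith angle. The sketch below (our own illustration) evaluates the worst-case, sun-facing tilt at a 60° solar zenith angle; it slightly exceeds the paper's quoted totals because the unperturbed diffuse component is not included:

    ```python
    import numpy as np

    def direct_tilt_error(sza_deg, tilt_deg, dphi_deg):
        """Relative error in the direct-beam irradiance seen by a tilted sensor.

        sza_deg: solar zenith angle; tilt_deg: sensor tilt from horizontal;
        dphi_deg: solar azimuth relative to the tilt direction.
        """
        sza, tilt, dphi = np.radians([sza_deg, tilt_deg, dphi_deg])
        cos_inc = (np.cos(tilt) * np.cos(sza)
                   + np.sin(tilt) * np.sin(sza) * np.cos(dphi))
        return cos_inc / np.cos(sza) - 1.0

    for tilt in (1, 3, 5):
        err = direct_tilt_error(60.0, tilt, 0.0)   # tilted toward the sun
        print(f"tilt {tilt} deg -> {100 * err:+.1f}% direct-beam error")
    ```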

  7. Errors and uncertainties in the measurement of ultrasonic wave attenuation and phase velocity.

    PubMed

    Kalashnikov, Alexander N; Challis, Richard E

    2005-10-01

    This paper presents an analysis of the error generation mechanisms that affect the accuracy of measurements of ultrasonic wave attenuation coefficient and phase velocity as functions of frequency. In the first stage of the analysis we show that electronic system noise, expressed in the frequency domain, maps into errors in the attenuation and the phase velocity spectra in a highly nonlinear way; the condition for minimum error is when the total measured attenuation is around 1 Neper. The maximum measurable total attenuation has a practical limit of around 6 Nepers and the minimum measurable value is around 0.1 Neper. In the second part of the paper we consider electronic noise as the primary source of measurement error; errors in attenuation result from additive noise whereas errors in phase velocity result from both additive noise and system timing jitter. Quantization noise can be neglected if the amplitude of the additive noise is comparable with the quantization step, and coherent averaging is employed. Experimental results are presented which confirm the relationship between electronic noise and measurement errors. The analytical technique is applicable to the design of ultrasonic spectrometers, formal assessment of the accuracy of ultrasonic measurements, and the optimization of signal processing procedures to achieve a specified accuracy.
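
    The stated 1-neper optimum follows from a standard first-order argument (the notation below is ours, not the paper's):

    ```latex
    % Received amplitude A = A_0 e^{-\alpha d}; additive noise of SD \sigma on A.
    % First-order error of the estimate \hat{\alpha} = \ln(A_0/A)/d:
    \sigma_{\hat\alpha} \approx \frac{1}{d}\cdot\frac{\sigma}{A}
      = \frac{\sigma}{A_0}\cdot\frac{e^{\alpha d}}{d},
    \qquad
    \frac{\mathrm{d}}{\mathrm{d}d}\left(\frac{e^{\alpha d}}{d}\right)
      = \frac{e^{\alpha d}\,(\alpha d - 1)}{d^{2}} = 0
      \;\Longrightarrow\; \alpha d = 1,
    % i.e., the error is smallest when the total attenuation is one neper.
    ```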

  8. Modal Correction Method For Dynamically Induced Errors In Wind-Tunnel Model Attitude Measurements

    NASA Technical Reports Server (NTRS)

    Buehrle, R. D.; Young, C. P., Jr.

    1995-01-01

    This paper describes a method for correcting the dynamically induced bias errors in wind tunnel model attitude measurements using measured modal properties of the model system. At NASA Langley Research Center, the predominant instrumentation used to measure model attitude is a servo-accelerometer device that senses the model attitude with respect to the local vertical. Under smooth wind tunnel operating conditions, this inertial device can measure the model attitude with an accuracy of 0.01 degree. During wind tunnel tests when the model is responding at high dynamic amplitudes, the inertial device also senses the centrifugal acceleration associated with model vibration. This centrifugal acceleration results in a bias error in the model attitude measurement. A study of the response of a cantilevered model system to a simulated dynamic environment shows that significant bias error in the model attitude measurement can occur and that it depends on the vibration mode and amplitude. For each vibration mode contributing to the bias error, the error is estimated from the measured modal properties and the tangential accelerations at the model attitude device. Linear superposition is used to combine the bias estimates for individual modes to determine the overall bias error as a function of time. The modal correction model predicts the bias error to a high degree of accuracy for the vibration modes characterized in the simulated dynamic environment.

  9. Power and sample size calculations for generalized regression models with covariate measurement error.

    PubMed

    Tosteson, Tor D; Buzas, Jeffrey S; Demidenko, Eugene; Karagas, Margaret

    2003-04-15

    Covariate measurement error is often a feature of scientific data used for regression modelling. The consequences of such errors include a loss of power of tests of significance for the regression parameters corresponding to the true covariates. Power and sample size calculations that ignore covariate measurement error tend to overestimate power and underestimate the actual sample size required to achieve a desired power. In this paper we derive a novel measurement error corrected power function for generalized linear models using a generalized score test based on quasi-likelihood methods. Our power function is flexible in that it is adaptable to designs with a discrete or continuous scalar covariate (exposure) that can be measured with or without error, allows for additional confounding variables and applies to a broad class of generalized regression and measurement error models. A program is described that provides sample size or power for a continuous exposure with a normal measurement error model and a single normal confounder variable in logistic regression. We demonstrate the improved properties of our power calculations with simulations and numerical studies. An example is given from an ongoing study of cancer and exposure to arsenic as measured by toenail concentrations and tap water samples.
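
    The paper derives a corrected power function from quasi-likelihood theory; a frequently used back-of-the-envelope shortcut, sketched below as our own illustration, simply rescales the error-free sample size by the reliability ratio (it ignores the accompanying inflation of residual variance, so it is slightly optimistic):

    ```python
    import math

    def n_adjusted(n_naive, sigma2_x, sigma2_u):
        """Approximate sample size under classical additive exposure error.

        First-order: the test's noncentrality is attenuated by the
        reliability ratio lambda = var(X) / var(W), so n scales by 1/lambda.
        """
        lam = sigma2_x / (sigma2_x + sigma2_u)
        return math.ceil(n_naive / lam), lam

    n, lam = n_adjusted(n_naive=250, sigma2_x=1.0, sigma2_u=0.5)
    print(f"reliability = {lam:.2f}: need ~{n} subjects instead of 250")
    ```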

  10. Ambient Temperature Changes and the Impact to Time Measurement Error

    NASA Astrophysics Data System (ADS)

    Ogrizovic, V.; Gucevic, J.; Delcev, S.

    2012-12-01

    Measurements in Geodetic Astronomy are mainly outdoors and performed during the night, when the temperature often decreases very quickly. Time-keeping during a measuring session is provided by collecting UTC time ticks from a GPS receiver and transferring them to a laptop computer. An interrupt handler routine processes the received UTC impulses in real time and calculates the clock parameters. The characteristics of the computer quartz clock are influenced by temperature changes in the environment. We exposed the laptop to different environmental temperature conditions and calculated the clock parameters for each environmental model. The results show that a laptop used for time-keeping in outdoor measurements should be kept in a stable temperature environment, at temperatures near 20 °C.

  11. Mean-square error due to gradiometer field measuring devices.

    PubMed

    Hatsell, C P

    1991-06-01

    Gradiometers use spatial common-mode magnetic field rejection to reduce interference from distant sources. They also introduce distortion that can be severe, rendering experimental data difficult to interpret. Attempts to recover the measured magnetic field from the gradiometer output are plagued by the nonexistence of a spatial deconvolution function (except for first-order gradiometers) and by the high-pass nature of the spatial transform, which emphasizes high-spatial-frequency noise. The design goals for a biomagnetic field measurement facility should therefore be an effective shielded room and a field detector employing a first-order gradiometer.

  12. Measurement, Sampling, and Equating Errors in Large-Scale Assessments

    ERIC Educational Resources Information Center

    Wu, Margaret

    2010-01-01

    In large-scale assessments, such as state-wide testing programs, national sample-based assessments, and international comparative studies, there are many steps involved in the measurement and reporting of student achievement. There are always sources of inaccuracies in each of the steps. It is of interest to identify the source and magnitude of…

  13. Defining uncertainty and error in planktic foraminiferal oxygen isotope measurements

    NASA Astrophysics Data System (ADS)

    Fraass, A. J.; Lowery, C. M.

    2017-02-01

    Foraminifera are the backbone of paleoceanography. Planktic foraminifera are one of the leading tools for reconstructing water column structure. However, there are unconstrained variables when dealing with uncertainty in the reproducibility of oxygen isotope measurements. This study presents the first results from a simple model of foraminiferal calcification (Foraminiferal Isotope Reproducibility Model; FIRM), designed to estimate uncertainty in oxygen isotope measurements. FIRM uses parameters including location, depth habitat, season, number of individuals included in measurement, diagenesis, misidentification, size variation, and vital effects to produce synthetic isotope data in a manner reflecting natural processes. Reproducibility is then tested using Monte Carlo simulations. Importantly, this is not an attempt to fully model the entire complicated process of foraminiferal calcification; instead, we are trying to include only enough parameters to estimate the uncertainty in foraminiferal δ18O records. Two well-constrained empirical data sets are simulated successfully, demonstrating the validity of our model. The results from a series of experiments with the model show that reproducibility is not only largely controlled by the number of individuals in each measurement but also strongly a function of local oceanography if the number of individuals is held constant. Parameters like diagenesis or misidentification have an impact on both the precision and the accuracy of the data. FIRM is a tool to estimate isotopic uncertainty values and to explore the impact of myriad factors on the fidelity of paleoceanographic records, particularly for the Holocene.
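
    The central result, that reproducibility is driven largely by the number of individuals pooled per measurement, can be illustrated with a stripped-down Monte Carlo in the spirit of FIRM; all parameter values below are invented and the model omits most of FIRM's inputs (diagenesis, misidentification, size variation, vital effects):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy population of single-shell d18O values (permil); the spread lumps
    # seasonality and depth habitat into one standard deviation.
    mu, spread, analytical_sd = -1.0, 0.6, 0.08

    def replicate_sd(n_individuals, n_trials=5000):
        """SD across simulated replicate measurements, each averaging
        n_individuals shells plus mass-spectrometer noise."""
        pooled = rng.normal(mu, spread, (n_trials, n_individuals)).mean(axis=1)
        measured = pooled + rng.normal(0.0, analytical_sd, n_trials)
        return measured.std(ddof=1)

    for n in (1, 5, 10, 30):
        print(f"{n:2d} individuals -> reproducibility ~ {replicate_sd(n):.2f} permil")
    ```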

  14. Measurement Error Adjustment Using the SIMEX Method: An Application to Student Growth Percentiles

    ERIC Educational Resources Information Center

    Shang, Yi

    2012-01-01

    Growth models are used extensively in the context of educational accountability to evaluate student-, class-, and school-level growth. However, when error-prone test scores are used as independent variables or right-hand-side controls, the estimation of such growth models can be substantially biased. This article introduces a…
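
    SIMEX itself is compact enough to sketch: refit the model after adding successively larger amounts of simulated measurement error, then extrapolate the trend in the estimates back to zero error (zeta = -1). A minimal illustration on simulated data, not the article's growth-percentile application:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    n, beta, sigma_u = 1000, 1.0, 0.7
    x = rng.normal(0, 1, n)                    # latent score
    w = x + rng.normal(0, sigma_u, n)          # error-prone observed score
    y = beta * x + rng.normal(0, 0.5, n)

    def slope(pred, resp):
        return np.polyfit(pred, resp, 1)[0]

    # Add extra error with variance zeta * sigma_u**2 and track the slope.
    zetas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    mean_slopes = [
        np.mean([slope(w + rng.normal(0, np.sqrt(z) * sigma_u, n), y)
                 for _ in range(50)])
        for z in zetas
    ]

    quad = np.polyfit(zetas, mean_slopes, 2)     # quadratic extrapolant
    beta_simex = np.polyval(quad, -1.0)          # extrapolate to zero error
    print(f"naive {slope(w, y):.3f}  SIMEX {beta_simex:.3f}  true {beta}")
    ```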

  15. Position error correction in absolute surface measurement based on a multi-angle averaging method

    NASA Astrophysics Data System (ADS)

    Wang, Weibo; Wu, Biwei; Liu, Pengfei; Liu, Jian; Tan, Jiubin

    2017-04-01

    We present a method for position error correction in absolute surface measurement based on a multi-angle averaging method. Differences in shear rotation measurements at overlapping areas can be used to estimate the unknown relative position errors of the measurements. The model and the solution of the estimation algorithm are discussed in detail. The estimation algorithm adopts a least-squares technique to eliminate azimuthal errors caused by rotation inaccuracy. The cost functions can be minimized to determine the true values of the unknowns of the Zernike polynomial coefficients and the rotation angle. Experimental results show the validity of the proposed method.

  16. Assessment of systematic measurement errors for acoustic travel-time tomography of the atmosphere.

    PubMed

    Vecherin, Sergey N; Ostashev, Vladimir E; Wilson, D Keith

    2013-09-01

    Two algorithms are described for assessing systematic errors in acoustic travel-time tomography of the atmosphere, the goal of which is to reconstruct the temperature and wind velocity fields given the transducers' locations and the measured travel times of sound propagating between each speaker-microphone pair. The first algorithm aims at assessing the errors simultaneously with the mean field reconstruction. The second algorithm uses the results of the first algorithm to identify the ray paths corrupted by the systematic errors and then estimates these errors more accurately. Numerical simulations show that the first algorithm can improve the reconstruction when relatively small systematic errors are present in all paths. The second algorithm significantly improves the reconstruction when systematic errors are present in a few, but not all, ray paths. The developed algorithms were applied to experimental data obtained at the Boulder Atmospheric Observatory.

  17. American College Students' Attitudes toward Institutional Accountability Testing: Developing Measures

    ERIC Educational Resources Information Center

    Zilberberg, Anna; Anderson, Robin D.; Finney, Sara J.; Marsh, Kimberly R.

    2013-01-01

    In the United States, government mandates for educational accountability have prompted an increase in testing in K-12 and higher education settings, resulting in a generation of millennial students who have undergone repeated assessment over the course of their educational careers. The cumulative effects of testing may range from increasingly…

  18. Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error

    NASA Astrophysics Data System (ADS)

    Miller, Austin

    In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.
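
    The instrumental-variable step can be shown in a few lines: with classical error in the physical dose and a biological indicator as the instrument, a ratio-of-covariances estimator is consistent where the naive regression is attenuated. The sketch below is our own toy version; it omits the Berkson component and the Monte Carlo EM machinery, and all numbers are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    n = 5000
    x = rng.gamma(2.0, 1.0, n)              # true dose (latent)
    w = x + rng.normal(0, 0.8, n)           # physical dosimetry, classical error
    z = 0.9 * x + rng.normal(0, 0.6, n)     # biological indicator (instrument)
    y = 0.5 * x + rng.normal(0, 0.5, n)     # health response

    beta_naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)   # attenuated
    beta_iv = np.cov(z, y)[0, 1] / np.cov(z, w)[0, 1]     # consistent
    print(f"naive {beta_naive:.3f} vs IV {beta_iv:.3f} (true 0.5)")
    ```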

  19. Tilt Error in Cryospheric Surface Radiation Measurements at High Latitudes: A Model Study

    NASA Astrophysics Data System (ADS)

    Bogren, W.; Kylling, A.; Burkhart, J. F.

    2015-12-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can, respectively, introduce up to 2.6, 7.7, and 12.8% error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo.

  20. Estimation and Propagation of Errors in Ice Sheet Bed Elevation Measurements

    NASA Astrophysics Data System (ADS)

    Johnson, J. V.; Brinkerhoff, D.; Nowicki, S.; Plummer, J.; Sack, K.

    2012-12-01

    This work is presented in two parts. In the first, we use a numerical inversion technique to determine a "mass conserving bed" (MCB) and estimate errors in interpolation of the bed elevation. The MCB inversion technique adjusts the bed elevation to assure that the mass flux determined from surface velocity measurements does not violate conservation. Cross validation of the MCB technique is done using a subset of available flight lines. The unused flight lines provide data to compare to, quantifying the errors produced by MCB and other interpolation methods. MCB errors are found to be similar to those produced with more conventional interpolation schemes, such as kriging. However, MCB interpolation is consistent with the physics that govern ice sheet models. In the second part, a numerical model of glacial ice is used to propagate errors in bed elevation to the kinematic surface boundary condition. Initially, a control run is completed to establish the surface velocity produced by the model. The control surface velocity is subsequently used as a target for data inversions performed on perturbed versions of the control bed. The perturbation of the bed represents the magnitude of error in bed measurement. Through the inversion for traction, errors in bed measurement are propagated forward to investigate errors in the evolution of the free surface. Our primary conclusion relates the magnitude of errors in the surface evolution to errors in the bed. By linking free surface errors back to the errors in bed interpolation found in the first part, we can suggest an optimal spacing of the radar flight lines used in bed acquisition.

  1. Measurement error associated with surveys of fish abundance in Lake Michigan

    USGS Publications Warehouse

    Krause, Ann E.; Hayes, Daniel B.; Bence, James R.; Madenjian, Charles P.; Stedman, Ralph M.

    2002-01-01

    In fisheries, imprecise measurements in catch data from surveys add uncertainty to the results of fishery stock assessments. The USGS Great Lakes Science Center (GLSC) began to survey the fall fish community of Lake Michigan in 1962 with bottom trawls. Measurement error was evaluated at the level of individual tows for nine fish species collected in this survey by applying a measurement-error regression model to replicated trawl data. The estimates of measurement-error variance ranged from 0.37 (deepwater sculpin, Myoxocephalus thompsoni) to 1.23 (alewife, Alosa pseudoharengus) on a logarithmic scale, corresponding to coefficients of variation of 66% to 156%. The estimates appeared to increase with the range of temperature occupied by the fish species. This association may be a result of the variability in the fall thermal structure of the lake. The estimates may also be influenced by other factors, such as pelagic behavior and schooling. Measurement error might be reduced by surveying the fish community during other seasons and/or by using additional technologies, such as acoustics. Measurement-error estimates should be considered when interpreting results of assessments that use abundance information from USGS-GLSC surveys of Lake Michigan and could be used if the survey design were altered. This study is the first to report estimates of measurement-error variance associated with this survey.
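
    Assuming the reported variances are on the natural-log scale, the quoted coefficients of variation follow from the lognormal relation CV = sqrt(exp(s^2) - 1), up to rounding:

    ```python
    import math

    for species, s2 in [("deepwater sculpin", 0.37), ("alewife", 1.23)]:
        cv = math.sqrt(math.exp(s2) - 1.0)       # lognormal CV on original scale
        print(f"{species}: s2 = {s2} -> CV ~ {100 * cv:.0f}%")   # ~67% and ~156%
    ```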

  2. Error Measurements in an Acousto-Optic Tunable Filter Fiber Bragg Grating Sensor System

    DTIC Science & Technology

    1994-05-01

    ...Acousto-Optic Tunable Filter--Fiber Bragg Grating (AOTF-FBG) system. This analysis was targeted to investigate the measurement error in the AOTF-FBG system... Keywords: fiber Bragg grating, wavelength division multiplexing, acousto-optic tunable filter.

  3. Small Inertial Measurement Units - Sources of Error and Limitations on Accuracy

    NASA Technical Reports Server (NTRS)

    Hoenk, M. E.

    1994-01-01

    Limits on the precision of small accelerometers for inertial measurement units are enumerated and discussed. Scaling laws and errors which affect the precision are discussed in terms of tradeoffs between size, sensitivity, and cost.

  4. Violation of Heisenberg's error-disturbance uncertainty relation in neutron-spin measurements

    NASA Astrophysics Data System (ADS)

    Sulyok, Georg; Sponar, Stephan; Erhart, Jacqueline; Badurek, Gerald; Ozawa, Masanao; Hasegawa, Yuji

    2013-08-01

    In its original formulation, Heisenberg's uncertainty principle dealt with the relationship between the error of a quantum measurement and the disturbance thereby induced on the measured object. Meanwhile, Heisenberg's heuristic arguments have turned out to be correct only for special cases. An alternative universally valid relation was derived by Ozawa in 2003. Here, we demonstrate that Ozawa's predictions hold for projective neutron-spin measurements. The experimental inaccessibility of error and disturbance claimed elsewhere has been overcome using a tomographic method. By a systematic variation of experimental parameters in the entire configuration space, the physical behavior of error and disturbance for projective spin-1/2 measurements is illustrated comprehensively. The violation of Heisenberg's original relation, as well as the validity of Ozawa's relation, become manifest. In addition, our results demonstrate that the widespread assumption of a reciprocal relation between error and disturbance is not valid in general.
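
    For reference, the two relations under test can be stated compactly, with ε the measurement error on observable A, η the disturbance imparted to B, and σ the standard deviation in the initial state:

    ```latex
    % Heisenberg's original error-disturbance relation (violated here):
    \varepsilon(A)\,\eta(B) \;\ge\; \tfrac{1}{2}\bigl|\langle[A,B]\rangle\bigr|
    % Ozawa's universally valid relation (confirmed here):
    \varepsilon(A)\,\eta(B) + \varepsilon(A)\,\sigma(B) + \sigma(A)\,\eta(B)
      \;\ge\; \tfrac{1}{2}\bigl|\langle[A,B]\rangle\bigr|
    ```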

  5. Image pre-filtering for measurement error reduction in digital image correlation

    NASA Astrophysics Data System (ADS)

    Zhou, Yihao; Sun, Chen; Song, Yuntao; Chen, Jubing

    2015-02-01

    In digital image correlation, the sub-pixel intensity interpolation causes a systematic error in the measured displacements. The error increases toward the high-frequency components of the speckle pattern. In practice, a captured image is usually corrupted by additive white noise. The noise introduces additional energy at high frequencies and therefore raises the systematic error. Meanwhile, the noise also elevates the random error, which increases with the noise power. In order to reduce both the systematic and the random error of the measurements, we apply a pre-filtering to the images prior to the correlation so that the high-frequency content is suppressed. Two spatial-domain filters (binomial and Gaussian) and two frequency-domain filters (Butterworth and Wiener) are tested on speckle images undergoing both simulated and real-world translations. By evaluating the errors for various combinations of speckle patterns, interpolators, noise levels, and filter configurations, we come to the following conclusions. All four filters are able to reduce the systematic error. Meanwhile, the random error can also be reduced if the signal power is mainly distributed around DC. For high-frequency speckle patterns, the low-pass filters (binomial, Gaussian and Butterworth) slightly increase the random error, and the Butterworth filter produces the lowest random error among them. By using a Wiener filter with over-estimated noise power, the random error can be reduced, but the resulting systematic error is higher than that of the low-pass filters. In general, the Butterworth filter is recommended for error reduction due to its flexibility of passband selection and maximal preservation of the allowed frequencies. The binomial filter enables efficient implementation and thus becomes a good option if computational cost is a critical issue. When used together with pre-filtering, the B-spline interpolator produces lower systematic error than the bicubic interpolator and a similar level of random error.
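
    A binomial pre-filter of the kind tested here amounts to a separable 1-2-1 convolution per axis. The sketch below is our own minimal version (using SciPy's 1-D convolution), applied before the correlation step to suppress the high spatial frequencies that drive the interpolation-induced systematic error:

    ```python
    import numpy as np
    from scipy.ndimage import convolve1d

    def binomial_prefilter(image, passes=1):
        """Separable 1-2-1 binomial low-pass filter for a speckle image."""
        kernel = np.array([1.0, 2.0, 1.0]) / 4.0
        out = np.asarray(image, dtype=float)
        for _ in range(passes):
            out = convolve1d(out, kernel, axis=0, mode="reflect")
            out = convolve1d(out, kernel, axis=1, mode="reflect")
        return out

    speckle = np.random.default_rng(4).random((64, 64))   # stand-in image
    smoothed = binomial_prefilter(speckle, passes=2)
    ```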

  6. On the errors in measuring the particle density by the light absorption method

    SciTech Connect

    Ochkin, V. N.

    2015-04-15

    The accuracy of absorption measurements of the density of particles in a given quantum state as a function of the light absorption coefficient is analyzed. Errors caused by the finite accuracy in measuring the intensity of the light passing through a medium in the presence of different types of noise in the recorded signal are considered. Optimal values of the absorption coefficient and the factors capable of multiplying errors when deviating from these values are determined.

  7. Phase-modulation method for AWG phase-error measurement in the frequency domain.

    PubMed

    Takada, Kazumasa; Hirose, Tomohiro

    2009-12-15

    We report a phase-modulation method for measuring arrayed waveguide grating (AWG) phase error in the frequency domain. By combining the method with a digital sampling technique that we have already reported, we can measure the phase error to within an accuracy of ±0.055 rad for the center 90% of waveguides in the array, even when no carrier frequencies are generated in the beat signal from the interferometer.

  8. Comparison of transmission error predictions with noise measurements for several spur and helical gears

    NASA Astrophysics Data System (ADS)

    Houser, Donald R.; Oswald, Fred B.; Valco, Mark J.; Drago, Raymond J.; Lenski, Joseph W., Jr.

    1994-06-01

    Measured sound power data from eight different spur, single helical, and double helical gear designs are compared with transmission error predictions from the Load Distribution Program. The sound power data were taken from the recent Army-funded Advanced Rotorcraft Transmission project. Tests were conducted in the NASA gear noise rig. Comparisons of test data with transmission error predictions are made for each harmonic of mesh frequency at several operating conditions. In general, the transmission error predictions compare favorably with the measured noise levels.

  9. Comparison of Transmission Error Predictions with Noise Measurements for Several Spur and Helical Gears

    NASA Technical Reports Server (NTRS)

    Houser, Donald R.; Oswald, Fred B.; Valco, Mark J.; Drago, Raymond J.; Lenski, Joseph W., Jr.

    1994-01-01

    Measured sound power data from eight different spur, single helical, and double helical gear designs are compared with transmission error predictions from the Load Distribution Program. The sound power data were taken from the recent Army-funded Advanced Rotorcraft Transmission project. Tests were conducted in the NASA gear noise rig. Comparisons of test data with transmission error predictions are made for each harmonic of mesh frequency at several operating conditions. In general, the transmission error predictions compare favorably with the measured noise levels.

  10. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation.

    PubMed

    Li, Tao; Yuan, Gannan; Li, Wang

    2016-03-15

    The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the attitude errors are small enough that the direction cosine matrix (DCM) can be approximated or simplified using small-angle attitude errors. However, this simplification of the DCM introduces errors into the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) that incorporates the error of the DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. Zero velocity and zero position serve as the reference points and innovations in the state estimation of the particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of the PF is better than that of the KF and that the PF with the NNEM can effectively restrain the errors of the system states, especially the azimuth, velocity, and height in the quasi-stationary condition.

  11. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation

    PubMed Central

    Li, Tao; Yuan, Gannan; Li, Wang

    2016-01-01

    The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the attitude errors are small enough that the direction cosine matrix (DCM) can be approximated or simplified using small-angle attitude errors. However, this simplification of the DCM introduces errors into the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) that incorporates the error of the DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. Zero velocity and zero position serve as the reference points and innovations in the state estimation of the particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of the PF is better than that of the KF and that the PF with the NNEM can effectively restrain the errors of the system states, especially the azimuth, velocity, and height in the quasi-stationary condition. PMID:26999130

  12. Functional and Structural Methods with Mixed Measurement Error and Misclassification in Covariates.

    PubMed

    Yi, Grace Y; Ma, Yanyuan; Spiegelman, Donna; Carroll, Raymond J

    2015-06-01

    Covariate measurement imprecision or errors arise frequently in many areas. It is well known that ignoring such errors can substantially degrade the quality of inference or even yield erroneous results. Although in practice both covariates subject to measurement error and covariates subject to misclassification can occur, research attention in the literature has mainly focused on addressing either one of these problems separately. To fill this gap, we develop estimation and inference methods that accommodate both characteristics simultaneously. Specifically, we consider measurement error and misclassification in generalized linear models under the scenario that an external validation study is available, and systematically develop a number of effective functional and structural methods. Our methods can be applied to different situations to meet various objectives.

  13. Getting satisfied with "satisfaction of search": How to measure errors during multiple-target visual search.

    PubMed

    Biggs, Adam T

    2017-03-28

    Visual search studies are common in cognitive psychology, and the results generally focus upon accuracy, response times, or both. Most research has focused upon search scenarios where no more than 1 target will be present for any single trial. However, if multiple targets can be present on a single trial, it introduces an additional source of error because the found target can interfere with subsequent search performance. These errors have been studied thoroughly in radiology for decades, although their emphasis in cognitive psychology studies has been more recent. One particular issue with multiple-target search is that these subsequent search errors (i.e., specific errors which occur following a found target) are measured differently by different studies. There is currently no guidance as to which measurement method is best or what impact different measurement methods could have upon various results and conclusions. The current investigation provides two efforts to address these issues. First, the existing literature is reviewed to clarify the appropriate scenarios where subsequent search errors could be observed. Second, several different measurement methods are used with several existing datasets to contrast and compare how each method would have affected the results and conclusions of those studies. The evidence is then used to provide appropriate guidelines for measuring multiple-target search errors in future studies.

  14. Error analysis of cine phase contrast MRI velocity measurements used for strain calculation.

    PubMed

    Jensen, Elisabeth R; Morrow, Duane A; Felmlee, Joel P; Odegard, Gregory M; Kaufman, Kenton R

    2015-01-02

    Cine Phase Contrast (CPC) MRI offers unique insight into localized skeletal muscle behavior by providing the ability to quantify muscle strain distribution during cyclic motion. Muscle strain is obtained by temporally integrating and spatially differentiating CPC-encoded velocity. The aim of this study was to quantify CPC measurement accuracy and precision and to describe error propagation into displacement and strain. Using an MRI-compatible jig to move a B-gel phantom within a 1.5 T MRI bore, CPC-encoded velocities were collected. The three orthogonal encoding gradients (through-plane, frequency, and phase) were evaluated independently in post-processing. Two types of systematic error were corrected: eddy current-induced bias and calibration-type error. Measurement accuracy and precision were quantified before and after removal of systematic error. Through-plane- and frequency-encoded data accuracy was within 0.4 mm/s after removal of systematic error, a 70% improvement over the raw data. Corrected phase-encoded data accuracy was within 1.3 mm/s. Measured random error was between 1 and 1.4 mm/s, which followed the theoretical prediction. Propagation of random measurement error into displacement and strain was found to depend on the number of tracked time segments, time segment duration, mesh size, and dimensional order. To verify this, theoretical predictions were compared to experimentally calculated displacement and strain errors. For the parameters tested, experimental and theoretical results aligned well. Random strain error approximately halved with a two-fold mesh size increase, as predicted. Displacement and strain accuracy were within 2.6 mm and 3.3%, respectively. These results can be used to predict the accuracy and precision of displacement and strain in user-specific applications.

  15. Statistical and systematic errors in redshift-space distortion measurements from large surveys

    NASA Astrophysics Data System (ADS)

    Bianchi, D.; Guzzo, L.; Branchini, E.; Majerotto, E.; de la Torre, S.; Marulli, F.; Moscardini, L.; Angulo, R. E.

    2012-12-01

    We investigate the impact of statistical and systematic errors on measurements of linear redshift-space distortions (RSD) in future cosmological surveys by analysing large catalogues of dark matter haloes from the baryonic acoustic oscillation simulations at the Institute for Computational Cosmology. These allow us to estimate the dependence of errors on typical survey properties, such as volume, galaxy density and mass (i.e. bias factor) of the adopted tracer. We find that measurements of the specific growth rate β = f/b using the Hamilton/Kaiser harmonic expansion of the redshift-space correlation function ξ(rp, π) on scales larger than 3 h-1 Mpc are typically underestimated by up to 10 per cent for galaxy-sized haloes. This is significantly larger than the corresponding statistical errors, which amount to a few per cent, indicating the importance of non-linear improvements to the Kaiser model for obtaining accurate measurements of the growth rate. The systematic error shows a diminishing trend with increasing bias value (i.e. mass) of the haloes considered. We compare the amplitude and trends of statistical errors as a function of survey parameters to predictions obtained with the Fisher information matrix technique, which is what is usually adopted to produce RSD forecasts, based on the Feldman-Kaiser-Peacock prescription for the errors on the power spectrum. We show that this produces parameter errors fairly similar to the standard deviations from the halo catalogues, provided it is applied to strictly linear scales in Fourier space (k<0.2 h Mpc-1). Finally, we combine our measurements to define and calibrate an accurate scaling formula for the relative error on β as a function of the same parameters, which closely matches the simulation results in all explored regimes. This provides a handy and plausibly more realistic alternative to the Fisher matrix approach for quickly and accurately predicting the statistical errors on RSD expected from future surveys.

  16. The estimation error covariance matrix for the ideal state reconstructor with measurement noise

    NASA Technical Reports Server (NTRS)

    Polites, Michael E.

    1988-01-01

    A general expression is derived for the state estimation error covariance matrix for the Ideal State Reconstructor when the input measurements are corrupted by measurement noise. An example is presented which shows that the more measurements used in estimating the state at a given time, the better the estimator.

  17. Experimental Test of Error-Disturbance Uncertainty Relations by Weak Measurement

    NASA Astrophysics Data System (ADS)

    Kaneda, Fumihiro; Baek, So-Young; Ozawa, Masanao; Edamatsu, Keiichi

    2014-01-01

    We experimentally test the error-disturbance uncertainty relation (EDR) in generalized, strength-variable measurement of a single photon polarization qubit, making use of weak measurement that keeps the initial signal state practically unchanged. We demonstrate that the Heisenberg EDR is violated, yet the Ozawa and Branciard EDRs are valid throughout the range of our measurement strength.

  18. [Instrumentation for blood pressure measurements: historical aspects, concepts and sources of error].

    PubMed

    de Araujo, T L; Arcuri, E A; Martins, E

    1998-04-01

    According to the International Council of Nurses, the measurement of blood pressure is the procedure most frequently performed by nurses worldwide. The aim of this study is to analyse the controversial aspects of the instruments used in blood pressure measurement. Based on an analysis of the literature and the American Heart Association recommendations, the main sources of error in blood pressure measurement are discussed.

  19. How reproducibly can human ear ossicles be measured? A study of inter-observer error.

    PubMed

    Flohr, Stefan; Leckelt, Jasmin; Kierdorf, Uwe; Kierdorf, Horst

    2010-12-01

    Ear ossicles have thus far received little attention in biological anthropology. For the use of these bones as a source of biological information, it is important to know how reproducibly they can be measured. We determined inter-observer errors for measurements recorded by two observers on mallei (N = 119) and incudes (N = 124) obtained from human skeletons recovered from an early medieval cemetery in southern Germany. Measurements were taken on-screen on images of the bones obtained with a digital microscope. In the case of separately acquired images, mean inter-observer error ranged between 0.50 and 9.59% (average: 2.63%) for malleus measurements and between 0.67 and 7.11% (average: 2.01%) for incus measurements. Coefficients of reliability ranged between 0.72 and 0.99 for the malleus measurements and between 0.61 and 0.98 for those of the incus. Except for one incus measurement, readings performed by the two observers on the same set of photographs produced lower inter-observer errors and higher coefficients of reliability than the method involving separate acquisition of images by the observers. Across all linear measurements, absolute inter-observer error was independent of the mean size of the measured variable for both bones. So far, studies on human ear ossicles have largely neglected the issue of measurement error and its potential implication for the interpretation of the data. Knowledge of measurement error is of special importance if results obtained by different researchers are combined into a single database. It is, therefore, suggested that the reproducibility of measurements should be addressed in all future studies of ear ossicles.
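
    Conventions for quantifying inter-observer error differ across studies, which is part of the authors' point. One common anthropometric choice, sketched below with invented malleus lengths, reports the technical error of measurement (TEM), the relative TEM in percent, and a coefficient of reliability R = 1 - TEM^2/s^2:

    ```python
    import numpy as np

    def tem_and_reliability(obs1, obs2):
        """TEM, relative TEM (%), and reliability R for paired readings
        taken by two observers on the same specimens."""
        a, b = np.asarray(obs1, float), np.asarray(obs2, float)
        d = a - b
        tem = np.sqrt(np.sum(d ** 2) / (2 * d.size))
        combined = np.concatenate([a, b])
        rel_tem = 100.0 * tem / combined.mean()
        return tem, rel_tem, 1.0 - tem ** 2 / combined.var(ddof=1)

    o1 = [7.9, 8.1, 8.4, 7.6, 8.0, 8.3]   # observer 1, malleus length (mm)
    o2 = [8.0, 8.0, 8.5, 7.7, 8.2, 8.2]   # observer 2, same bones
    tem, rel_tem, r = tem_and_reliability(o1, o2)
    print(f"TEM = {tem:.3f} mm, relative TEM = {rel_tem:.1f}%, R = {r:.2f}")
    ```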

  20. SOME DEVELOPMENTS IN MANAGEMENT SCIENCE AND INFORMATION SYSTEMS WITH RESPECT TO MEASUREMENT IN ACCOUNTING.

    DTIC Science & Technology

    information systems are discussed in the context of accounting measurement. Data requirements for implementation of the new planning and control techniques are considered and compared with data furnished by accounting reports. Input data and aggregation in contemporary information systems are compared with recording and classification in conventional accounting systems. It is proposed that accounting measurement principles be developed for data in ’micro’ units, much smaller than the transaction, which serve as data inputs in on-line

  1. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    PubMed Central

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2014-01-01

    Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as with GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative measurement errors on DEM construction and on the estimate of landslide mass volume from the constructed DEM. PMID:24434880

  2. Adjustment of measurements with multiplicative errors: error analysis, estimates of the variance of unit weight, and effect on volume estimation from LiDAR-type digital elevation models.

    PubMed

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2014-01-10

    Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as with GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative measurement errors on DEM construction and on the estimate of landslide mass volume from the constructed DEM.

  3. Measuring radiation induced changes in the error rate of fiber optic data links

    NASA Astrophysics Data System (ADS)

    Decusatis, Casimer; Benedict, Mel

    1996-12-01

    The purpose of this work is to investigate the effects of ionizing (gamma) radiation exposure on the bit error rate (BER) of an optical fiber data communication link. While it is known that exposure to high radiation dose rates will darken optical fiber permanently, comparatively little work has been done to evaluate moderate dose rates. The resulting increase in fiber attenuation over time represents an additional penalty in the link optical power budget, which can degrade the BER if it is not accounted for in the link design. Modeling the link to predict this penalty is difficult, and it requires detailed information about the fiber composition that may not be available to the link designer. We describe a laboratory method for evaluating the effects of moderate dose rates on both single-mode and multimode fiber. Once a sample of fiber has been measured, the data can be fit to a simple model for predicting (at least to first order) the BER as a function of radiation dose for fibers of similar composition.

  4. A new accuracy measure based on bounded relative error for time series forecasting

    PubMed Central

    Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M.

    2017-01-01

    Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred. PMID:28339480

  5. A new accuracy measure based on bounded relative error for time series forecasting.

    PubMed

    Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M

    2017-01-01

    Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred.
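
    Following the definitions in this paper, each error is bounded against a benchmark forecast before averaging, and the mean is then unscaled; UMBRAE < 1 means the method beats the benchmark. A minimal sketch with invented data and a naive (previous-value) benchmark:

    ```python
    import numpy as np

    def umbrae(actual, forecast, benchmark):
        """Unscaled Mean Bounded Relative Absolute Error (Chen et al., 2017)."""
        actual = np.asarray(actual, float)
        e = np.abs(actual - np.asarray(forecast, float))
        e_star = np.abs(actual - np.asarray(benchmark, float))
        brae = e / (e + e_star)        # bounded in [0, 1]; ties at 0/0 need a rule
        mbrae = brae.mean()
        return mbrae / (1.0 - mbrae)

    y = np.array([10.0, 12.0, 11.0, 13.0, 14.0])
    f = np.array([11.5, 11.2, 13.4, 13.8])        # forecasts for t = 1..4
    b = y[:-1]                                    # naive benchmark: previous value
    print(f"UMBRAE = {umbrae(y[1:], f, b):.3f}")  # < 1: better than the benchmark
    ```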

  6. Measurement of Turbulence with Acoustic Doppler Current Profilers - Sources of Error and Laboratory Results

    USGS Publications Warehouse

    Nystrom, E.A.; Oberg, K.A.; Rehmann, C.R.; ,

    2002-01-01

    Acoustic Doppler current profilers (ADCPs) provide a promising method for measuring surface-water turbulence because they can provide data over a large spatial range in a relatively short time with relative ease. Potential sources of error in turbulence measurements made with ADCPs include inaccuracy of Doppler-shift measurements, poor temporal and spatial measurement resolution, and inaccuracy of multi-dimensional velocities resolved from one-dimensional velocities measured at separate locations. Results from laboratory measurements of mean velocity and turbulence statistics made with two pulse-coherent ADCPs in 0.87 meters of water are used to illustrate several of the inherent sources of error in ADCP turbulence measurements. Results show that processing algorithms and beam configurations have important effects on turbulence measurements. ADCPs can provide reasonable estimates of many turbulence parameters; however, the accuracy of turbulence measurements made with commercially available ADCPs is often poor in comparison to standard measurement techniques.

  7. Task committee on experimental uncertainty and measurement errors in hydraulic engineering: An update

    USGS Publications Warehouse

    Wahlin, B.; Wahl, T.; Gonzalez-Castro, J. A.; Fulford, J.; Robeson, M.

    2005-01-01

    As part of their long range goals for disseminating information on measurement techniques, instrumentation, and experimentation in the field of hydraulics, the Technical Committee on Hydraulic Measurements and Experimentation formed the Task Committee on Experimental Uncertainty and Measurement Errors in Hydraulic Engineering in January 2003. The overall mission of this Task Committee is to provide information and guidance on the current practices used for describing and quantifying measurement errors and experimental uncertainty in hydraulic engineering and experimental hydraulics. The final goal of the Task Committee on Experimental Uncertainty and Measurement Errors in Hydraulic Engineering is to produce a report on the subject that will cover: (1) sources of error in hydraulic measurements, (2) types of experimental uncertainty, (3) procedures for quantifying error and uncertainty, and (4) special practical applications that range from uncertainty analysis for planning an experiment to estimating uncertainty in flow monitoring at gaging sites and hydraulic structures. Currently, the Task Committee has adopted the first order variance estimation method outlined by Coleman and Steele as the basic methodology to follow when assessing the uncertainty in hydraulic measurements. In addition, the Task Committee has begun to develop its report on uncertainty in hydraulic engineering. This paper is intended as an update on the Task Committee's overall progress. Copyright ASCE 2005.
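
    The Coleman-and-Steele-style first-order method mentioned above propagates the input standard uncertainties through the data-reduction equation via its partial derivatives. A generic sketch (our own, with a made-up discharge example Q = V * A):

    ```python
    import numpy as np

    def first_order_uncertainty(f, x, u, eps=1e-6):
        """First-order (Taylor-series) uncertainty of y = f(x) for
        independent inputs x with standard uncertainties u."""
        x = np.asarray(x, float)
        grads = np.empty_like(x)
        for i in range(x.size):                  # central finite differences
            step = np.zeros_like(x)
            step[i] = eps * max(1.0, abs(x[i]))
            grads[i] = (f(x + step) - f(x - step)) / (2 * step[i])
        return float(np.sqrt(np.sum((grads * np.asarray(u, float)) ** 2)))

    q = lambda p: p[0] * p[1]                    # discharge = velocity * area
    u_q = first_order_uncertainty(q, x=[1.2, 3.5], u=[0.05, 0.10])
    print(f"Q = {1.2 * 3.5:.2f} m^3/s, u(Q) = {u_q:.2f} m^3/s")
    ```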

  8. Measuring Scale Errors in a Laser Tracker's Horizontal Angle Encoder Through Simple Length Measurement and Two-Face System Tests.

    PubMed

    Muralikrishnan, B; Blackburn, C; Sawyer, D; Phillips, S; Bridges, R

    2010-01-01

    We describe a method to estimate the scale errors in the horizontal angle encoder of a laser tracker in this paper. The method does not require expensive instrumentation such as a rotary stage or even a calibrated artifact. An uncalibrated but stable length is realized between two targets mounted on stands that are at tracker height. The tracker measures the distance between these two targets from different azimuthal positions (say, in intervals of 20° over 360°). Each target is measured in both front face and back face. Low order harmonic scale errors can be estimated from this data and may then be used to correct the encoder's error map to improve the tracker's angle measurement accuracy. We have demonstrated this for the second order harmonic in this paper. It is important to compensate for even order harmonics as their influence cannot be removed by averaging front face and back face measurements whereas odd orders can be removed by averaging. We tested six trackers from three different manufacturers. Two of those trackers are newer models introduced at the time of writing of this paper. For older trackers from two manufacturers, the length errors in a 7.75 m horizontal length placed 7 m away from a tracker were of the order of ± 65 μm before correcting the error map. They reduced to less than ± 25 μm after correcting the error map for second order scale errors. Newer trackers from the same manufacturers did not show this error. An older tracker from a third manufacturer also did not show this error.
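
    A minimal version of the harmonic-estimation step: fit a second-order Fourier term to the front/back-averaged length errors observed at each azimuth. The numbers below are invented placeholders for the two-target test described above, and in practice the fitted harmonic would be subtracted from the encoder's error map:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    az = np.arange(0.0, 360.0, 20.0)              # azimuth positions (deg)
    t = np.radians(az)
    err = 40.0 * np.cos(2 * t + 0.6) + rng.normal(0, 5, az.size)  # length error (um)

    # Least-squares fit of a*cos(2t) + b*sin(2t) + c.
    A = np.column_stack([np.cos(2 * t), np.sin(2 * t), np.ones_like(t)])
    (a, b, c), *_ = np.linalg.lstsq(A, err, rcond=None)
    amp, phase = np.hypot(a, b), np.arctan2(-b, a)
    print(f"2nd-harmonic amplitude ~ {amp:.0f} um, phase {np.degrees(phase):.0f} deg")
    ```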

  9. Theoretical and Experimental Errors for In Situ Measurements of Plant Water Potential 1

    PubMed Central

    Shackel, Kenneth A.

    1984-01-01

    Errors in psychrometrically determined values of leaf water potential caused by tissue resistance to water vapor exchange and by lack of thermal equilibrium were evaluated using commercial in situ psychrometers (Wescor Inc., Logan, UT) on leaves of Tradescantia virginiana (L.). Theoretical errors in the dewpoint method of operation for these sensors were demonstrated. After correction for these errors, in situ measurements of leaf water potential indicated substantial errors caused by tissue resistance to water vapor exchange (4 to 6% reduction in apparent water potential per second of cooling time used) resulting from humidity depletions in the psychrometer chamber during the Peltier condensation process. These errors were avoided by use of a modified procedure for dewpoint measurement. Large changes in apparent water potential were caused by leaf and psychrometer exposure to moderate levels of irradiance. These changes were correlated with relatively small shifts in psychrometer zero offsets (−0.6 to −1.0 megapascals per microvolt), indicating substantial errors caused by nonisothermal conditions between the leaf and the psychrometer. Explicit correction for these errors is not possible with the current psychrometer design. PMID:16663701

  10. Active and passive compensation of APPLE II-introduced multipole errors through beam-based measurement

    NASA Astrophysics Data System (ADS)

    Chung, Ting-Yi; Huang, Szu-Jung; Fu, Huang-Wen; Chang, Ho-Ping; Chang, Cheng-Hsiang; Hwang, Ching-Shiang

    2016-08-01

    The effects of an APPLE II-type elliptically polarized undulator (EPU) on the beam dynamics were investigated using active and passive methods. To reduce the tune shift and improve the injection efficiency, dynamic multipole errors were compensated using L-shaped iron shims, which resulted in stable top-up operation at the minimum gap. The skew quadrupole error was compensated using a multipole corrector located downstream of the EPU to minimize betatron coupling, ensuring enhancement of the synchrotron radiation brightness. The investigation methods, a numerical simulation algorithm, a multipole error correction method, and the beam-based measurement results are discussed.

  11. Error analysis of angular resolution for direct intercepting measurement laser warning equipment

    NASA Astrophysics Data System (ADS)

    Che, Jinxi; Zhang, Jinchun; Wang, Hongjun; Cheng, Bin

    2016-11-01

    Accurate warning and reconnaissance of incoming laser signals is a precondition for effective electro-optical jamming. However, the angular resolution error of laser warning equipment directly affects the accuracy of the warning. In this paper, the working mechanism of direct-intercepting-measurement laser warning equipment is analyzed, followed by the structure of its detector array system and the causes of angular resolution error. The resolution errors of laser warning equipment with different detecting units were calculated at different distances. The conclusions provide a useful reference for testing and evaluating such equipment.

  12. Error Reduction Methods for Integrated-path Differential-absorption Lidar Measurements

    NASA Technical Reports Server (NTRS)

    Chen, Jeffrey R.; Numata, Kenji; Wu, Stewart T.

    2012-01-01

    We report new modeling and error reduction methods for differential-absorption optical-depth (DAOD) measurements of atmospheric constituents using direct-detection integrated-path differential-absorption lidars. Errors from laser frequency noise are quantified in terms of the line center fluctuation and spectral line shape of the laser pulses, revealing relationships verified experimentally. A significant DAOD bias is removed by introducing a correction factor. Errors from surface height and reflectance variations can be reduced to tolerable levels by incorporating altimetry knowledge and "log after averaging", or by pointing the laser and receiver to a fixed surface spot during each wavelength cycle to shorten the time of "averaging before log".

  13. Error reduction by combining strapdown inertial measurement units in a baseball stitch

    NASA Astrophysics Data System (ADS)

    Tracy, Leah

    A poor musical performance is rarely due to an inferior instrument. When a device is underperforming, the temptation is to find a better device or a new technology to achieve performance objectives; however, another solution may be improving how existing technology is used through a better understanding of device characteristics, i.e., learning to play the instrument better. This thesis explores improving position and attitude estimates of inertial navigation systems (INS) through an understanding of inertial sensor errors, manipulating inertial measurement units (IMUs) to reduce that error, and multisensor fusion of multiple IMUs to reduce error in a GPS-denied environment.

  14. Effect of patient positions on measurement errors of the knee-joint space on radiographs

    NASA Astrophysics Data System (ADS)

    Gilewska, Grazyna

    2001-08-01

    Osteoarthritis (OA) is one of the most important health problems these days. It is one of the most frequent causes of pain and disability in middle-aged and old people. Nowadays the radiograph is the most economic and available tool to evaluate changes in OA. Errors in the acquisition of knee-joint radiographs are the basic problem in their evaluation for clinical research. The purpose of evaluating such radiographs in my study was to measure the knee-joint space on several radiographs performed at defined intervals. This study presents an attempt to evaluate the errors caused by the radiologist or by the patient; these errors resulted mainly from incorrect acquisition conditions or from patient positioning. Once we have information about the size of these errors, we will be able to assess which of these elements have the greatest influence on the accuracy and repeatability of knee-joint space measurements, and consequently to minimize their sources.

  15. Uncertainty quantification for radiation measurements: Bottom-up error variance estimation using calibration information.

    PubMed

    Burr, T; Croft, S; Krieger, T; Martin, K; Norman, C; Walsh, S

    2016-02-01

    One example of top-down uncertainty quantification (UQ) involves comparing two or more measurements on each of multiple items. One example of bottom-up UQ expresses a measurement result as a function of one or more input variables that have associated errors, such as a measured count rate, which individually (or collectively) can be evaluated for impact on the uncertainty in the resulting measured value. In practice, it is often found that top-down UQ exhibits larger error variances than bottom-up UQ, because some error sources are present in the fielded assay methods used in top-down UQ that are not present (or not recognized) in the assay studies used in bottom-up UQ. One would like better consistency between the two approaches in order to claim understanding of the measurement process. The purpose of this paper is to refine bottom-up uncertainty estimation by using calibration information so that if there are no unknown error sources, the refined bottom-up uncertainty estimate will agree with the top-down uncertainty estimate to within a specified tolerance. Then, in practice, if the top-down uncertainty estimate is larger than the refined bottom-up uncertainty estimate by more than the specified tolerance, there must be omitted sources of error beyond those predicted from calibration uncertainty. The paper develops a refined bottom-up uncertainty approach for four cases of simple linear calibration: (1) inverse regression with negligible error in predictors, (2) inverse regression with non-negligible error in predictors, (3) classical regression followed by inversion with negligible error in predictors, and (4) classical regression followed by inversion with non-negligible errors in predictors. Our illustrations are of general interest, but are drawn from our experience with nuclear material assay by non-destructive assay. The main example we use is gamma spectroscopy that applies the enrichment meter principle. Previous papers that ignore error in predictors
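
    To make one of the four cases concrete (classical regression followed by inversion with negligible error in predictors), here is a minimal Python sketch of propagating calibration uncertainty into an inverted assay value via the delta method. The linear model and all numbers are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Calibration standards x and measured responses y = a + b*x + noise (invented values)
    x_cal = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
    y_cal = 0.2 + 1.5 * x_cal + 0.05 * rng.standard_normal(x_cal.size)

    X = np.column_stack([np.ones_like(x_cal), x_cal])
    coef, res, *_ = np.linalg.lstsq(X, y_cal, rcond=None)
    a_hat, b_hat = coef
    s2 = res[0] / (x_cal.size - 2)             # residual variance estimate
    cov = s2 * np.linalg.inv(X.T @ X)          # covariance of (a_hat, b_hat)

    # Invert a new measurement y0, then propagate variance by the delta method
    y0 = 3.1
    x0 = (y0 - a_hat) / b_hat
    g = np.array([-1.0 / b_hat, -x0 / b_hat])  # gradient of x0 w.r.t. (a_hat, b_hat)
    var_x0 = g @ cov @ g + s2 / b_hat**2       # calibration + new-measurement noise
    print(f"assay value {x0:.3f} +/- {np.sqrt(var_x0):.3f}")
    ```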

  16. Interobserver error involved in independent attempts to measure cusp base areas of Pan M1s.

    PubMed

    Bailey, Shara E; Pilbrow, Varsha C; Wood, Bernard A

    2004-10-01

    Cusp base areas measured from digitized images increase the amount of detailed quantitative information one can collect from post-canine crown morphology. Although this method is gaining wide usage for taxonomic analyses of extant and extinct hominoids, the techniques for digitizing images and taking measurements differ between researchers. The aim of this study was to investigate interobserver error in order to help assess the reliability of cusp base area measurement within extant and extinct hominoid taxa. Two of the authors measured individual cusp base areas and total cusp base area of 23 maxillary first molars (M1) of Pan. From these, relative cusp base areas were calculated. No statistically significant interobserver differences were found for either absolute or relative cusp base areas. On average the hypocone and paracone showed the least interobserver error (<1%) whereas the protocone and metacone showed the most (2.6-4.5%). We suggest that the larger measurement error in the metacone and protocone is due primarily to weakly defined fissure patterns and/or the presence of accessory occlusal features. Overall, levels of interobserver error are similar to those found for intraobserver error. The results of our study suggest that if certain prescribed standards are employed then cusp and crown base areas measured by different individuals can be pooled into a single database.

  17. Measurement-device-independent quantum key distribution with source state errors and statistical fluctuation

    NASA Astrophysics Data System (ADS)

    Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin

    2017-03-01

    We show how to calculate the secure final key rate in the four-intensity decoy-state measurement-device-independent quantum key distribution protocol with both source errors and statistical fluctuations, given a certain failure probability. Our results rely on the range of only a few parameters in the source state. All imperfections in this protocol have been taken into consideration without assuming any specific error patterns of the source.

  18. Determination of error measurement by means of the basic magnetization curve

    NASA Astrophysics Data System (ADS)

    Lankin, M. V.; Lankin, A. M.

    2016-04-01

    The article describes a methodology for fault detection in electric cutting machines by means of the basic magnetization curve. Because the basic magnetization curve is an integral characteristic of the machine's operation, it allows a fault type to be identified. Accurate calculation of the measurement error of the basic magnetization curve therefore plays a major role, since inaccuracies in this characteristic can have a deleterious effect.

  19. Educational Assessment: Tests and Measurements in the Age of Accountability

    ERIC Educational Resources Information Center

    Wright, Robert J.

    2007-01-01

    Grounded in the real world of public schools and students, this engaging, insightful, and highly readable text introduces the inner-workings of K-12 educational assessment. It covers traditional topics in an approachable and understandable way; analyzes and interprets "hot-button" issues of today's complex measurement concerns; relates…

  20. Measurement of electromagnetic tracking error in a navigated breast surgery setup

    NASA Astrophysics Data System (ADS)

    Harish, Vinyas; Baksh, Aidan; Ungi, Tamas; Lasso, Andras; Baum, Zachary; Gauvin, Gabrielle; Engel, Jay; Rudan, John; Fichtinger, Gabor

    2016-03-01

    PURPOSE: The measurement of tracking error is crucial to ensure the safety and feasibility of electromagnetically tracked, image-guided procedures. Measurement should occur in a clinical environment because electromagnetic field distortion depends on positioning relative to the field generator and metal objects. However, we could not find an accessible and open-source system for calibration, error measurement, and visualization. We developed such a system and tested it in a navigated breast surgery setup. METHODS: A pointer tool was designed for concurrent electromagnetic and optical tracking. Software modules were developed for automatic calibration of the measurement system, real-time error visualization, and analysis. The system was taken to an operating room to test for field distortion in a navigated breast surgery setup. Positional and rotational electromagnetic tracking errors were then calculated using optical tracking as a ground truth. RESULTS: Our system is quick to set up and can be rapidly deployed. The process from calibration to visualization takes only a few minutes. Field distortion was measured in the presence of various surgical equipment. Positional and rotational errors in a clean field were approximately 0.90 mm and 0.31°. The presence of a surgical table, an electrosurgical cautery, and an anesthesia machine increased the error by up to a few tenths of a millimeter and a tenth of a degree. CONCLUSION: In a navigated breast surgery setup, measurement and visualization of tracking error define a safe working area in the presence of surgical equipment. Our system is available as an extension for the open-source 3D Slicer platform.
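
    For readers implementing a similar comparison, the error metrics described (position and rotation of EM tracking against an optical ground truth) reduce to the standard computations sketched below; this assumes both poses have already been expressed in a common coordinate frame, which is what the calibration step establishes.

    ```python
    import numpy as np

    def positional_error_mm(p_em, p_opt):
        """Euclidean distance between EM and optical positions of the same point."""
        return float(np.linalg.norm(np.asarray(p_em) - np.asarray(p_opt)))

    def rotational_error_deg(R_em, R_opt):
        """Angle of the relative rotation between two 3x3 rotation matrices."""
        R_rel = R_em @ R_opt.T
        cos_a = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
        return float(np.degrees(np.arccos(cos_a)))
    ```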

  1. Development of AN Optical Measuring System for Geometric Errors of a Miniaturized Machine Tool

    NASA Astrophysics Data System (ADS)

    Kweon, Sung-Hwan; Liu, Yu; Lee, Jae-Ha; Kim, Young-Suk; Yang, Seung-Han

    Recently, miniaturized machine tools (mMT) have become a promising micro/meso-mechanical manufacturing technique to overcome the material limitation and produce complex 3D meso-scale components with higher accuracy. To achieve sub-micron accuracy, geometric errors of a miniaturized machine tool should be identified and compensated. An optical multi-degree-of-freedom (DOF) measuring system, composed of one laser diode, two beam splitters and three position sensing detectors (PSDs), is proposed for simultaneous measurement of horizontal straightness, vertical straightness, pitch, yaw and roll errors along a moving axis of the mMT. A homogeneous transformation matrix (HTM) is used to derive the relationship between the readings of the PSDs and the geometric errors, and an error estimation algorithm is presented to calculate the geometric errors. Simulation is carried out to prove the estimation accuracy of this algorithm. In theory, the measurement resolution of the proposed system can reach 0.03 μm and 0.06 arcsec for translational and rotational errors, respectively.

  2. Evaluation of TRMM Ground-Validation Radar-Rain Errors Using Rain Gauge Measurements

    NASA Technical Reports Server (NTRS)

    Wang, Jianxin; Wolff, David B.

    2009-01-01

    Ground-validation (GV) radar-rain products are often utilized for validation of the Tropical Rainfall Measuring Mission (TRMM) space-based rain estimates, and hence, quantitative evaluation of the GV radar-rain product error characteristics is vital. This study uses quality-controlled gauge data to compare with TRMM GV radar rain rates in an effort to provide such error characteristics. The results show that significant differences between concurrent radar and gauge rain rates exist at various time scales ranging from 5 min to 1 day, despite a low overall long-term bias. However, the differences between the radar area-averaged rain rates and gauge point rain rates cannot be explained as due to radar error only. The error variance separation method is adapted to partition the variance of radar-gauge differences into the gauge area-point error variance and the radar rain estimation error variance. The results provide relatively reliable quantitative uncertainty evaluation of TRMM GV radar rain estimates at various time scales, and are helpful for better understanding the differences between measured radar and gauge rain rates. It is envisaged that this study will contribute to better utilization of GV radar rain products to validate versatile space-based rain estimates from TRMM, as well as the proposed Global Precipitation Measurement, and other satellites.
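
    A minimal sketch of the variance-partitioning idea behind the error variance separation method the abstract adapts: assuming the radar estimation error and the gauge area-point sampling error are uncorrelated, their variances add, so the radar error variance follows by subtraction. The estimate of the area-point variance itself comes from a separate gauge-network spatial analysis not shown here.

    ```python
    import numpy as np

    def radar_error_variance(radar, gauge, var_area_point):
        """Partition Var(radar - gauge) assuming uncorrelated error sources:
        Var(R - G) = Var(radar error) + Var(gauge area-point error)."""
        var_diff = np.var(np.asarray(radar) - np.asarray(gauge), ddof=1)
        return var_diff - var_area_point  # variance attributable to the radar estimate
    ```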

  3. 50 CFR 648.262 - Accountability measures for red crab limited access vessels.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 50 Wildlife and Fisheries 10 2011-10-01 2011-10-01 false Accountability measures for red crab... UNITED STATES Management Measures for the Atlantic Deep-Sea Red Crab Fishery § 648.262 Accountability measures for red crab limited access vessels. (a) Closure authority. NMFS shall close the EEZ to...

  4. 50 CFR 648.262 - Accountability measures for red crab limited access vessels.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Accountability measures for red crab... UNITED STATES Management Measures for the Atlantic Deep-Sea Red Crab Fishery § 648.262 Accountability measures for red crab limited access vessels. (a) Closure authority. NMFS shall close the EEZ to...

  5. 50 CFR 648.262 - Accountability measures for red crab limited access vessels.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Accountability measures for red crab... UNITED STATES Management Measures for the Atlantic Deep-Sea Red Crab Fishery § 648.262 Accountability measures for red crab limited access vessels. (a) Closure authority. NMFS shall close the EEZ to...

  6. Estimating the anomalous diffusion exponent for single particle tracking data with measurement errors - An alternative approach

    PubMed Central

    Burnecki, Krzysztof; Kepten, Eldad; Garini, Yuval; Sikora, Grzegorz; Weron, Aleksander

    2015-01-01

    Accurately characterizing the anomalous diffusion of a tracer particle has become a central issue in biophysics. However, measurement errors raise difficulties in the characterization of single trajectories, which is usually performed through the time-averaged mean square displacement (TAMSD). In this paper, we study a fractionally integrated moving average (FIMA) process as an appropriate model for anomalous diffusion data with measurement errors. We compare FIMA and traditional TAMSD estimators for the anomalous diffusion exponent. The ability of the FIMA framework to characterize dynamics in a wide range of anomalous exponents and noise levels through the simulation of a toy model (fractional Brownian motion disturbed by Gaussian white noise) is discussed. Comparison to the TAMSD technique shows that FIMA estimation is superior in many scenarios. This is expected to enable new measurement regimes for single particle tracking (SPT) experiments even in the presence of high measurement errors. PMID:26065707
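
    The TAMSD referred to above is straightforward to compute. The sketch below uses illustrative parameters and ordinary Brownian motion rather than the paper's fractional model; it also shows how additive measurement noise biases a naive log-log fit of the diffusion exponent at short lags, which is the problem FIMA is designed to address.

    ```python
    import numpy as np

    def tamsd(x, lag):
        """Time-averaged mean square displacement of a 1-D track at a given lag."""
        disp = x[lag:] - x[:-lag]
        return np.mean(disp**2)

    rng = np.random.default_rng(0)
    true_track = np.cumsum(rng.standard_normal(2048))            # ordinary diffusion, alpha = 1
    noisy_track = true_track + 2.0 * rng.standard_normal(2048)   # added measurement error

    lags = np.arange(1, 64)
    for track, label in [(true_track, "clean"), (noisy_track, "noisy")]:
        msd = np.array([tamsd(track, k) for k in lags])
        alpha = np.polyfit(np.log(lags), np.log(msd), 1)[0]      # naive log-log slope
        print(f"{label}: fitted alpha = {alpha:.2f}")
    ```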

  7. Reduction of positional errors in a four-point probe resistance measurement

    NASA Astrophysics Data System (ADS)

    Worledge, D. C.

    2004-03-01

    A method for reducing resistance errors due to inaccuracy in the positions of the probes in a collinear four-point probe resistance measurement of a thin film is presented. By using a linear combination of two measurements which differ by interchange of the I- and V- leads, positional errors can be eliminated to first order. Experimental data measured using microprobes show a substantial reduction in absolute error from 3.4% down to 0.01%-0.1%, and an improvement in precision by a factor of 2-4. The application of this technique to the current-in-plane tunneling method to measure electrical properties of unpatterned magnetic tunnel junction wafers is discussed.

  8. [Measurement Error Analysis and Calibration Technique of NTC - Based Body Temperature Sensor].

    PubMed

    Deng, Chi; Hu, Wei; Diao, Shengxi; Lin, Fujiang; Qian, Dahong

    2015-11-01

    An NTC thermistor-based wearable body temperature sensor was designed. This paper describes the design principles and realization method of the NTC-based body temperature sensor, and analyzes the temperature measurement error sources of the sensor in detail. An automatic measurement and calibration method for the ADC error is given. The results showed that the measurement accuracy of the calibrated body temperature sensor is better than ±0.04 °C. The temperature sensor offers the advantages of high accuracy, small size and low power consumption.
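
    As background, converting an NTC resistance to temperature is commonly done with the beta-model equation. The sketch below uses illustrative component values (a 10 kOhm divider and beta = 3950 K) that are assumptions, not the paper's design; the ADC offset/gain calibration the abstract describes would be applied to the raw code first.

    ```python
    import numpy as np

    R0, T0, BETA = 10_000.0, 298.15, 3950.0  # 10 kOhm at 25 C; beta in kelvin (assumed)

    def ntc_temperature_c(r_ntc):
        """Beta-equation conversion from NTC resistance (ohm) to temperature (deg C)."""
        inv_t = 1.0 / T0 + np.log(r_ntc / R0) / BETA
        return 1.0 / inv_t - 273.15

    def resistance_from_adc(code, r_series=10_000.0, adc_max=4095):
        """NTC on the low side of a divider against r_series; assumes ADC offset and
        gain errors have already been removed by the calibration step."""
        ratio = code / adc_max                    # fraction of supply at the divider tap
        return r_series * ratio / (1.0 - ratio)

    print(f"{ntc_temperature_c(resistance_from_adc(2048)):.2f} C")  # ~25 C at mid-scale
    ```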

  9. Dynamic Modeling Accuracy Dependence on Errors in Sensor Measurements, Mass Properties, and Aircraft Geometry

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2013-01-01

    A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively deteriorated and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.

  10. Assessment of measurement errors and dynamic calibration methods for three different tipping bucket rain gauges

    NASA Astrophysics Data System (ADS)

    Shedekar, Vinayak S.; King, Kevin W.; Fausey, Norman R.; Soboyejo, Alfred B. O.; Harmel, R. Daren; Brown, Larry C.

    2016-09-01

    Three different models of tipping bucket rain gauges (TBRs), viz. HS-TB3 (Hydrological Services Pty Ltd.), ISCO-674 (Isco, Inc.) and TR-525 (Texas Electronics, Inc.), were calibrated in the lab to quantify measurement errors across a range of rainfall intensities (5 mm·h−1 to 250 mm·h−1) and three different volumetric settings. Instantaneous and cumulative values of simulated rainfall were recorded at 1, 2, 5, 10 and 20-min intervals. All three TBR models showed a statistically significant deviation (α = 0.05) in measurements from actual rainfall depths, with increasing underestimation errors at greater rainfall intensities. Simple linear regression equations were developed for each TBR to correct the TBR readings based on measured intensities (R2 > 0.98). Additionally, two dynamic calibration techniques, viz. a quadratic model (R2 > 0.7) and a T vs. 1/Q model (R2 > 0.98), were tested and found to be useful in situations when the volumetric settings of TBRs are unknown. The correction models were successfully applied to correct field-collected rainfall data from the respective TBR models. The calibration parameters of the correction models were found to be highly sensitive to changes in the volumetric calibration of TBRs. Overall, the HS-TB3 model (with a better protected tipping bucket mechanism, and consistent measurement errors across a range of rainfall intensities) was found to be the most reliable and consistent for rainfall measurements, followed by the ISCO-674 (with susceptibility to clogging and relatively smaller measurement errors across a range of rainfall intensities) and the TR-525 (with high susceptibility to clogging and frequent changes in volumetric calibration, and highly intensity-dependent measurement errors). The study demonstrated that corrections based on dynamic and volumetric calibration can help minimize, but not completely eliminate, the measurement errors. The findings from this study will be useful for correcting field data from TBRs; and may have major
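
    The per-gauge linear correction described above amounts to a one-line regression. In this hedged sketch the calibration pairs are invented to mimic the reported under-catch behavior (increasing underestimation at higher intensity); they are not the paper's data.

    ```python
    import numpy as np

    # Calibration pairs: reference intensity (mm/h) vs. TBR-reported intensity (invented)
    true_i = np.array([5, 25, 50, 100, 150, 200, 250], dtype=float)
    tbr_i = np.array([5, 24, 47, 91, 133, 172, 210], dtype=float)

    # Simple linear regression: true = b0 + b1 * measured (the per-gauge correction model)
    b1, b0 = np.polyfit(tbr_i, true_i, 1)

    def correct_tbr(measured_intensity):
        """Apply the gauge-specific linear correction to a TBR reading."""
        return b0 + b1 * measured_intensity

    print(f"corrected: {correct_tbr(120.0):.1f} mm/h")
    ```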

  11. 50 CFR 640.28 - Annual catch limits (ACLs) and accountability measures (AMs).

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE SPINY LOBSTER FISHERY OF THE GULF... accountability measures (AMs). For recreational and commercial spiny lobster landings combined, the ACL is...

  12. Measurement of centering error for probe of swing arm profilometer using a spectral confocal sensor

    NASA Astrophysics Data System (ADS)

    Chen, Lin; Jing, Hongwei; Wei, Zhongwei; Cao, Xuedong

    2015-02-01

    A spectral confocal sensor was used to measure the centering error for the probe of a swing arm profilometer (SAP). The feasibility of this technique was proved through simulation and experiment. The final measurement results were also analyzed to evaluate the advantages and disadvantages of the technique.

  13. Prevention validation and accounting platform: a framework for establishing accountability and performance measures of substance abuse prevention programs.

    PubMed

    Kim, S; McLeod, J H; Williams, C; Hepler, N

    2000-01-01

    The field of substance abuse prevention has neither an overarching conceptual framework nor a set of shared terminologies for establishing the accountability and performance outcome measures of substance abuse prevention services rendered. Hence, there is a wide gap between the data we currently have on one hand and the information required to meet the performance goals and accountability measures set by the Government Performance and Results Act of 1993 on the other. The task before us is: How can we establish the accountability and performance measures of substance abuse prevention programs and transform the field of prevention into prevention science? The intent of this volume is to serve that purpose and accelerate the processes of this transformation by identifying the requisite components of the transformation (i.e., theory, methodology, convention on terms, and data) and by introducing an open forum called the Prevention Validation and Accounting (PREVA) Platform. The entire PREVA Platform (for short, the Platform) is designed as an analytic framework, formulated from a collection of common concepts, terminologies, accounting units, protocols for counting the units, data elements, operationalizations of various constructs, and other summary measures intended to bring about an efficient and effective measurement of process input, program capacity, process output, performance outcome, and societal impact of substance abuse prevention programs. The measurement units and summary data elements are designed to be measured across time and across jurisdictions, i.e., from local to regional to state to national levels. In the Platform, the process input is captured by two dimensions of time and capital. Time is conceptualized in terms of service delivery time and time spent on research and development. Capital is measured by the monies expended for the delivery of program activities during a fiscal or reporting period. Program capacity is captured

  14. A Kernel-based Account of Bibliometric Measures

    NASA Astrophysics Data System (ADS)

    Ito, Takahiko; Shimbo, Masashi; Kudo, Taku; Matsumoto, Yuji

    The application of kernel methods to citation analysis is explored. We show that a family of kernels on graphs provides a unified perspective on the three bibliometric measures that have been discussed independently: relatedness between documents, global importance of individual documents, and importance of documents relative to one or more (root) documents (relative importance). The framework provided by the kernels establishes relative importance as an intermediate between relatedness and global importance, in which the degree of `relativity,' or the bias between relatedness and importance, is naturally controlled by a parameter characterizing individual kernels in the family.

  15. Previous estimates of mitochondrial DNA mutation level variance did not account for sampling error: comparing the mtDNA genetic bottleneck in mice and humans.

    PubMed

    Wonnapinij, Passorn; Chinnery, Patrick F; Samuels, David C

    2010-04-09

    In cases of inherited pathogenic mitochondrial DNA (mtDNA) mutations, a mother and her offspring generally have large and seemingly random differences in the amount of mutated mtDNA that they carry. Comparisons of measured mtDNA mutation level variance values have become an important issue in determining the mechanisms that cause these large random shifts in mutation level. These variance measurements have been made with samples of quite modest size, which should be a source of concern because higher-order statistics, such as variance, are poorly estimated from small sample sizes. We have developed an analysis of the standard error of variance from a sample of size n, and we have defined error bars for variance measurements based on this standard error. We calculate variance error bars for several published sets of measurements of mtDNA mutation level variance and show how the addition of the error bars alters the interpretation of these experimental results. We compare variance measurements from human clinical data and from mouse models and show that the mutation level variance is clearly higher in the human data than it is in the mouse models at both the primary oocyte and offspring stages of inheritance. We discuss how the standard error of variance can be used in the design of experiments measuring mtDNA mutation level variance. Our results show that variance measurements based on fewer than 20 measurements are generally unreliable and ideally more than 50 measurements are required to reliably compare variances with less than a 2-fold difference.
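
    For intuition, under a normality assumption the standard error of a sample variance has a closed form, Var(s^2) = 2*sigma^4/(n - 1), and the sketch below builds error bars on that approximation. The paper's treatment is more general, so treat this as a first-order version of the idea.

    ```python
    import numpy as np

    def variance_with_error_bar(sample):
        """Sample variance plus an approximate standard error, using the
        normal-theory result Var(s^2) = 2*sigma^4 / (n - 1)."""
        x = np.asarray(sample, dtype=float)
        n = x.size
        s2 = np.var(x, ddof=1)
        return s2, s2 * np.sqrt(2.0 / (n - 1))

    rng = np.random.default_rng(0)
    for n in (10, 20, 50, 200):
        s2, se = variance_with_error_bar(rng.standard_normal(n))
        print(f"n={n:4d}  variance={s2:.2f} +/- {se:.2f}")
    ```

    At n = 20 the relative standard error is roughly s2 * sqrt(2/19), about a third of the variance itself, which is consistent with the authors' caution about comparing variances from fewer than 20 measurements.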

  16. Systematic Errors in the Measurement of Emissivity Caused by Directional Effects

    NASA Astrophysics Data System (ADS)

    Kribus, Abraham; Vishnevetsky, Irna; Rotenberg, Eyal; Yakir, Dan

    2003-04-01

    Accurate knowledge of surface emissivity is essential for applications in remote sensing (remote temperature measurement), radiative transport, and modeling of environmental energy balances. Direct measurements of surface emissivity are difficult when there is considerable background radiation at the same wavelength as the emitted radiation. This occurs, for example, when objects at temperatures near room temperature are measured in a terrestrial environment by use of the infrared 8-14 μm band. This problem is usually treated by assumption of a perfectly diffuse surface or of diffuse background radiation. However, real surfaces and actual background radiation are not diffuse; therefore there will be a systematic measurement error. It is demonstrated that, in some cases, the deviations from a diffuse behavior lead to large errors in the measured emissivity. Past measurements made with simplifying assumptions should therefore be reevaluated and corrected. Recommendations are presented for improving experimental procedures in emissivity measurement.

  17. Theoretical estimation of systematic errors in local deformation measurements using digital image correlation

    NASA Astrophysics Data System (ADS)

    Xu, Xiaohai; Su, Yong; Zhang, Qingchuan

    2017-01-01

    The measurement accuracy using the digital image correlation (DIC) method in local deformations such as the Portevin-Le Chatelier bands, the deformations near a gap, and crack tips has raised a major concern. The measured displacement and strain results are heavily affected by the calculation parameters (such as the subset size, the grid step, and the strain window size) due to under-matched shape functions (for displacement measurement) and surface fitting functions (for strain calculation). To evaluate the systematic errors in local deformations, theoretical estimations and approximations of displacement and strain systematic errors have been deduced for the case where first-order shape functions and quadric surface fitting functions are employed. The main results are as follows: (1) the approximate displacement systematic errors are proportional to the second-order displacement gradients, and the ratio is determined only by the subset size; (2) the approximate strain systematic errors are functions of the third-order displacement gradients, with coefficients dependent on the subset size, the grid step and the strain window size. Simulated experiments have been carried out to verify the reliability. Besides, a convenient way to approximately evaluate the displacement systematic errors is proposed, by comparing displacement results measured by the DIC method with different subset sizes.

  18. Covariate measurement error correction methods in mediation analysis with failure time data.

    PubMed

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk.
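
    The regression calibration idea used here, shown in its simplest classical-error form rather than the paper's mean-variance or follow-up time variants, replaces the error-prone covariate with its conditional expectation. A minimal sketch:

    ```python
    import numpy as np

    def regression_calibration(w, reliability):
        """Replace an error-prone covariate W = X + U (classical additive error)
        with E[X | W] = mean(W) + reliability * (W - mean(W)), where
        reliability = var(X) / var(W) is estimated, e.g., from replicate measurements."""
        w = np.asarray(w, dtype=float)
        return w.mean() + reliability * (w - w.mean())
    ```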

  19. Multipollutant measurement error in air pollution epidemiology studies arising from predicting exposures with penalized regression splines.

    PubMed

    Bergen, Silas; Sheppard, Lianne; Kaufman, Joel D; Szpiro, Adam A

    2016-11-01

    Air pollution epidemiology studies are trending towards a multi-pollutant approach. In these studies, exposures at subject locations are unobserved and must be predicted using observed exposures at misaligned monitoring locations. This induces measurement error, which can bias the estimated health effects and affect standard error estimates. We characterize this measurement error and develop an analytic bias correction when using penalized regression splines to predict exposure. Our simulations show that bias from multi-pollutant measurement error can be severe, with the biases for different pollutants acting in opposite directions or being simultaneously positive or negative. Our analytic bias correction combined with a non-parametric bootstrap yields accurate coverage of 95% confidence intervals. We apply our methodology to analyze the association of systolic blood pressure with PM2.5 and NO2 in the NIEHS Sister Study. We find that NO2 confounds the association of systolic blood pressure with PM2.5 and vice versa. Elevated systolic blood pressure was significantly associated with increased PM2.5 and decreased NO2. Correcting for measurement error bias strengthened these associations and widened 95% confidence intervals.

  20. Exact sampling of the unobserved covariates in Bayesian spline models for measurement error problems.

    PubMed

    Bhadra, Anindya; Carroll, Raymond J

    2016-07-01

    In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem and usually Gibbs sampling is not possible. This forces the practitioner to use a Metropolis-Hastings step, which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show that, for truncated polynomial spline or B-spline models of degree equal to one, the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate up to 62% and 54% increases in mean integrated squared error efficiency when compared to existing alternatives while using truncated polynomial splines and B-splines, respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating that the proposed method is a particularly valuable tool for challenging applications that present high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.
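
    The computational payoff is that each Gibbs update reduces to draws from a mixture of doubly truncated normals, for which standard samplers exist. A minimal sketch of the building block, with the mixture weights omitted:

    ```python
    from scipy.stats import truncnorm

    def sample_truncated_normal(mean, sd, lower, upper, rng=None):
        """One draw from Normal(mean, sd) truncated to [lower, upper]."""
        a, b = (lower - mean) / sd, (upper - mean) / sd  # standardized bounds
        return truncnorm.rvs(a, b, loc=mean, scale=sd, random_state=rng)
    ```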

  1. Study on position error of fiber positioning measurement system for LAMOST

    NASA Astrophysics Data System (ADS)

    Jin, Yi; Zhai, Chao; Xing, Xiaozheng; Teng, Yong; Hu, Hongzhuan

    2006-06-01

    An investigation of the measuring precision of the measurement system applied to the optical fiber positioning system for LAMOST was carried out. In the fiber positioning system, the geometrical coordinates of the fibers need to be measured in order to verify the precision of fiber positioning, which is one of the most pivotal problems. The measurement system consists of an area CCD sensor, an image acquisition card, a lens and a computer. Temperature, vibration, lens aberration and the CCD itself will probably cause measuring error. As fiber positioning is a dynamic process and the fibers are reversing, additional error is introduced. The paper focuses on analyzing the influence of different fiber states on measuring precision. The fibers were stuck together to keep their relative positions steady while rotating around the same point, and the distances between fibers were measured under different experimental conditions; the influence of the fibers' state was then obtained from the change in distances. The influence of different factors on position error is analyzed theoretically and experimentally. Position error can be decreased by changing the lens aperture setting and polishing the fibers.

  2. Correction of error in two-dimensional wear measurements of cemented hip arthroplasties.

    PubMed

    The, Bertram; Mol, Linda; Diercks, Ron L; van Ooijen, Peter M A; Verdonschot, Nico

    2006-01-01

    The irregularity of individual wear patterns of total hip prostheses seen during patient followup may result partially from differences in radiographic projection of the components between radiographs. A method to adjust for this source of error would increase the value of individual wear curves. We developed and tested a method to correct for this source of error. The influence of patient position on validity of wear measurements was investigated with controlled manipulation of a cadaveric pelvis. Without correction, the error exceeded 0.2 mm if differences in cup projection were as small as 5 degrees. When using the described correction method, cup positioning differences could be greater than 20 degrees before introducing an error exceeding 0.2 mm. For followup of patients in clinical practice, we recommend using the correction method to enhance accuracy of the results.

  3. Sideslip-induced static pressure errors in flight-test measurements

    NASA Technical Reports Server (NTRS)

    Parks, Edwin K.; Bach, Ralph E., Jr.; Tran, Duc

    1990-01-01

    During lateral flight-test maneuvers of a V/STOL research aircraft, large errors in static pressure were observed. An investigation of the data showed a strong correlation of the pressure record with variations in sideslip angle. The sensors for both measurements were located on a standard air-data nose boom. An algorithm based on potential flow over a cylinder that was developed to correct the pressure record for sideslip-induced errors is described. In order to properly apply the correction algorithm, it was necessary to estimate and correct the lag error in the pressure system. The method developed for estimating pressure lag is based on the coupling of sideslip activity into the static ports and can be used as a standard flight-test procedure. The estimation procedure is discussed and the corrected static-pressure record for a typical lateral maneuver is presented. It is shown that application of the correction algorithm effectively attenuates sideslip-induced errors.
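
    A hedged sketch of the kind of correction the abstract describes: potential flow over a cylinder gives a surface pressure coefficient Cp(theta) = 1 - 4 sin^2(theta), so the sideslip-induced static error scales with dynamic pressure times sin^2(beta). The port-placement constant below is the ideal value for ports 90 degrees from the crossflow stagnation line and would in practice be identified from flight data; this is not the paper's exact algorithm, and the lag correction it also applies is omitted.

    ```python
    import numpy as np

    def corrected_static_pressure(p_meas, q_dyn, beta_deg, k_port=-3.0):
        """Remove a sideslip-induced static pressure error modeled as
        dp = k_port * q_dyn * sin(beta)^2, motivated by the cylinder
        potential-flow result Cp(theta) = 1 - 4*sin(theta)^2."""
        dp = k_port * q_dyn * np.sin(np.radians(beta_deg)) ** 2
        return p_meas - dp
    ```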

  4. Error correction with machine learning: one man's syndrome measurement is another man's treasure

    NASA Astrophysics Data System (ADS)

    Combes, Joshua; Briegel, Hans; Caves, Carlton; Cesare, Christopher; Ferrie, Christopher; Milburn, Gerard; Tiersch, Markus

    2014-03-01

    Syndrome measurements made in quantum error correction contain more information than is typically used. We show, using the data from syndrome measurements that one has to perform anyway, the following: (1) a channel can be dynamically estimated; (2) in some situations the information gathered from the estimation can be used to permanently correct away part of the channel; and (3) the data allow us to perform hypothesis testing to determine whether the errors are correlated or whether the error rate exceeds the ``expected worst case.'' The unifying theme of these topics is making use of all of the information in the data collected from syndrome measurements with machine learning and control algorithms.

  5. Differential correction technique for removing common errors in gas filter radiometer measurements

    NASA Technical Reports Server (NTRS)

    Wallio, H. A.; Chan, Caroline C.; Gormsen, Barbara B.; Reichle, Henry G., Jr.

    1992-01-01

    The Measurement of Air Pollution from Satellites (MAPS) gas filter radiometer experiment was designed to measure CO mixing ratios in the earth's atmosphere. MAPS also measures N2O to provide a reference channel for the atmospheric emitting temperature and to detect the presence of clouds. In this paper we formulate equations to correct the radiometric signals based on the spatial and temporal uniformity of the N2O mixing ratio in the atmosphere. Results of an error study demonstrate that these equations reduce the error in inferred CO mixing ratios. Subsequent application of the technique to the MAPS 1984 data set decreases the error in the frequency distribution of mixing ratios and increases the number of usable data points.

  6. An Empirical Study for Impacts of Measurement Errors on EHR based Association Studies

    PubMed Central

    Duan, Rui; Cao, Ming; Wu, Yonghui; Huang, Jing; Denny, Joshua C; Xu, Hua; Chen, Yong

    2016-01-01

    Over the last decade, Electronic Health Record (EHR) systems have been increasingly implemented at US hospitals. Despite their great potential, the complex and uneven nature of clinical documentation and data quality brings additional challenges for analyzing EHR data. A critical challenge is the information bias due to measurement errors in outcomes and covariates. We conducted empirical studies to quantify the impact of this information bias on association studies. Specifically, we designed our simulation studies based on the characteristics of the Electronic Medical Records and Genomics (eMERGE) Network. Through simulation studies, we quantified the loss of power due to misclassifications in case ascertainment and measurement errors in covariate status extraction, with respect to different levels of misclassification rates, disease prevalence, and covariate frequencies. These empirical findings can help investigators better understand the potential power loss due to misclassification and measurement errors under a variety of conditions in EHR-based association studies. PMID:28269935

  7. Linear Increments with Non-monotone Missing Data and Measurement Error.

    PubMed

    Seaman, Shaun R; Farewell, Daniel; White, Ian R

    2016-12-01

    Linear increments (LI) are used to analyse repeated outcome data with missing values. Previously, two LI methods have been proposed, one allowing non-monotone missingness but not independent measurement error and one allowing independent measurement error but only monotone missingness. In both, it was suggested that the expected increment could depend on current outcome. We show that LI can allow non-monotone missingness and either independent measurement error of unknown variance or dependence of expected increment on current outcome but not both. A popular alternative to LI is a multivariate normal model ignoring the missingness pattern. This gives consistent estimation when data are normally distributed and missing at random (MAR). We clarify the relation between MAR and the assumptions of LI and show that for continuous outcomes multivariate normal estimators are also consistent under (non-MAR and non-normal) assumptions not much stronger than those of LI. Moreover, when missingness is non-monotone, they are typically more efficient.

  8. Measurement error models in chemical mass balance analysis of air quality data

    NASA Astrophysics Data System (ADS)

    Christensen, William F.; Gunst, Richard F.

    The chemical mass balance (CMB) equations have been used to apportion observed pollutant concentrations to their various pollution sources. Typical analyses incorporate estimated pollution source profiles, estimated source profile error variances, and error variances associated with the ambient measurement process. Often the CMB model is fit to the data using an iteratively re-weighted least-squares algorithm to obtain the effective variance solution. We consider the chemical mass balance model within the framework of the statistical measurement error model (e.g., Fuller, W.A., Measurement Error Models, Wiley, New York, 1987), and we illustrate that the models assumed by each of the approaches to the CMB equations are in fact special cases of a general measurement error model. We compare alternative source contribution estimators with the commonly used effective variance estimator when standard assumptions are valid and when such assumptions are violated. Four approaches for source contribution estimation and inference are compared using computer simulation: weighted least squares (with standard errors adjusted for source profile error), the effective variance approach of Watson et al. (Atmos. Environ., 18, 1984, 1347), the Britt and Luecke (Technometrics, 15, 1973, 233) approach, and a method of moments approach given in Fuller (1987, p. 193). For the scenarios we consider, the simplistic weighted least-squares approach performs as well as the more widely used effective variance solution in most cases, and is slightly superior to the effective variance solution when source profile variability is large. The four estimation approaches are illustrated using real PM2.5 data from Fresno and the conclusions drawn from the computer simulation are validated.
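
    For concreteness, the effective variance solution referenced above is an iteratively re-weighted least-squares scheme in which the per-species weights depend on the current source contribution estimates. A minimal sketch, assuming Gaussian, uncorrelated errors:

    ```python
    import numpy as np

    def cmb_effective_variance(c, F, sigma_c, sigma_F, n_iter=20):
        """Effective variance CMB solution via iteratively re-weighted least squares.
        c: ambient concentrations (n_species,); F: source profile matrix
        (n_species, n_sources); sigma_c, sigma_F: their one-sigma uncertainties."""
        s = np.linalg.lstsq(F, c, rcond=None)[0]            # unweighted start
        for _ in range(n_iter):
            v_eff = sigma_c**2 + (sigma_F**2) @ (s**2)      # per-species effective variance
            W = np.diag(1.0 / v_eff)
            s = np.linalg.solve(F.T @ W @ F, F.T @ W @ c)   # weighted least squares
        return s
    ```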

  9. Normal contour error measurement on-machine and compensation method for polishing complex surface by MRF

    NASA Astrophysics Data System (ADS)

    Chen, Hua; Chen, Jihong; Wang, Baorui; Zheng, Yongcheng

    2016-10-01

    The magnetorheological finishing (MRF) process, based on the dwell time method with constant normal spacing for flexible polishing, introduces normal contour error when fine-polishing complex surfaces such as aspheric surfaces. The normal contour error changes the ribbon's shape and the consistency of the removal characteristics in MRF. A novel method is put forward in which a laser range finder continuously scans the normal spacing between itself and the workpiece, so that the normal contour errors are measured on the machining track while polishing a complex surface. The normal contour errors were measured dynamically, allowing the workpiece's clamping precision, the multi-axis machining NC program and the dynamic performance of the MRF machine to be verified and checked for the security of the MRF process. A unit for measuring the normal contour errors of complex surfaces on-machine was designed. Using the measurement unit's results as feedback to adjust the parameters of the feed-forward control and the multi-axis machining, an optimized servo control method is presented to compensate the normal contour errors. An experiment polishing a 180 mm × 180 mm aspheric workpiece of fused silica by MRF was set up to validate the method. The results show that the normal contour error was controlled to less than 10 μm, and the PV value of the polished surface accuracy improved from 0.95λ to 0.09λ under the same process parameters. The technology described in this paper has been applied in the PKC600-Q1 MRF machine developed by the China Academy of Engineering Physics for engineering applications since 2014, where it is used in national large-scale optical engineering for processing ultra-precision optical parts.

  10. Importance of regression processes in evaluating analytical errors in argon isotope measurements

    NASA Astrophysics Data System (ADS)

    Min, K.; Powell, L.

    2003-04-01

    40Ar/39Ar dating requires measuring the five argon isotopes 36Ar-40Ar with high precision. The process involves isolating the purified gas in an analytical volume and cyclically measuring the abundance of each Ar isotope using an electron multiplier to minimize detector calibration and sensitivity errors. Each cycle is composed of up to several tens of fundamental digital voltmeter (DVM) readings per isotope. Since the abundance of each isotope varies over the analytical time, it is necessary to treat the data statistically to obtain the most probable estimates. The readings on one mass from one cycle are commonly averaged and treated as a single data point for regression. The y-intercept derived from the regression is assumed to represent the initial isotopic abundance at the time (t0) when the gas was introduced to the analytical volume. This procedure is repeated for each Ar isotope. About 0.2% precision is often claimed for 40Ar and 39Ar measurements of properly irradiated, K-rich samples. The uncertainty of the calculated y-intercept varies depending on the distribution of the averaged DVM readings as well as the model equation used in the regression. The "internal error" associated with the distribution of the individual DVM readings within each group average is, however, commonly ignored in the regression procedure, probably because of the complexity of the weighting process. Including the internal error may significantly increase the uncertainties of 40Ar/39Ar ages, especially for young samples, because the analytical errors (from isotopic ratio measurements) are more dominant than the systematic errors (from the decay constant, the age of the neutron flux monitor, etc.). An alternative way to include the internal error is to regress all of the DVM readings with a single equation, then propagate the regression error into the y-intercept calculation. In any case, it is necessary to propagate the uncertainties derived from the fundamental readings to properly estimate analytical errors in 40Ar/39Ar age
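
    The intercept extrapolation is a weighted linear regression. This sketch shows one way to propagate per-reading ("internal") uncertainties into the y-intercept error, which is the propagation step the abstract argues should not be skipped; a linear evolution model is assumed for simplicity.

    ```python
    import numpy as np

    def intercept_with_error(t, v, sigma_v):
        """Weighted linear fit v = a + b*t; returns the y-intercept (signal at t0)
        and its standard error, propagating per-reading uncertainties sigma_v."""
        t, v = np.asarray(t, float), np.asarray(v, float)
        w = 1.0 / np.asarray(sigma_v, float) ** 2
        X = np.column_stack([np.ones_like(t), t])
        cov = np.linalg.inv(X.T @ (w[:, None] * X))  # covariance of (a, b)
        a, b = cov @ (X.T @ (w * v))
        return a, np.sqrt(cov[0, 0])
    ```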

  11. Theoretical computation of trace gases retrieval random error from measurements of high spectral resolution infrared sounder

    NASA Technical Reports Server (NTRS)

    Huang, Hung-Lung; Smith, William L.; Woolf, Harold M.; Theriault, J. M.

    1991-01-01

    The purpose of this paper is to demonstrate the trace gas profiling capabilities of future passive high spectral resolution (1 cm(exp -1) or better) infrared (600 to 2700 cm(exp -1)) satellite tropospheric sounders. These sounders, such as the grating spectrometer, Atmospheric InfRared Sounder (AIRS) (Chahine et al., 1990), and the interferometer, GOES High Resolution Interferometer Sounder (GHIS) (Smith et al., 1991), can provide the unique infrared spectra which enable us to conduct this analysis. In this calculation only the total random retrieval error component is presented. The systematic error components contributed by the forward and inverse model errors are not considered (a subject of further studies). The total random errors, which are composed of null space error (vertical resolution component error) and measurement error (instrument noise component error), are computed by assuming one wavenumber spectral resolution with a wavenumber span from 1100 cm(exp -1) to 2300 cm(exp -1) (the band 600 cm(exp -1) to 1100 cm(exp -1) is not used since the three gases have no major absorption there) and a measurement noise of 0.25 K at a reference temperature of 260 K. Temperature, water vapor, ozone and mixing ratio profiles of nitrous oxide, carbon monoxide and methane are taken from 1976 US Standard Atmosphere conditions (a FASCODE model). Covariance matrices of the gases are 'subjectively' generated by assuming 50 percent standard deviation of Gaussian perturbation with respect to their US Standard model profiles. Minimum information and maximum likelihood retrieval solutions are used.
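
    The decomposition into null space (smoothing) and measurement-noise components has a standard linear form in optimal estimation (Rodgers-style). The sketch below is that generic formulation, offered as an analogue of the paper's minimum information and maximum likelihood solutions rather than its exact computation.

    ```python
    import numpy as np

    def retrieval_error_components(K, S_eps, S_a):
        """Split total random retrieval error into measurement-noise and
        null-space (smoothing) parts for a linear retrieval.
        K: Jacobian (n_channels, n_levels); S_eps: noise covariance;
        S_a: a priori (profile) covariance."""
        G = S_a @ K.T @ np.linalg.inv(K @ S_a @ K.T + S_eps)  # gain matrix
        A = G @ K                                             # averaging kernel
        I = np.eye(A.shape[0])
        S_meas = G @ S_eps @ G.T               # instrument-noise component
        S_null = (I - A) @ S_a @ (I - A).T     # vertical-resolution component
        return S_meas, S_null
    ```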

  12. Minimizing systematic errors in phytoplankton pigment concentration derived from satellite ocean color measurements

    SciTech Connect

    Martin, D.L.

    1992-01-01

    Water-leaving radiances and phytoplankton pigment concentrations are calculated from Coastal Zone Color Scanner (CZCS) total radiance measurements by separating atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. Multiple scattering interactions between Rayleigh and aerosol components together with other meteorologically-moderated radiances cause systematic errors in calculated water-leaving radiances and produce errors in retrieved phytoplankton pigment concentrations. This thesis developed techniques which minimize the effects of these systematic errors in Level IIA CZCS imagery. Results of previous radiative transfer modeling by Gordon and Castano are extended to predict the pixel-specific magnitude of systematic errors caused by Rayleigh-aerosol multiple scattering interactions. CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere are simulated mathematically and radiance-retrieval errors are calculated for a range of aerosol optical depths. Pixels which exceed an error threshold in the simulated CZCS image are rejected in a corresponding actual image. Meteorological phenomena also cause artifactual errors in CZCS-derived phytoplankton pigment concentration imagery. Unless data contaminated with these effects are masked and excluded from analysis, they will be interpreted as containing valid biological information and will contribute significantly to erroneous estimates of phytoplankton temporal and spatial variability. A method is developed which minimizes these errors through a sequence of quality-control procedures including the calculation of variable cloud-threshold radiances, the computation of the extent of electronic overshoot from bright reflectors, and the imposition of a buffer zone around clouds to exclude contaminated data.

  13. Measurement Errors in Microbial Water Quality Assessment: the Case of Bacterial Aggregates

    NASA Astrophysics Data System (ADS)

    Plancherel, Y.; Cowen, J. P.

    2004-12-01

    The quantification of the risk of illness for swimmers, bathers, or consumers exposed to a polluted water body involves the measurement of microbial indicator organism densities. Depending on the organism targeted, there exist two widely used (traditional) techniques for their enumeration: most probable number (MPN) and membrane filtration (MF). Estimation of indicator organism density by these traditional methods is subject to large measurement error, which translates into poorly constrained relationships between indicator organism density and illness rate. Neither the MPN nor the MF method can discriminate multiple cells that form an aggregate. Mathematical formulations and computer simulations are used to investigate the effects that bacterial clumps have on the measurement error of the concentrations. The first case considered is that of the formation of clusters induced during the membrane filtration process assuming a randomly distributed population of cells growing into colonies. The computer simulations indicate that this process induces a typical measurement error <15% with the MF method. Replication of the MF measurements does not reduce this type of error. The second case describes a mathematical framework for the modeling of particle-associated bacteria. When aggregates harboring bacteria are present in a sample, an additional measurement error of 5-35% is expected. Empirical results from laboratory and field experiments enumerating aggregated bacteria using the MF method agree well with these model values. Furthermore, the data reveal that this type of error depends on the microbial indicators used (Enterococcus, C. perfringens, Heterotrophic Plate Count bacteria) and highlights the importance of small bacterial clusters (<5 μm).

  14. On the impact of covariate measurement error on spatial regression modelling

    PubMed Central

    Huque, Md Hamidul; Bondell, Howard; Ryan, Louise

    2015-01-01

    Spatial regression models have grown in popularity in response to rapid advances in GIS (Geographic Information Systems) technology that allows epidemiologists to incorporate geographically indexed data into their studies. However, it turns out that there are some subtle pitfalls in the use of these models. We show that the presence of covariate measurement error can lead to significant sensitivity of parameter estimation to the choice of spatial correlation structure. We quantify the effect of measurement error on parameter estimates, and then suggest two different ways to produce consistent estimates. We evaluate the methods through a simulation study. These methods are then applied to data on Ischemic Heart Disease (IHD). PMID:25729267

  15. Quantitative analyses of spectral measurement error based on Monte-Carlo simulation

    NASA Astrophysics Data System (ADS)

    Jiang, Jingying; Ma, Congcong; Zhang, Qi; Lu, Junsheng; Xu, Kexin

    2015-03-01

    The spectral measurement error is controlled by the resolution and sensitivity of the spectroscopic instrument and by the instability of the involved environment. In this talk, the spectral measurement error is analyzed quantitatively using Monte Carlo (MC) simulation. Taking the floating reference point measurement as an example, there is unavoidably a deviation between the measuring position and the theoretical position due to various influencing factors. In order to determine the error caused by the positioning accuracy of the measuring device, an MC simulation was carried out at a wavelength of 1310 nm for a 2% Intralipid solution. The MC simulation was performed with 10^10 photons and a ring sampling interval of 1 μm. The data from the MC simulation are analyzed on the basis of the thinning and calculating method (TCM) proposed in this talk. The results indicate that TCM can be used to quantitatively analyze the spectral measurement error caused by positioning inaccuracy.

  16. System Error Compensation Methodology Based on a Neural Network for a Micromachined Inertial Measurement Unit.

    PubMed

    Liu, Shi Qiang; Zhu, Rong

    2016-01-29

    Error compensation of micromachined inertial measurement units (MIMU) is essential in practical applications. This paper presents a new compensation method using a neural-network-based identification for MIMU, which capably solves the universal problems of cross-coupling, misalignment, eccentricity, and other deterministic errors existing in a three-dimensional integrated system. Using a neural network to model a complex multivariate and nonlinear coupling system, the errors can be readily compensated through a comprehensive calibration. In this paper, we also present a thermal-gas MIMU based on thermal expansion, which measures three-axis angular rates and three-axis accelerations using only three thermal-gas inertial sensors, each of which capably measures one-axis angular rate and one-axis acceleration simultaneously in one chip. The developed MIMU (100 × 100 × 100 mm³) possesses the advantages of simple structure, high shock resistance, and large measuring ranges (three-axis angular rates of ±4000°/s and three-axis accelerations of ±10 g) compared with conventional MIMU, due to using a gas medium instead of a mechanical proof mass as the key moving and sensing element. However, the gas MIMU suffers from cross-coupling effects, which corrupt the system accuracy. The proposed compensation method is, therefore, applied to compensate the system errors of the MIMU. Experiments validate the effectiveness of the compensation, and the measurement errors of three-axis angular rates and three-axis accelerations are reduced to less than 1% and 3% of the uncompensated errors in the rotation range of ±600°/s and the acceleration range of ±1 g, respectively.
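
    A toy version of the neural-network compensation step, using synthetic cross-coupled data and scikit-learn in place of whatever framework the authors used; the coupling matrix, bias, and network size are invented for illustration only.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    truth = rng.uniform(-1.0, 1.0, (2000, 6))       # reference rates and accelerations
    coupling = np.eye(6) + 0.05 * rng.standard_normal((6, 6))
    raw = truth @ coupling + 0.1                    # cross-coupled outputs plus bias

    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
    net.fit(raw, truth)                             # learn the inverse error mapping
    print(f"mean residual: {np.abs(net.predict(raw) - truth).mean():.4f}")
    ```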

  17. System Error Compensation Methodology Based on a Neural Network for a Micromachined Inertial Measurement Unit

    PubMed Central

    Liu, Shi Qiang; Zhu, Rong

    2016-01-01

    Error compensation of micromachined inertial measurement units (MIMU) is essential in practical applications. This paper presents a new compensation method using neural-network-based identification for the MIMU, which effectively addresses the universal problems of cross-coupling, misalignment, eccentricity, and other deterministic errors existing in a three-dimensional integrated system. By using a neural network to model the complex multivariate, nonlinear coupling of the system, the errors can be readily compensated through a comprehensive calibration. In this paper, we also present a thermal-gas MIMU based on thermal expansion, which measures three-axis angular rates and three-axis accelerations using only three thermal-gas inertial sensors, each of which measures one-axis angular rate and one-axis acceleration simultaneously on a single chip. The developed MIMU (100 × 100 × 100 mm³) possesses the advantages of simple structure, high shock resistance, and large measuring ranges (three-axis angular rates of ±4000°/s and three-axis accelerations of ±10 g) compared with conventional MIMUs, because it uses a gas medium instead of a mechanical proof mass as the key moving and sensing element. However, the gas MIMU suffers from cross-coupling effects, which corrupt the system accuracy. The proposed compensation method is therefore applied to compensate the system errors of the MIMU. Experiments validate the effectiveness of the compensation: the measurement errors of the three-axis angular rates and three-axis accelerations are reduced to less than 1% and 3% of the uncompensated errors in the rotation range of ±600°/s and the acceleration range of ±1 g, respectively. PMID:26840314

  18. Exponential Decay of Reconstruction Error from Binary Measurements of Sparse Signals

    DTIC Science & Technology

    2014-08-01

    Exponential decay of reconstruction error from binary measurements of sparse signals. Richard Baraniuk, Simon Foucart, Deanna Needell, Yaniv Plan, Mary Wootters. Abstract: Binary measurements arise naturally in a variety of … greatly improve the ability to reconstruct a signal from binary measurements. This is exemplified by one-bit compressed sensing, which takes the …

  19. 50 CFR 648.73 - Surfclam and ocean quahog Accountability Measures.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    Title 50, Wildlife and Fisheries (2014-10-01 edition): … Management Measures for the Atlantic Surf Clam and Ocean Quahog Fisheries, § 648.73 Surfclam and ocean quahog Accountability Measures. (a) Commercial ITQ fishery. (1) If the ACL for surfclam or ocean quahog is exceeded, …

  20. 50 CFR 648.73 - Surfclam and ocean quahog Accountability Measures.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    Title 50, Wildlife and Fisheries (2012-10-01 edition): … Management Measures for the Atlantic Surf Clam and Ocean Quahog Fisheries, § 648.73 Surfclam and ocean quahog Accountability Measures. (a) Commercial ITQ fishery. (1) If the ACL for surfclam or ocean quahog is exceeded, …

  1. 50 CFR 648.73 - Surfclam and ocean quahog Accountability Measures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    Title 50, Wildlife and Fisheries (2013-10-01 edition): … Management Measures for the Atlantic Surf Clam and Ocean Quahog Fisheries, § 648.73 Surfclam and ocean quahog Accountability Measures. (a) Commercial ITQ fishery. (1) If the ACL for surfclam or ocean quahog is exceeded, …

  2. 48 CFR 9904.412 - Cost accounting standard for composition and measurement of pension cost.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    Title 48, Federal Acquisition Regulations System (2010-10-01 edition): Section 9904.412, Cost accounting standard for composition and measurement of pension cost. …

  3. Higher Education Counts: Accountability Measures for the New Millennium. 2005 Report

    ERIC Educational Resources Information Center

    Connecticut Department of Higher Education (NJ1), 2005

    2005-01-01

    "Higher Education Counts" is the annual accountability report on Connecticut's state system of higher education, as required under Connecticut General Statutes Section 10a-6a. The report contains accountability measures developed through the Performance Measures Task Force and approved by the Board of Governors for Higher Education. The…

  4. Computational fluid dynamics analysis and experimental study of a low measurement error temperature sensor used in climate observation.

    PubMed

    Yang, Jie; Liu, Qingquan; Dai, Wei

    2017-02-01

    To improve air temperature observation accuracy, a low measurement error temperature sensor is proposed. A computational fluid dynamics (CFD) method is implemented to obtain temperature errors under various environmental conditions, and a temperature error correction equation is then obtained by fitting the CFD results using a genetic algorithm. The low measurement error temperature sensor, a naturally ventilated radiation shield, a thermometer screen, and an aspirated temperature measurement platform were operated in the same environment for an intercomparison, with the aspirated platform serving as the air temperature reference. The mean temperature errors of the naturally ventilated radiation shield and the thermometer screen are 0.74 °C and 0.37 °C, respectively. In contrast, the mean temperature error of the low measurement error temperature sensor is 0.11 °C. The mean absolute error and the root mean square error between the corrected results and the measured results are 0.008 °C and 0.01 °C, respectively. The correction equation reduces the temperature error of the low measurement error temperature sensor by approximately 93.8%. The proposed sensor may help provide more accurate air temperature measurements.

  5. Computational fluid dynamics analysis and experimental study of a low measurement error temperature sensor used in climate observation

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Liu, Qingquan; Dai, Wei

    2017-02-01

    To improve air temperature observation accuracy, a low measurement error temperature sensor is proposed. A computational fluid dynamics (CFD) method is implemented to obtain temperature errors under various environmental conditions, and a temperature error correction equation is then obtained by fitting the CFD results using a genetic algorithm. The low measurement error temperature sensor, a naturally ventilated radiation shield, a thermometer screen, and an aspirated temperature measurement platform were operated in the same environment for an intercomparison, with the aspirated platform serving as the air temperature reference. The mean temperature errors of the naturally ventilated radiation shield and the thermometer screen are 0.74 °C and 0.37 °C, respectively. In contrast, the mean temperature error of the low measurement error temperature sensor is 0.11 °C. The mean absolute error and the root mean square error between the corrected results and the measured results are 0.008 °C and 0.01 °C, respectively. The correction equation reduces the temperature error of the low measurement error temperature sensor by approximately 93.8%. The proposed sensor may help provide more accurate air temperature measurements.
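
    As a rough illustration of the fit-a-correction-equation step described above, the sketch below fits a two-parameter error model to synthetic (solar radiation, wind speed) → temperature-error data using an evolutionary global optimizer (scipy's differential_evolution) standing in for the paper's genetic algorithm; the functional form, parameter values, and data are invented for illustration, not the paper's CFD results:

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)

# Synthetic "CFD" data: radiation S (W/m^2), wind speed u (m/s), and a
# radiation-driven, ventilation-suppressed temperature error (assumed form).
S = rng.uniform(0, 1000, 200)
u = rng.uniform(0.5, 10, 200)
err_obs = 0.002 * S / (1.0 + 0.8 * u) + rng.normal(0, 0.01, 200)

def model(theta, S, u):
    a, b = theta
    return a * S / (1.0 + b * u)

def cost(theta):
    # Mean squared misfit between the candidate correction and the data.
    return np.mean((model(theta, S, u) - err_obs) ** 2)

res = differential_evolution(cost, bounds=[(0, 0.01), (0, 5)], seed=1)
print("fitted a, b:", res.x)   # should recover roughly (0.002, 0.8)
```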

  6. Improved modeling of multivariate measurement errors based on the Wishart distribution.

    PubMed

    Wentzell, Peter D; Cleary, Cody S; Kompany-Zareh, M

    2017-03-22

    The error covariance matrix (ECM) is an important tool for characterizing the errors from multivariate measurements, representing both the variance and covariance in the errors across multiple channels. Such information is useful in understanding and minimizing sources of experimental error and in the selection of optimal data analysis procedures. Experimental ECMs, normally obtained through replication, are inherently noisy, inconvenient to obtain, and offer limited interpretability. Significant advantages can be realized by building a model for the ECM based on established error types. Such models are less noisy, reduce the need for replication, mitigate mathematical complications such as matrix singularity, and provide greater insights. While the fitting of ECM models using least squares has been previously proposed, the present work establishes that fitting based on the Wishart distribution offers a much better approach. Simulation studies show that the Wishart method results in parameter estimates with a smaller variance and also facilitates the statistical testing of alternative models using a parametric bootstrap method. The new approach is applied to fluorescence emission data to establish the acceptability of various models containing error terms related to offset, multiplicative offset, shot noise and uniform independent noise. The implications of the number of replicates, as well as of single vs. multiple replicate sets, are also described.
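
    A minimal sketch of Wishart-based ECM model fitting, assuming a simple two-term error model (uniform independent noise plus a fully correlated offset term); the parameterization and data are illustrative, not the paper's:

```python
import numpy as np
from scipy.stats import wishart
from scipy.optimize import minimize

rng = np.random.default_rng(2)
p, n = 5, 50                      # channels, replicates

# Simulate replicates with true ECM = sigma^2*I + tau^2*11'
# (independent noise plus a common offset error).
ones = np.ones((p, p))
Sigma_true = 0.5**2 * np.eye(p) + 0.3**2 * ones
X = rng.multivariate_normal(np.zeros(p), Sigma_true, size=n)
S = np.cov(X, rowvar=False)       # experimental (noisy) ECM

def neg_loglik(theta):
    sigma2, tau2 = np.exp(theta)  # log-parameterization keeps both > 0
    Sigma = sigma2 * np.eye(p) + tau2 * ones
    # (n-1)*S follows a Wishart(df=n-1, scale=Sigma) distribution.
    return -wishart.logpdf((n - 1) * S, df=n - 1, scale=Sigma)

res = minimize(neg_loglik, x0=np.log([0.1, 0.1]), method="Nelder-Mead")
print("fitted variances:", np.exp(res.x))   # approx (0.25, 0.09)
```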

  7. Observation of spectrum effect on the measurement of intrinsic error field on EAST

    NASA Astrophysics Data System (ADS)

    Wang, Hui-Hui; Sun, You-Wen; Qian, Jin-Ping; Shi, Tong-Hui; Shen, Biao; Gu, Shuai; Liu, Yue-Qiang; Guo, Wen-Feng; Chu, Nan; He, Kai-Yang; Jia, Man-Ni; Chen, Da-Long; Xue, Min-Min; Ren, Jie; Wang, Yong; Sheng, Zhi-Cai; Xiao, Bing-Jia; Luo, Zheng-Ping; Liu, Yong; Liu, Hai-Qing; Zhao, Hai-Lin; Zeng, Long; Gong, Xian-Zu; Liang, Yun-Feng; Wan, Bao-Nian; The EAST Team

    2016-06-01

    Intrinsic error field on EAST is measured using the ‘compass scan’ technique with different n = 1 magnetic perturbation coil configurations in ohmically heated discharges. The intrinsic error field measured using a non-resonant dominated spectrum with even connection of the upper and lower resonant magnetic perturbation coils is of the order b_r^{2,1}/B_T ≃ 10^-5, and the toroidal phase of the intrinsic error field is around 60°. A clear difference between the results using the two coil configurations, resonant and non-resonant dominated spectra, is observed. The ‘resonant’ and ‘non-resonant’ terminology is based on vacuum modeling. The penetration thresholds of the non-resonant dominated cases are much smaller than those of the resonant cases. The difference in penetration thresholds between the resonant and non-resonant cases is reduced by plasma response modeling using the MARS-F code.

  8. Nano-metrology: The art of measuring X-ray mirrors with slope errors <100 nrad.

    PubMed

    Alcock, Simon G; Nistea, Ioana; Sawhney, Kawal

    2016-05-01

    We present a comprehensive investigation of the systematic and random errors of the nano-metrology instruments used to characterize synchrotron X-ray optics at Diamond Light Source. With experimental skill and careful analysis, we show that these instruments used in combination are capable of measuring state-of-the-art X-ray mirrors. Examples are provided of how Diamond metrology data have helped to achieve slope errors of <100 nrad for optical systems installed on synchrotron beamlines, including iterative correction of substrates using ion beam figuring and optimal clamping of monochromator grating blanks in their holders. Simulations demonstrate how random noise from the Diamond-NOM's autocollimator contributes to the overall measured value of the mirror's slope error, and thus predict how many averaged scans are required to accurately characterize different grades of mirror.
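
    The averaging argument is the usual 1/sqrt(N) reduction of independent random noise; a toy calculation under an assumed per-scan noise level (the 50 nrad figure is illustrative, not Diamond's):

```python
import numpy as np

noise_per_scan = 50.0   # nrad RMS random noise per scan, assumed
target = 10.0           # desired noise contribution, nrad RMS, assumed

# RMS noise after averaging N independent scans falls as 1/sqrt(N).
for n in (1, 4, 9, 16, 25):
    print(f"{n:2d} scans -> {noise_per_scan / np.sqrt(n):5.1f} nrad")

n_required = int(np.ceil((noise_per_scan / target) ** 2))
print("scans required to reach target:", n_required)   # 25
```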

  9. Objective Error Criterion for Evaluation of Mapping Accuracy Based on Sensor Time-of-Flight Measurements.

    PubMed

    Barshan, Billur

    2008-12-15

    An objective error criterion is proposed for evaluating the accuracy of maps of unknown environments acquired by making range measurements with different sensing modalities and processing them with different techniques. The criterion can also be used for the assessment of goodness of fit of curves or shapes fitted to map points. A demonstrative example from ultrasonic mapping is given based on experimentally acquired time-of-flight measurements and compared with a very accurate laser map, considered as absolute reference. The results of the proposed criterion are compared with the Hausdorff metric and the median error criterion results. The error criterion is sufficiently general and flexible that it can be applied to discrete point maps acquired with other mapping techniques and sensing modalities as well.
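
    For comparison criteria of this kind, the directed Hausdorff metric and a nearest-neighbor median error between a measured map and a reference map can be computed directly; a sketch with made-up point sets (the maps and noise level are assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(3)

# Reference map (e.g., laser) and a noisier measured map (e.g., sonar).
reference = rng.uniform(0, 10, size=(500, 2))
measured = reference[:300] + rng.normal(0, 0.05, size=(300, 2))

# Hausdorff metric: worst-case mismatch between the two point sets.
h = max(directed_hausdorff(measured, reference)[0],
        directed_hausdorff(reference, measured)[0])

# Median error criterion: median nearest-neighbor distance
# from each measured point to the reference map.
dists, _ = cKDTree(reference).query(measured)
print(f"Hausdorff: {h:.3f}, median error: {np.median(dists):.3f}")
```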

  10. Irradiance measurement errors due to the assumption of a Lambertian reference panel

    NASA Technical Reports Server (NTRS)

    Kimes, D. S.; Kirchner, J. A.

    1982-01-01

    A technique is presented for determining the error in diurnal irradiance measurements that results from the non-Lambertian behavior of a reference panel under various irradiance conditions. Spectral biconical reflectance factors of a spray-painted barium sulfate panel, along with simulated sky radiance data for clear and hazy skies at six solar zenith angles, were used to calculate the estimated panel irradiances and true irradiances for a nadir-looking sensor in two wavelength bands. The inherent errors in total spectral irradiance (0.68 microns) for a clear sky were 0.60, 6.0, 13.0, and 27.0% for solar zenith angles of 0, 45, 60, and 75 deg, respectively. The technique can be used to characterize the error of a specific panel used in field measurements, and thus eliminate any ambiguity of the effects of the type, preparation, and aging of the paint.

  11. Accountability in Higher Education: Are There "Fatal Errors" Embedded in Current U.S. Policy Affecting Higher Education?

    ERIC Educational Resources Information Center

    Grantham, Marilyn H.

    Some observers of political phenomena are referring to the 1990s as the "age of accountability." Early in the decade of the '90s, articles in periodicals, professional journals and other sources were voicing warnings about increasing public policymaker frustration with higher education and the spreading development and implementation of…

  12. A method to account for the temperature sensitivity of TCCON total column measurements

    NASA Astrophysics Data System (ADS)

    Niebling, Sabrina G.; Wunch, Debra; Toon, Geoffrey C.; Wennberg, Paul O.; Feist, Dietrich G.

    2014-05-01

    The Total Carbon Column Observing Network (TCCON) consists of ground-based Fourier Transform Spectrometer (FTS) systems all around the world. It achieves better than 0.25% precision and accuracy for total column measurements of CO2 [Wunch et al. (2011)]. In recent years, the TCCON data processing and retrieval software (GGG) has been steadily improved (e.g., ghost correction, improved a priori profiles, more accurate spectroscopy). However, a small error is also introduced by insufficient knowledge of the true temperature profile in the atmosphere above the individual instruments. This knowledge is crucial for retrieving highly precise gas concentrations. In the current version of the retrieval software, we use six-hourly NCEP reanalysis data to produce one temperature profile at local noon for each measurement day. For sites in the mid-latitudes, which can have a large diurnal variation of temperature in the lowermost kilometers of the atmosphere, this approach can lead to small errors in the retrieved total column gas concentration. Here, we present and describe a method to account for the temperature sensitivity of the total column measurements. We exploit the fact that H2O is most abundant in the lowermost kilometers of the atmosphere, where the largest diurnal temperature variations occur. We use single H2O absorption lines with different temperature sensitivities to gain information about the temperature variations over the course of the day. This information is used to apply an a posteriori correction to the retrieved total column gas concentration. In addition, we show that the a posteriori temperature correction is effective by applying it to data from Lamont, Oklahoma, USA (36.6°N, 97.5°W). We chose this site because regular radiosonde launches with a time resolution of six hours provide detailed information on the actual temperature in the atmosphere and allow us to test the effectiveness of our correction.

  13. Sinusoidal Siemens star spatial frequency response measurement errors due to misidentified target centers

    SciTech Connect

    Birch, Gabriel Carisle; Griffin, John Clark

    2015-07-23

    Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.

  14. Sinusoidal Siemens star spatial frequency response measurement errors due to misidentified target centers

    DOE PAGES

    Birch, Gabriel Carisle; Griffin, John Clark

    2015-07-23

    Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.

  15. Sinusoidal Siemens star spatial frequency response measurement errors due to misidentified target centers

    NASA Astrophysics Data System (ADS)

    Birch, Gabriel C.; Griffin, John C.

    2015-07-01

    Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. Using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.
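
    To see the mechanism numerically, the sketch below synthesizes a sinusoidal Siemens star, samples its radial intensity profile around both the true center and a slightly offset center, and compares the Fourier amplitude at the star's nominal frequency; all geometry values are illustrative assumptions:

```python
import numpy as np

N_CYCLES = 36          # angular cycles of the star, assumed
R = 100.0              # sampling radius in pixels, assumed

def star_intensity(x, y):
    # Sinusoidal Siemens star: intensity is sinusoidal in polar angle.
    return 0.5 + 0.5 * np.cos(N_CYCLES * np.arctan2(y, x))

def amplitude_at_star_frequency(cx, cy):
    # Radial profile sampled on a circle about an assumed center (cx, cy),
    # then the Fourier amplitude at the star's nominal angular frequency.
    phi = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
    prof = star_intensity(cx + R * np.cos(phi), cy + R * np.sin(phi))
    spec = np.fft.rfft(prof) / phi.size
    return 2 * np.abs(spec[N_CYCLES])

a_true = amplitude_at_star_frequency(0.0, 0.0)       # correct center
a_off = amplitude_at_star_frequency(2.0, 0.0)        # 2 px center error
print(f"measured SFR ratio: {a_off / a_true:.3f}")   # < 1: SFR is reduced
```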

  16. Measurement of straightness without Abbe error using an enhanced differential plane mirror interferometer.

    PubMed

    Jin, Tao; Ji, Hudong; Hou, Wenmei; Le, Yanfen; Shen, Lu

    2017-01-20

    This paper presents an enhanced differential plane mirror interferometer with high resolution for measuring straightness. Two sets of spatially symmetrical beams travel through the measurement and reference arms of the straightness interferometer, which contains three specific optical devices: a Koster prism, a wedge prism assembly, and a wedge mirror assembly. Changes in the optical path of the interferometer arms caused by straightness deviations are differential and are converted into a phase shift through the particular interferometer system. The interferometric beams have a completely common path and a spatially symmetrical measurement structure, so crosstalk from the Abbe error caused by pitch, yaw, and roll angles is avoided. The dead path error is minimized, which greatly enhances the stability and accuracy of the measurement. A measurement resolution of 17.5 nm is achieved, and the experimental results fit well with the theoretical analysis.

  17. Analysis of Hardened Depth Variability, Process Potential, and Measurement Error in Case Carburized Components

    NASA Astrophysics Data System (ADS)

    Rowan, Olga K.; Keil, Gary D.; Clements, Tom E.

    2014-12-01

    Hardened depth (effective case depth) measurement is one of the most commonly used methods for evaluating carburizing performance. Variation in direct hardened depth measurements is routinely assumed to represent the heat treat process variation, without properly correcting for the large uncertainty frequently observed in industrial laboratory measurements. These measurement uncertainties may also invalidate the application of statistical control requirements to hardened depth. Gage R&R studies were conducted at three different laboratories on shallow and deep case carburized components. The primary objectives were to understand the magnitude of the measurement uncertainty and the heat treat process variability, and to evaluate the practical applicability of statistical control methods to metallurgical quality assessment. It was found that ~75% of the overall hardened depth variation is attributable to the measurement error resulting from the accuracy limitation of microhardness equipment and the linear interpolation technique. The measurement error was found to be proportional to the hardened depth magnitude and may reach ~0.2 mm uncertainty at 1.3 mm nominal depth and ~0.8 mm uncertainty at 3.2 mm depth. A case study is discussed to explain a methodology for analyzing a large body of hardened depth information, determining the measurement error, and calculating the true heat treat process variation.
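
    Effective case depth is typically read off a microhardness traverse by linear interpolation to a threshold hardness; a sketch of that step, and of how per-indent hardness scatter propagates into depth uncertainty (the profile, threshold, and scatter below are illustrative assumptions, and the sketch assumes the noisy profile stays monotonic):

```python
import numpy as np

rng = np.random.default_rng(4)

depth = np.arange(0.2, 3.2, 0.2)            # traverse positions, mm
hardness = 60 - 12 * depth                  # toy hardness profile, HRC
THRESHOLD = 50.0                            # effective-depth criterion, HRC

def case_depth(h):
    # Linear interpolation of depth at the threshold hardness. np.interp
    # needs increasing x, so interpolate depth against falling hardness.
    return np.interp(THRESHOLD, h[::-1], depth[::-1])

print("nominal depth (mm):", case_depth(hardness))

# Propagate per-indent hardness scatter into the interpolated depth.
reps = np.array([case_depth(hardness + rng.normal(0, 1.0, depth.size))
                 for _ in range(10_000)])
print(f"depth std from 1 HRC scatter: {reps.std():.3f} mm")
```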

  18. Measurement error of self-reported physical activity levels in New York City: assessment and correction.

    PubMed

    Lim, Sungwoo; Wyker, Brett; Bartley, Katherine; Eisenhower, Donna

    2015-05-01

    Because it is difficult to objectively measure population-level physical activity, self-reported measures have been used as a surveillance tool. However, little is known about their validity in populations living in dense urban areas. We aimed to assess the validity of self-reported physical activity data against accelerometer-based measurements among adults living in New York City and to apply a practical tool to adjust for measurement error in complex sample data using a regression calibration method. We used 2 components of data: 1) dual-frame random digit dialing telephone survey data from 3,806 adults in 2010-2011 and 2) accelerometer data from a subsample of 679 survey participants. Self-reported physical activity levels were measured using a version of the Global Physical Activity Questionnaire, whereas data on weekly moderate-equivalent minutes of activity were collected using accelerometers. Two self-reported health measures (obesity and diabetes) were included as outcomes. Participants with higher accelerometer values, taken as the reference, were more likely to underreport their actual activity levels. After correcting for measurement errors, we found that associations between outcomes and physical activity levels were substantially deattenuated. Despite the difficulty of accurately monitoring physical activity levels in dense urban areas using self-reported data, our findings show the importance of performing a well-designed validation study because it allows for understanding and correcting measurement errors.
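
    A minimal sketch of regression calibration in this setting, assuming a validation subsample with both self-report and accelerometer values: regress the objective measure on the self-report, then use the calibrated prediction in the outcome model (all data and effect sizes below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(5)
n, n_val = 4000, 700

# True weekly activity (accelerometer-like) and error-prone self-report.
true_act = rng.gamma(4, 60, n)
self_rep = 80 + 0.6 * true_act + rng.normal(0, 60, n)

# Binary outcome (e.g., obesity) depends on the true activity level.
p = 1 / (1 + np.exp(-(1.0 - 0.004 * true_act)))
y = rng.binomial(1, p)

# Step 1: calibration model fit on the validation subsample only.
val = rng.choice(n, n_val, replace=False)
b1, b0 = np.polyfit(self_rep[val], true_act[val], 1)

# Step 2: replace self-report with its calibrated prediction everywhere.
calibrated = b0 + b1 * self_rep

for name, x in (("naive", self_rep), ("calibrated", calibrated)):
    slope = np.polyfit(x, y, 1)[0]   # crude linear-probability slope
    print(f"{name:>10}: slope per unit activity = {slope:.5f}")
```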

  19. A field calibration method to eliminate the error caused by relative tilt on roll angle measurement

    NASA Astrophysics Data System (ADS)

    Qi, Jingya; Wang, Zhao; Huang, Junhui; Yu, Bao; Gao, Jianmin

    2016-11-01

    The roll angle measurement method based on a heterodyne interferometer is an efficient technique owing to its high precision and immunity to environmental noise. The optical layout is based on a polarization-assisted conversion of the roll angle into an optical phase shift, read by a beam passing through an objective plate actuated by the roll rotation. The measurement sensitivity, or gain coefficient G, is calibrated beforehand. However, a relative tilt between the laser and the objective plate always exists in long-rail field measurements, due to tilt of the laser and roll of the guide. This relative tilt affects the value of G and thus results in roll angle measurement error. In this paper, a method for field calibration of G is presented to eliminate this measurement error. The field calibration layout converts the roll angle into an optical path change (OPC) by means of a rotary table, so the roll angle can be obtained from the OPC read by a two-frequency interferometer. Together with the phase shift, an accurate G can be obtained in the field and the measurement error corrected. The optical system of the field calibration method was set up and experimental results are given. Checked against a Renishaw XL-80 calibration, the proposed field calibration method obtains an accurate G for field rail roll angle measurement.

  20. Accounting for model error in air quality forecasts: an application of 4DEnVar to the assimilation of atmospheric composition using QG-Chem 1.0

    NASA Astrophysics Data System (ADS)

    Emili, Emanuele; Gürol, Selime; Cariolle, Daniel

    2016-11-01

    Model errors play a significant role in air quality forecasts, and accounting for them in data assimilation (DA) procedures is decisive for obtaining improved forecasts. We address this issue using a reduced-order coupled chemistry-meteorology model based on quasi-geostrophic dynamics and a detailed tropospheric chemistry mechanism, which we name QG-Chem. This model has been coupled to the data assimilation software library Object Oriented Prediction System (OOPS) and used to assess the potential of the 4DEnVar algorithm for air quality analyses and forecasts. The assets of 4DEnVar include the possibility of dealing with multivariate aspects of atmospheric chemistry and of accounting for model errors of a generic type. A simple diagnostic procedure for detecting model errors is proposed, based on the 4DEnVar analysis and one additional model forecast. A large number of idealized data assimilation experiments are shown for several chemical species relevant to air quality forecasts (O3, NOx, CO and CO2) with very different atmospheric lifetimes and chemical couplings. Experiments are done both under a perfect-model hypothesis and with model error included through perturbation of surface chemical emissions. Some key elements of the 4DEnVar algorithm, such as the ensemble size and localization, are also discussed. A comparison with results of 3D-Var, widely used in operational centers, shows that, for some species, analysis and next-day forecast errors can be halved when model error is taken into account. This result was obtained using a small ensemble size, which remains affordable for most operational centers. We conclude that 4DEnVar has promising potential for operational air quality models. We finally highlight areas that deserve further research for applying 4DEnVar to large-scale chemistry models, i.e., localization techniques, propagation of analysis covariance between DA cycles and treatment of chemical nonlinearities. QG-Chem can provide a useful tool in this

  1. Effect of sampling variation on error of rainfall variables measured by optical disdrometer

    NASA Astrophysics Data System (ADS)

    Liu, X. C.; Gao, T. C.; Liu, L.

    2012-12-01

    During the sampling of precipitation particles by optical disdrometers, the randomness of the particles and sampling variability have a great impact on the accuracy of precipitation variables. Based on a marked point model of raindrop size distribution, the effect of sampling variation on drop size distribution and velocity distribution measurements by optical disdrometers is analyzed by Monte Carlo simulation. The results show that sample number, rain rate, drop size distribution, and sampling size influence the accuracy of rainfall variables in different ways. The relative errors of rainfall variables caused by sampling variation, in descending order, are: water concentration, mean diameter, mass-weighted mean diameter, mean volume diameter, radar reflectivity factor, and number density; these are essentially independent of sample number. The relative error of rain variables is positively correlated with the margin probability, which in turn is positively correlated with the rain rate and the mean diameter of the raindrops. The sampling size is one of the main factors that influence the margin probability: as the sampling area decreases, especially the short side of the sampling cross-section, the probability of margin raindrops increases and hence the error of the rain variables grows, with medium-sized raindrops showing the maximum error. To ensure that the relative error of rainfall variables measured by an optical disdrometer stays below 1%, the width of the light beam should be at least 40 mm.
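
    A toy Monte Carlo of the margin effect, assuming a drop counts as a "margin" drop when its center falls within one radius of the beam edge; the beam geometry and the exponential drop size distribution are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
LAMBDA = 2.0                  # exponential DSD parameter (1/mm), assumed

def margin_fraction(beam_width_mm, n=200_000):
    # Drop centers uniform across the beam width; a drop partially
    # overlaps a beam edge when its center is within one radius of it.
    d = rng.exponential(1 / LAMBDA, n)            # diameters, mm
    x = rng.uniform(0, beam_width_mm, n)          # center positions, mm
    margin = (x < d / 2) | (x > beam_width_mm - d / 2)
    return margin.mean()

# Margin probability, and hence the sampling error, falls as the
# beam (short side of the sampling area) widens.
for w in (10, 20, 40, 80):
    print(f"beam width {w:3d} mm: margin drops = {margin_fraction(w):.3%}")
```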

  2. Measurement error of 3D cranial landmarks of an ontogenetic sample using Computed Tomography

    PubMed Central

    Barbeito-Andrés, Jimena; Anzelmo, Marisol; Ventrice, Fernando; Sardi, Marina L.

    2012-01-01

    Background/Aim Computed Tomography (CT) is a powerful tool in craniofacial research that focuses on morphological variation. In this field, an ontogenetic approach has been taken to study the developmental sources of variation and to understand the basis of morphological evolution. This work aimed to determine the measurement error (ME) of cranial CT landmarks across diverse developmental stages and to characterize how this error relates to different types of landmarks. Material and methods We used a sample of fifteen skulls ranging in age from 0 to 31 years. Two observers placed landmarks on each image three times. Measurement error was assessed before and after Generalized Procrustes Analysis. Results The results indicated that ME is larger in neurocranial structures, which are described mainly by type III landmarks and semilandmarks. In addition, adult and infant specimens showed the same level of ME. These results are especially relevant in the context of craniofacial growth research. Conclusion CT images have become a frequent source of evidence in studies of cranial variation. Evaluation of ME gives insight into potential sources of error when interpreting results. Neural structures present higher ME, which is mainly associated with landmark localization; however, this error is independent of age. If landmarks are correctly selected, they can be analyzed with the same level of reliability in adults and subadults. PMID:25737840

  3. Results of error correction techniques applied on two high accuracy coordinate measuring machines

    SciTech Connect

    Pace, C.; Doiron, T.; Stieren, D.; Borchardt, B.; Veale, R.; National Inst. of Standards and Technology, Gaithersburg, MD )

    1990-01-01

    The Primary Standards Laboratory at Sandia National Laboratories (SNL) and the Precision Engineering Division at the National Institute of Standards and Technology (NIST) are in the process of implementing software error correction on two nearly identical high-accuracy coordinate measuring machines (CMMs). Both machines are Moore Special Tool Company M-48 CMMs fitted with laser positioning transducers. Although both machines were manufactured to high tolerance levels, the overall volumetric accuracy was insufficient for calibrating standards to the levels both laboratories require. The error mapping procedure was developed at NIST in the mid-1970s on an earlier but similar model. The original procedure was very complicated and did not make any assumptions about the rigidity of the machine as it moved; each of the possible error motions was measured at each point of the error map independently. A simpler mapping procedure, developed during the early 1980s, assumed rigid body motion of the machine. This method has been used to calibrate lower accuracy machines with a high degree of success, and similar software correction schemes have been implemented by many CMM manufacturers. The rigid body model has not yet been used on highly repeatable CMMs such as the M-48. In this report we present early mapping data for the two M-48 CMMs. The SNL CMM was manufactured in 1985 and has been in service for approximately four years, whereas the NIST CMM was delivered in early 1989. 4 refs., 5 figs.

  4. Study of angle measuring error mechanism caused by rotor run-outs

    NASA Astrophysics Data System (ADS)

    Lao, Dabao; Zhang, Wenying; Zhou, Weihu

    2016-11-01

    In a rotating angle measurement system, errors from the grating sensor, the installation, and rotor run-out all affect the angle measurement accuracy. The error caused by rotor run-out is usually the largest and the hardest to eliminate. To improve accuracy, the table would have to be fabricated very precisely, making the table system complicated and expensive. This paper provides a method to address this challenge by using two gratings on the same table, grooved on the end face and the side face, respectively. The error mechanisms of the end-face and side-face gratings under axial and radial rotor run-out were deduced. The analysis shows that the end-face grating is sensitive to radial rotor run-out, while the side-face grating is sensitive to axial rotor run-out. Accordingly, a combined arrangement with one end-face grating and one side-face grating can be used to suppress the error caused by rotor run-out of the table.

  5. Variance in Broad Reading Accounted for by Measures of Reading Speed Embedded within Maze and Comprehension Rate Measures

    ERIC Educational Resources Information Center

    Hale, Andrea D.; Skinner, Christopher H.; Wilhoit, Brian; Ciancio, Dennis; Morrow, Jennifer A.

    2012-01-01

    Maze and reading comprehension rate measures are calculated by using measures of reading speed and measures of accuracy (i.e., correctly selected words or answers). In sixth- and seventh-grade samples, we found that the measures of reading speed embedded within our Maze measures accounted for 50% and 39% of broad reading score (BRS) variance,…

  6. Meta-analysis of gene-environment-wide association scans accounting for education level identifies additional loci for refractive error.

    PubMed

    Fan, Qiao; Verhoeven, Virginie J M; Wojciechowski, Robert; Barathi, Veluchamy A; Hysi, Pirro G; Guggenheim, Jeremy A; Höhn, René; Vitart, Veronique; Khawaja, Anthony P; Yamashiro, Kenji; Hosseini, S Mohsen; Lehtimäki, Terho; Lu, Yi; Haller, Toomas; Xie, Jing; Delcourt, Cécile; Pirastu, Mario; Wedenoja, Juho; Gharahkhani, Puya; Venturini, Cristina; Miyake, Masahiro; Hewitt, Alex W; Guo, Xiaobo; Mazur, Johanna; Huffman, Jenifer E; Williams, Katie M; Polasek, Ozren; Campbell, Harry; Rudan, Igor; Vatavuk, Zoran; Wilson, James F; Joshi, Peter K; McMahon, George; St Pourcain, Beate; Evans, David M; Simpson, Claire L; Schwantes-An, Tae-Hwi; Igo, Robert P; Mirshahi, Alireza; Cougnard-Gregoire, Audrey; Bellenguez, Céline; Blettner, Maria; Raitakari, Olli; Kähönen, Mika; Seppala, Ilkka; Zeller, Tanja; Meitinger, Thomas; Ried, Janina S; Gieger, Christian; Portas, Laura; van Leeuwen, Elisabeth M; Amin, Najaf; Uitterlinden, André G; Rivadeneira, Fernando; Hofman, Albert; Vingerling, Johannes R; Wang, Ya Xing; Wang, Xu; Tai-Hui Boh, Eileen; Ikram, M Kamran; Sabanayagam, Charumathi; Gupta, Preeti; Tan, Vincent; Zhou, Lei; Ho, Candice E H; Lim, Wan'e; Beuerman, Roger W; Siantar, Rosalynn; Tai, E-Shyong; Vithana, Eranga; Mihailov, Evelin; Khor, Chiea-Chuen; Hayward, Caroline; Luben, Robert N; Foster, Paul J; Klein, Barbara E K; Klein, Ronald; Wong, Hoi-Suen; Mitchell, Paul; Metspalu, Andres; Aung, Tin; Young, Terri L; He, Mingguang; Pärssinen, Olavi; van Duijn, Cornelia M; Jin Wang, Jie; Williams, Cathy; Jonas, Jost B; Teo, Yik-Ying; Mackey, David A; Oexle, Konrad; Yoshimura, Nagahisa; Paterson, Andrew D; Pfeiffer, Norbert; Wong, Tien-Yin; Baird, Paul N; Stambolian, Dwight; Wilson, Joan E Bailey; Cheng, Ching-Yu; Hammond, Christopher J; Klaver, Caroline C W; Saw, Seang-Mei; Rahi, Jugnoo S; Korobelnik, Jean-François; Kemp, John P; Timpson, Nicholas J; Smith, George Davey; Craig, Jamie E; Burdon, Kathryn P; Fogarty, Rhys D; Iyengar, Sudha K; Chew, Emily; Janmahasatian, Sarayut; Martin, Nicholas G; MacGregor, Stuart; Xu, Liang; Schache, Maria; Nangia, Vinay; Panda-Jonas, Songhomitra; Wright, Alan F; Fondran, Jeremy R; Lass, Jonathan H; Feng, Sheng; Zhao, Jing Hua; Khaw, Kay-Tee; Wareham, Nick J; Rantanen, Taina; Kaprio, Jaakko; Pang, Chi Pui; Chen, Li Jia; Tam, Pancy O; Jhanji, Vishal; Young, Alvin L; Döring, Angela; Raffel, Leslie J; Cotch, Mary-Frances; Li, Xiaohui; Yip, Shea Ping; Yap, Maurice K H; Biino, Ginevra; Vaccargiu, Simona; Fossarello, Maurizio; Fleck, Brian; Yazar, Seyhan; Tideman, Jan Willem L; Tedja, Milly; Deangelis, Margaret M; Morrison, Margaux; Farrer, Lindsay; Zhou, Xiangtian; Chen, Wei; Mizuki, Nobuhisa; Meguro, Akira; Mäkelä, Kari Matti

    2016-03-29

    Myopia is the most common human eye disorder and it results from complex genetic and environmental causes. The rapidly increasing prevalence of myopia poses a major public health challenge. Here, the CREAM consortium performs a joint meta-analysis to test single-nucleotide polymorphism (SNP) main effects and SNP × education interaction effects on refractive error in 40,036 adults from 25 studies of European ancestry and 10,315 adults from 9 studies of Asian ancestry. In European ancestry individuals, we identify six novel loci (FAM150B-ACP1, LINC00340, FBN1, DIS3L-MAP2K1, ARID2-SNAT1 and SLC14A2) associated with refractive error. In Asian populations, three genome-wide significant loci AREG, GABRR1 and PDE10A also exhibit strong interactions with education (P < 8.5 × 10^-5), whereas the interactions are less evident in Europeans. The discovery of these loci represents an important advance in understanding how gene and environment interactions contribute to the heterogeneity of myopia.

  7. Meta-analysis of gene–environment-wide association scans accounting for education level identifies additional loci for refractive error

    PubMed Central

    Fan, Qiao; Verhoeven, Virginie J. M.; Wojciechowski, Robert; Barathi, Veluchamy A.; Hysi, Pirro G.; Guggenheim, Jeremy A.; Höhn, René; Vitart, Veronique; Khawaja, Anthony P.; Yamashiro, Kenji; Hosseini, S Mohsen; Lehtimäki, Terho; Lu, Yi; Haller, Toomas; Xie, Jing; Delcourt, Cécile; Pirastu, Mario; Wedenoja, Juho; Gharahkhani, Puya; Venturini, Cristina; Miyake, Masahiro; Hewitt, Alex W.; Guo, Xiaobo; Mazur, Johanna; Huffman, Jenifer E.; Williams, Katie M.; Polasek, Ozren; Campbell, Harry; Rudan, Igor; Vatavuk, Zoran; Wilson, James F.; Joshi, Peter K.; McMahon, George; St Pourcain, Beate; Evans, David M.; Simpson, Claire L.; Schwantes-An, Tae-Hwi; Igo, Robert P.; Mirshahi, Alireza; Cougnard-Gregoire, Audrey; Bellenguez, Céline; Blettner, Maria; Raitakari, Olli; Kähönen, Mika; Seppala, Ilkka; Zeller, Tanja; Meitinger, Thomas; Ried, Janina S.; Gieger, Christian; Portas, Laura; van Leeuwen, Elisabeth M.; Amin, Najaf; Uitterlinden, André G.; Rivadeneira, Fernando; Hofman, Albert; Vingerling, Johannes R.; Wang, Ya Xing; Wang, Xu; Tai-Hui Boh, Eileen; Ikram, M. Kamran; Sabanayagam, Charumathi; Gupta, Preeti; Tan, Vincent; Zhou, Lei; Ho, Candice E. H.; Lim, Wan'e; Beuerman, Roger W.; Siantar, Rosalynn; Tai, E-Shyong; Vithana, Eranga; Mihailov, Evelin; Khor, Chiea-Chuen; Hayward, Caroline; Luben, Robert N.; Foster, Paul J.; Klein, Barbara E. K.; Klein, Ronald; Wong, Hoi-Suen; Mitchell, Paul; Metspalu, Andres; Aung, Tin; Young, Terri L.; He, Mingguang; Pärssinen, Olavi; van Duijn, Cornelia M.; Jin Wang, Jie; Williams, Cathy; Jonas, Jost B.; Teo, Yik-Ying; Mackey, David A.; Oexle, Konrad; Yoshimura, Nagahisa; Paterson, Andrew D.; Pfeiffer, Norbert; Wong, Tien-Yin; Baird, Paul N.; Stambolian, Dwight; Wilson, Joan E. Bailey; Cheng, Ching-Yu; Hammond, Christopher J.; Klaver, Caroline C. W.; Saw, Seang-Mei; Rahi, Jugnoo S.; Korobelnik, Jean-François; Kemp, John P.; Timpson, Nicholas J.; Smith, George Davey; Craig, Jamie E.; Burdon, Kathryn P.; Fogarty, Rhys D.; Iyengar, Sudha K.; Chew, Emily; Janmahasatian, Sarayut; Martin, Nicholas G.; MacGregor, Stuart; Xu, Liang; Schache, Maria; Nangia, Vinay; Panda-Jonas, Songhomitra; Wright, Alan F.; Fondran, Jeremy R.; Lass, Jonathan H.; Feng, Sheng; Zhao, Jing Hua; Khaw, Kay-Tee; Wareham, Nick J.; Rantanen, Taina; Kaprio, Jaakko; Pang, Chi Pui; Chen, Li Jia; Tam, Pancy O.; Jhanji, Vishal; Young, Alvin L.; Döring, Angela; Raffel, Leslie J.; Cotch, Mary-Frances; Li, Xiaohui; Yip, Shea Ping; Yap, Maurice K.H.; Biino, Ginevra; Vaccargiu, Simona; Fossarello, Maurizio; Fleck, Brian; Yazar, Seyhan; Tideman, Jan Willem L.; Tedja, Milly; Deangelis, Margaret M.; Morrison, Margaux; Farrer, Lindsay; Zhou, Xiangtian; Chen, Wei; Mizuki, Nobuhisa; Meguro, Akira; Mäkelä, Kari Matti

    2016-01-01

    Myopia is the most common human eye disorder and it results from complex genetic and environmental causes. The rapidly increasing prevalence of myopia poses a major public health challenge. Here, the CREAM consortium performs a joint meta-analysis to test single-nucleotide polymorphism (SNP) main effects and SNP × education interaction effects on refractive error in 40,036 adults from 25 studies of European ancestry and 10,315 adults from 9 studies of Asian ancestry. In European ancestry individuals, we identify six novel loci (FAM150B-ACP1, LINC00340, FBN1, DIS3L-MAP2K1, ARID2-SNAT1 and SLC14A2) associated with refractive error. In Asian populations, three genome-wide significant loci AREG, GABRR1 and PDE10A also exhibit strong interactions with education (P < 8.5 × 10^-5), whereas the interactions are less evident in Europeans. The discovery of these loci represents an important advance in understanding how gene and environment interactions contribute to the heterogeneity of myopia. PMID:27020472

  8. The Correction for Attenuation Due to Measurement Error: Clarifying Concepts and Creating Confidence Sets

    ERIC Educational Resources Information Center

    Charles, Eric P.

    2005-01-01

    The correction for attenuation due to measurement error (CAME) has received many historical criticisms, most of which can be traced to the limited ability to use CAME inferentially. Past attempts to determine confidence intervals for CAME are summarized and their limitations discussed. The author suggests that inference requires confidence sets…

  9. A Study on Sixth Grade Students' Misconceptions and Errors in Spatial Measurement: Length, Area, and Volume

    ERIC Educational Resources Information Center

    Tan Sisman, Gulcin; Aksu, Meral

    2016-01-01

    The purpose of the present study was to portray students' misconceptions and errors while solving conceptually and procedurally oriented tasks involving length, area, and volume measurement. The data were collected from 445 sixth grade students attending public primary schools in Ankara, Türkiye via a test composed of 16 constructed-response…

  10. The Impact of Measurement Error on the Accuracy of Individual and Aggregate SGP

    ERIC Educational Resources Information Center

    McCaffrey, Daniel F.; Castellano, Katherine E.; Lockwood, J. R.

    2015-01-01

    Student growth percentiles (SGPs) express students' current observed scores as percentile ranks in the distribution of scores among students with the same prior-year scores. A common concern about SGPs at the student level, and mean or median SGPs (MGPs) at the aggregate level, is potential bias due to test measurement error (ME). Shang,…

  11. Measurement Error in Nonparametric Item Response Curve Estimation. Research Report. ETS RR-11-28

    ERIC Educational Resources Information Center

    Guo, Hongwen; Sinharay, Sandip

    2011-01-01

    Nonparametric, or kernel, estimation of item response curve (IRC) is a concern theoretically and operationally. Accuracy of this estimation, often used in item analysis in testing programs, is biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. In this study, we investigate…

  12. Measurement Error of Scores on the Mathematics Anxiety Rating Scale across Studies.

    ERIC Educational Resources Information Center

    Capraro, Mary Margaret; Capraro, Robert M.; Henson, Robin K.

    2001-01-01

    Submitted the Mathematics Anxiety Rating Scale (MARS) (F. Richardson and R. Suinn, 1972) to a reliability generalization analysis to characterize the variability of measurement error in MARS scores across administrations and identify characteristics predictive of score reliability variations. Results for 67 analyses generally support the internal…

  13. Correlation Attenuation Due to Measurement Error: A New Approach Using the Bootstrap Procedure

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Veprinsky, Anna

    2012-01-01

    Issues with correlation attenuation due to measurement error are well documented. More than a century ago, Spearman proposed a correction for attenuation. However, this correction has seen very little use since it can potentially inflate the true correlation beyond one. In addition, very little confidence interval (CI) research has been done for…
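
    A small sketch of this bootstrap approach to Spearman's correction, r_true ≈ r_xy / sqrt(r_xx · r_yy): resample cases, recompute the corrected correlation, and take percentile limits. The reliabilities here are treated as fixed known values, an assumption made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 300
r_xx, r_yy = 0.80, 0.75          # score reliabilities, assumed known

# Synthetic observed scores with a true latent correlation of 0.5.
latent = rng.multivariate_normal([0, 0], [[1, .5], [.5, 1]], n)
x = latent[:, 0] + rng.normal(0, np.sqrt(1 / r_xx - 1), n)
y = latent[:, 1] + rng.normal(0, np.sqrt(1 / r_yy - 1), n)

def corrected_r(xv, yv):
    r = np.corrcoef(xv, yv)[0, 1]
    return r / np.sqrt(r_xx * r_yy)     # Spearman's disattenuation

boot = np.empty(2000)
for b in range(2000):
    i = rng.integers(0, n, n)           # resample cases with replacement
    boot[b] = corrected_r(x[i], y[i])

print("corrected r:", corrected_r(x, y))
print("95% bootstrap CI:", np.percentile(boot, [2.5, 97.5]))
```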

  14. Sensitivity of Force Specifications to the Errors in Measuring the Interface Force

    NASA Technical Reports Server (NTRS)

    Worth, Daniel

    2000-01-01

    Force-limited random vibration testing has been applied over the last several years at the NASA Goddard Space Flight Center (GSFC) and other NASA centers for various programs at the instrument and spacecraft level. Different techniques have been developed over the last few decades to estimate the dynamic forces that the test article under consideration will encounter in the flight environment. Some of these techniques are described in the handbook NASA-HDBK-7004 and the monograph NASA-RP-1403. This paper shows the effects of some measurement and calibration errors in force gauges. In some cases, the notches in the acceleration spectrum produced during a random vibration test with measurement errors are the same as the notches produced during a test with no measurement errors. The paper also presents the results of tests that were used to validate this effect. Knowing the effect of measurement errors can allow tests to continue after force gauge failures, or allow dummy gauges to be used in places that are inaccessible to a force gauge.

  15. Measurement Error of Scores on the Mathematics Anxiety Rating Scale across Studies.

    ERIC Educational Resources Information Center

    Capraro, Mary Margaret; Capraro, Robert M.; Henson, Robin K.

    The Mathematics Anxiety Rating Scale (MARS) (F. Richardson and R. Suinn, 1972) was submitted to a reliability generalization analysis to characterize the variability of measurement error in MARS scores across administrations and to identify possible study characteristics that are predictive of reliability variation. The meta-analysis was performed…

  16. Mixture of normal distributions in multivariate null intercept measurement error model.

    PubMed

    Aoki, Reiko; Pinto Júnior, Dorival Leão; Achcar, Jorge Alberto; Bolfarine, Heleno

    2006-01-01

    In this paper we propose the use of a multivariate null intercept measurement error model, where the true unobserved value of the covariate follows a mixture of two normal distributions. The proposed model is applied to a dental clinical trial presented in Hadgu and Koch (1999). A Bayesian approach is considered and a Gibbs Sampler is used to perform the computations.

  17. Exploring Type I and Type II Errors Using Rhizopus Sporangia Diameter Measurements.

    ERIC Educational Resources Information Center

    Smith, Robert A.; Burns, Gerard; Freud, Brian; Fenning, Stacy; Hoffman, Rosemary; Sabapathi, Durai

    2000-01-01

    Presents exercises in which students can explore Type I and Type II errors using sporangia diameter measurements as a means of differentiating between two species. Examines the influence of sample size and significance level on the outcome of the analysis. (SAH)

  18. Using Computation Curriculum-Based Measurement Probes for Error Pattern Analysis

    ERIC Educational Resources Information Center

    Dennis, Minyi Shih; Calhoon, Mary Beth; Olson, Christopher L.; Williams, Cara

    2014-01-01

    This article describes how "curriculum-based measurement--computation" (CBM-C) mathematics probes can be used in combination with "error pattern analysis" (EPA) to pinpoint difficulties in basic computation skills for students who struggle with learning mathematics. Both assessment procedures provide ongoing assessment data…

  19. Covariate Measurement Error Correction for Student Growth Percentiles Using the SIMEX Method

    ERIC Educational Resources Information Center

    Shang, Yi; VanIwaarden, Adam; Betebenner, Damian W.

    2015-01-01

    In this study, we examined the impact of covariate measurement error (ME) on the estimation of quantile regression and student growth percentiles (SGPs), and found that SGPs tend to be overestimated among students with higher prior achievement and underestimated among those with lower prior achievement, a problem we describe as ME endogeneity in…
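
    A minimal SIMEX sketch for a simple linear regression with known ME variance (synthetic data; quadratic extrapolation back to λ = -1, the standard choice):

```python
import numpy as np

rng = np.random.default_rng(7)
n, sigma_u = 2000, 1.0               # ME std dev assumed known

x = rng.normal(0, 2, n)              # true covariate (unobserved)
w = x + rng.normal(0, sigma_u, n)    # error-prone observed covariate
y = 1.0 + 0.5 * x + rng.normal(0, 1, n)

def slope(xv):
    return np.polyfit(xv, y, 1)[0]

# SIMULATION step: add extra noise with variance lambda * sigma_u^2,
# so the total ME variance becomes (1 + lambda) * sigma_u^2.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
est = [np.mean([slope(w + rng.normal(0, np.sqrt(l) * sigma_u, n))
                for _ in range(50)]) for l in lambdas]

# EXTRAPOLATION step: fit slope(lambda) and evaluate at lambda = -1,
# i.e., the hypothetical case of no measurement error.
coefs = np.polyfit(lambdas, est, 2)
print("naive slope:", est[0])                    # attenuated, near 0.4
print("SIMEX slope:", np.polyval(coefs, -1.0))   # close to the true 0.5
```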

  20. Assessment of measurement errors and dynamic calibration methods for three different tipping bucket rain gauges

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Three different models of tipping bucket rain gauges (TBRs), viz. HS-TB3 (Hydrological Services Pty Ltd), ISCO-674 (Isco, Inc.) and TR-525 (Texas Electronics, Inc.), were calibrated in the lab to quantify measurement errors across a range of rainfall intensities (5 mm h^-1 to 250 mm h^-1) and three di...

  1. High dimensional linear regression models under long memory dependence and measurement error

    NASA Astrophysics Data System (ADS)

    Kaul, Abhishek

    This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates the problems of interest; a brief literature review is also provided. The second chapter investigates the properties of the Lasso under long range dependent model errors. The Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution, and then show its asymptotic sign consistency in this setup. These results are established in the high dimensional setup (p > n), where p can increase exponentially with n. Finally, we show the consistency and n^(1/2-d)-consistency of the Lasso, along with the oracle property of the adaptive Lasso, in the case where p is fixed; here d is the memory parameter of the stationary error sequence. The performance of the Lasso is also analysed in the present setup with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile based estimator for measurement error models. Standard formulations of prediction problems in high dimension regression models assume the availability of fully observed covariates and sub-Gaussian and homogeneous model errors. This makes these methods inapplicable to measurement error models, where covariates are unobservable and observations are possibly non sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where unobservable covariates are nonrandom. The proposed estimators forgo the need for the above mentioned model assumptions. We study these estimators in both the fixed dimension and high dimensional sparse setups; in the latter setup, the
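
    A small simulation in the spirit of the second chapter, assuming a truncated fractional-integration filter to generate long-memory errors and scikit-learn's Lasso for the fit; the regularization strength and problem sizes are arbitrary choices, and sign recovery is checked against the true support:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(8)
n, p, d = 500, 50, 0.3     # d: long-memory parameter, assumed

# Long-memory errors via a truncated MA(inf) fractional filter:
# psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k.
K = 2000
psi = np.ones(K)
for k in range(1, K):
    psi[k] = psi[k - 1] * (k - 1 + d) / k
eps = np.convolve(rng.normal(size=n + K), psi)[K:K + n]

beta = np.zeros(p)
beta[:5] = [2, -2, 1.5, -1.5, 1]          # sparse true signal
X = rng.normal(size=(n, p))
y = X @ beta + eps

fit = Lasso(alpha=0.1).fit(X, y)
recovered = np.sign(fit.coef_) == np.sign(beta)
print("sign agreement:", recovered.mean())   # near 1 when Lasso succeeds
```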

  2. The Measure of Human Error: Direct and Indirect Performance Shaping Factors

    SciTech Connect

    Ronald L. Boring; Candice D. Griffith; Jeffrey C. Joe

    2007-08-01

    The goal of performance shaping factors (PSFs) is to provide measures to account for human performance. PSFs fall into two categories—direct and indirect measures of human performance. While some PSFs such as “time to complete a task” are directly measurable, other PSFs, such as “fitness for duty,” can only be measured indirectly through other measures and PSFs, such as through fatigue measures. This paper explores the role of direct and indirect measures in human reliability analysis (HRA) and the implications that measurement theory has on analyses and applications using PSFs. The paper concludes with suggestions for maximizing the reliability and validity of PSFs.

  3. Self-Test Web-Based Pure-Tone Audiometry: Validity Evaluation and Measurement Error Analysis

    PubMed Central

    Kręcicki, Tomasz

    2013-01-01

    Background Potential methods of applying self-administered Web-based pure-tone audiometry, conducted at home on a PC with a sound card and ordinary headphones, depend on the value of the measurement error in such tests. Objective The aim of this research was to determine the measurement error of the hearing threshold determined in the way described above and to identify and analyze the factors influencing its value. Methods The evaluation of the hearing threshold was made in three series: (1) tests on a clinical audiometer, (2) self-tests done on a specially calibrated computer under the supervision of an audiologist, and (3) self-tests conducted at home. The research was carried out on a group of 51 participants selected from patients of an audiology outpatient clinic. From the group of 51 patients examined in the first two series, the third series was self-administered at home by 37 subjects (73%). Results The average difference between the hearing threshold determined in series 1 and in series 2 was -1.54 dB, with a standard deviation of 7.88 dB and a Pearson correlation coefficient of .90. Between the first and third series, these values were -1.35 dB ± 10.66 dB and .84, respectively. In series 3, the standard deviation was most influenced by the error connected with the procedure of hearing threshold identification (6.64 dB), the calibration error (6.19 dB), and additionally, at the frequency of 250 Hz, by the frequency nonlinearity error (7.28 dB). Conclusions The obtained results confirm the possibility of applying Web-based pure-tone audiometry in screening tests. In the future, modifications of the method leading to a decrease in measurement error could broaden the scope of Web-based pure-tone audiometry applications. PMID:23583917

  4. Test-Retest Reliability of the Adaptive Chemistry Assessment Survey for Teachers: Measurement Error and Alternatives to Correlation

    ERIC Educational Resources Information Center

    Harshman, Jordan; Yezierski, Ellen

    2016-01-01

    Determining the error of measurement is a necessity for researchers engaged in bench chemistry, chemistry education research (CER), and a multitude of other fields. Discussions regarding what measurement error entails and how best to measure it have occurred, but critiques of traditional measures have yielded few alternatives.…

  5. Error analysis for the ground-based microwave ozone measurements during STOIC

    NASA Technical Reports Server (NTRS)

    Connor, Brian J.; Parrish, Alan; Tsou, Jung-Jung; McCormick, M. Patrick

    1995-01-01

    We present a formal error analysis and characterization of the microwave measurements made during the Stratospheric Ozone Intercomparison Campaign (STOIC). The most important error sources are found to be the determination of the tropospheric opacity, the pressure-broadening coefficient of the observed line, and systematic variations in instrument response as a function of frequency ('baseline'). Net precision is 4-6% between 55 and 0.2 mbar, while accuracy is 6-10%. Resolution is 8-10 km below 3 mbar and increases to 17 km at 0.2 mbar. We show the 'blind' microwave measurements from STOIC and make limited comparisons to other measurements. We use the averaging kernels of the microwave measurement to eliminate resolution and a priori effects from a comparison to SAGE II. The STOIC results and comparisons are broadly consistent with the formal analysis.

  6. Measurement of 2∕1 intrinsic error field of Joint TEXT tokamak.

    PubMed

    Rao, B; Ding, Y H; Yu, K X; Jin, W; Hu, Q M; Yi, B; Nan, J Y; Wang, N C; Zhang, M; Zhuang, G

    2013-04-01

    The amplitude and spatial phase of the intrinsic error field of Joint TEXT (J-TEXT) tokamak were measured by scanning the spatial phase of an externally exerted resonant magnetic perturbation and fitting the mode locking thresholds. For a typical plasma with current of 180 kA, the amplitude of the 2/1 component of the error field at the plasma edge is measured to be 0.31 G, which is about 1.8 × 10⁻⁵ relative to the base toroidal field. The measured spatial phase is about 317° in the specified coordinate system (r, θ, ϕ) of J-TEXT tokamak. An analytical model based on the dynamics of rotating island is developed to verify the measured phase.

  7. Measurement of 2/1 intrinsic error field of Joint TEXT tokamak

    NASA Astrophysics Data System (ADS)

    Rao, B.; Ding, Y. H.; Yu, K. X.; Jin, W.; Hu, Q. M.; Yi, B.; Nan, J. Y.; Wang, N. C.; Zhang, M.; Zhuang, G.

    2013-04-01

    The amplitude and spatial phase of the intrinsic error field of Joint TEXT (J-TEXT) tokamak were measured by scanning the spatial phase of an externally exerted resonant magnetic perturbation and fitting the mode locking thresholds. For a typical plasma with current of 180 kA, the amplitude of the 2/1 component of the error field at the plasma edge is measured to be 0.31 G, which is about 1.8 × 10⁻⁵ relative to the base toroidal field. The measured spatial phase is about 317° in the specified coordinate system (r, θ, φ) of J-TEXT tokamak. An analytical model based on the dynamics of rotating island is developed to verify the measured phase.

  8. Measuring and Detecting Molecular Adaptation in Codon Usage Against Nonsense Errors During Protein Translation

    PubMed Central

    Gilchrist, Michael A.; Shah, Premal; Zaretzki, Russell

    2009-01-01

    Codon usage bias (CUB) has been documented across a wide range of taxa and is the subject of numerous studies. While most explanations of CUB invoke some type of natural selection, most measures of CUB adaptation are heuristically defined. In contrast, we present a novel and mechanistic method for defining and contextualizing CUB adaptation to reduce the cost of nonsense errors during protein translation. Using a model of protein translation, we develop a general approach for measuring the protein production cost in the face of nonsense errors of a given allele as well as the mean and variance of these costs across its coding synonyms. We then use these results to define the nonsense error adaptation index (NAI) of the allele or a contiguous subset thereof. Conceptually, the NAI value of an allele is a relative measure of its elevation on a specific and well-defined adaptive landscape. To illustrate its utility, we calculate NAI values for the entire coding sequence and across a set of nonoverlapping windows for each gene in the Saccharomyces cerevisiae S288c genome. Our results provide clear evidence of adaptation to reduce the cost of nonsense errors and increasing adaptation with codon position and expression. The magnitude and nature of this adaptation are also largely consistent with simulation results in which nonsense errors are the only selective force driving CUB evolution. Because NAI is derived from mechanistic models, it is both easier to interpret and more amenable to future refinement than other commonly used measures of codon bias. Further, our approach can also be used as a starting point for developing other mechanistically derived measures of adaptation such as for translational accuracy. PMID:19822731

  9. Bit error rate measurement above and below bit rate tracking threshold

    NASA Technical Reports Server (NTRS)

    Kobayaski, H. S.; Fowler, J.; Kurple, W. (Inventor)

    1978-01-01

    Bit error rate is measured by sending a pseudo-random noise (PRN) code test signal simulating digital data through digital equipment to be tested. An incoming signal representing the response of the equipment being tested, together with any added noise, is received and tracked by being compared with a locally generated PRN code. Once the locally generated PRN code matches the incoming signal a tracking lock is obtained. The incoming signal is then integrated and compared bit-by-bit against the locally generated PRN code and differences between bits being compared are counted as bit errors.
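
    The measurement principle lends itself to a compact simulation. The Python sketch below uses an invented sequence length and error probability, and assumes tracking lock has already aligned the local PRN with the received stream.

      import numpy as np

      rng = np.random.default_rng(0)
      prn = rng.integers(0, 2, size=100_000)            # locally generated PRN reference
      received = prn ^ (rng.random(prn.size) < 1e-3)    # channel flips ~0.1% of bits

      # Bit-by-bit comparison against the local PRN; disagreements are bit errors.
      bit_errors = int(np.count_nonzero(received != prn))
      print(f"measured BER ~ {bit_errors / prn.size:.2e}")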

  10. Modification of an impulse-factoring orbital transfer technique to account for orbit determination and maneuver execution errors

    NASA Technical Reports Server (NTRS)

    Kibler, J. F.; Green, R. N.; Young, G. R.; Kelly, M. G.

    1974-01-01

    A method has previously been developed to satisfy terminal rendezvous and intermediate timing constraints for planetary missions involving orbital operations. The method uses impulse factoring, in which a two-impulse transfer is divided into three or four impulses which add one or two intermediate orbits. The periods of the intermediate orbits and the number of revolutions in each orbit are varied to satisfy timing constraints. Techniques are developed to retarget the orbital transfer in the presence of orbit-determination and maneuver-execution errors. Sample results indicate that the nominal transfer can be retargeted with little change in either the magnitude (ΔV) or location of the individual impulses. Additionally, the total ΔV required for the retargeted transfer is little different from that required for the nominal transfer. A digital computer program developed to implement the techniques is described.

  11. Compensating sampling errors in stabilizing helmet-mounted displays using auxiliary acceleration measurements

    NASA Technical Reports Server (NTRS)

    Merhav, S.; Velger, M.

    1991-01-01

    A method based on complementary filtering is shown to be effective in compensating for the image stabilization error due to sampling delays of HMD position and orientation measurements. These delays would otherwise have prevented the stabilization of the image in HMDs. The method is also shown to improve the resolution of the head orientation measurement, particularly at low frequencies, thus providing smoother head control commands, which are essential for precise head pointing and teleoperation.
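
    The principle can be illustrated with a generic first-order complementary filter in which the sampled (delayed) orientation measurement feeds the low-pass branch and inertial rate data feed the high-pass branch. The paper fuses acceleration measurements, which would add a second integration, so the Python sketch below is a simplified illustration rather than the authors' filter.

      import numpy as np

      def complementary_filter(theta_meas, omega, dt, tau=0.5):
          """Fuse sampled orientation theta_meas (rad) with rate data omega
          (rad/s); tau sets the crossover between the two branches."""
          alpha = tau / (tau + dt)
          theta = theta_meas[0]
          out = []
          for z, w in zip(theta_meas, omega):
              theta = alpha * (theta + w * dt) + (1.0 - alpha) * z
              out.append(theta)
          return np.array(out)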

  12. Does public reporting measure up? Federalism, accountability and child-care policy in Canada.

    PubMed

    Anderson, Lynell; Findlay, Tammy

    2010-01-01

    Governments in Canada have recently been exploring new accountability measures within intergovernmental relations. Public reporting has become the preferred mechanism in a range of policy areas, including early learning and child-care, and the authors assess its effectiveness as an accountability measure. The article is based on their experience with a community capacity-building project that considers the relationship between the public policy, funding and accountability mechanisms under the federal/provincial/territorial agreements related to child-care. The authors argue that in its current form, public reporting has not lived up to its promise of accountability to citizens. This evaluation is based on the standards that governments have set for themselves under the federal/provincial/territorial agreements, as well as guidelines set by the Public Sector Accounting Board, an independent body that develops accounting standards over time through consultation with governments.

  13. Effects of Spectral Error in Efficiency Measurements of GaInAs-Based Concentrator Solar Cells

    SciTech Connect

    Osterwald, C. R.; Wanlass, M. W.; Moriarty, T.; Steiner, M. A.; Emery, K. A.

    2014-03-01

    This technical report documents a particular error in efficiency measurements of triple-absorber concentrator solar cells caused by incorrect spectral irradiance -- specifically, one that occurs when the irradiance from unfiltered, pulsed xenon solar simulators into the GaInAs bottom subcell is too high. For cells designed so that the light-generated photocurrents in the three subcells are nearly equal, this condition can cause a large increase in the measured fill factor, which, in turn, causes a significant artificial increase in the efficiency. The error is readily apparent when the data under concentration are compared to measurements with correctly balanced photocurrents, and manifests itself as discontinuities in plots of fill factor and efficiency versus concentration ratio. In this work, we simulate the magnitudes and effects of this error with a device-level model of two concentrator cell designs, and demonstrate how a new Spectrolab, Inc., Model 460 Tunable-High Intensity Pulsed Solar Simulator (T-HIPSS) can mitigate the error.

  14. Automated quantitative measurements and associated error covariances for planetary image analysis

    NASA Astrophysics Data System (ADS)

    Tar, P. D.; Thacker, N. A.; Gilmour, J. D.; Jones, M. A.

    2015-07-01

    This paper presents a flexible approach for extracting measurements from planetary images based upon the newly developed linear Poisson models technique. The approach has the ability to learn surface textures and then estimate the quantity of terrains exhibiting similar textures in new images. This approach is suitable for the estimation of dune field coverage or other repeating structures. Whilst other approaches exist, this method is unique for its incorporation of a comprehensive error theory, which includes contributions to uncertainty arising from training and subsequent use. The error theory is capable of producing measurement error covariances, which are essential for the scientific interpretation of measurements, i.e. for the plotting of error bars. In order to apply linear Poisson models, we demonstrate how terrains can be described using histograms created using a 'Poisson blob' image representation for capturing texture information. The validity of the method is corroborated using Monte Carlo simulations. The potential of the method is then demonstrated using terrain images created from bootstrap re-sampling of Martian HiRISE data.

  15. Measurement error in two-stage analyses, with application to air pollution epidemiology.

    PubMed

    Szpiro, Adam A; Paciorek, Christopher J

    2013-12-01

    Public health researchers often estimate health effects of exposures (e.g., pollution, diet, lifestyle) that cannot be directly measured for study subjects. A common strategy in environmental epidemiology is to use a first-stage (exposure) model to estimate the exposure based on covariates and/or spatio-temporal proximity and to use predictions from the exposure model as the covariate of interest in the second-stage (health) model. This induces a complex form of measurement error. We propose an analytical framework and methodology that is robust to misspecification of the first-stage model and provides valid inference for the second-stage model parameter of interest. We decompose the measurement error into components analogous to classical and Berkson error and characterize properties of the estimator in the second-stage model if the first-stage model predictions are plugged in without correction. Specifically, we derive conditions for compatibility between the first- and second-stage models that guarantee consistency (and have direct and important real-world design implications), and we derive an asymptotic estimate of finite-sample bias when the compatibility conditions are satisfied. We propose a methodology that (1) corrects for finite-sample bias and (2) correctly estimates standard errors. We demonstrate the utility of our methodology in simulations and an example from air pollution epidemiology.
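
    The two-stage structure can be sketched in a few lines. The Python simulation below (scikit-learn, all data invented) shows the plug-in strategy: regress measured exposure on covariates at monitor locations, predict exposure for study subjects, and fit the health model on the predictions. Consistent with the decomposition described above, the smoother-than-truth predictions behave like Berkson error, so the point estimate is approximately unbiased while the naive standard errors are not.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(1)
      beta_w = np.array([1.0, 0.5, -0.3])               # invented exposure model

      W_mon = rng.normal(size=(50, 3))                  # covariates at monitors
      x_mon = W_mon @ beta_w + rng.normal(0, 0.5, 50)   # measured exposures
      stage1 = LinearRegression().fit(W_mon, x_mon)     # first-stage (exposure) model

      W_sub = rng.normal(size=(500, 3))                 # covariates for subjects
      x_true = W_sub @ beta_w + rng.normal(0, 0.5, 500)
      y = 2.0 * x_true + rng.normal(0, 1.0, 500)        # health model, true beta = 2

      x_hat = stage1.predict(W_sub)                     # plug-in exposure predictions
      stage2 = LinearRegression().fit(x_hat.reshape(-1, 1), y)
      print("plug-in estimate of beta:", stage2.coef_[0])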

  16. Markov chain beam randomization: a study of the impact of PLANCK beam measurement errors on cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Rocha, G.; Pagano, L.; Górski, K. M.; Huffenberger, K. M.; Lawrence, C. R.; Lange, A. E.

    2010-04-01

    We introduce a new method to propagate uncertainties in the beam shapes used to measure the cosmic microwave background to cosmological parameters determined from those measurements. The method, called Markov chain beam randomization (MCBR), randomly samples from a set of templates or functions that describe the beam uncertainties. The method is much faster than direct numerical integration over systematic “nuisance” parameters, and is not restricted to simple, idealized cases as is analytic marginalization. It does not assume the data are normally distributed, and does not require Gaussian priors on the specific systematic uncertainties. We show that MCBR properly accounts for and provides the marginalized errors of the parameters. The method can be generalized and used to propagate any systematic uncertainties for which a set of templates is available. We apply the method to the Planck satellite, and consider future experiments. Beam measurement errors should have a small effect on cosmological parameters as long as the beam fitting is performed after removal of 1/f noise.
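
    Conceptually, MCBR amounts to drawing a random beam template at each step of an ordinary Metropolis chain, so the posterior is automatically marginalized over beam uncertainty. The Python sketch below is a schematic rendering under that reading; loglike, templates, and propose are all placeholders, not the authors' code.

      import numpy as np

      rng = np.random.default_rng(2)

      def mcbr_step(theta, loglike, templates, propose):
          """One Metropolis step with a randomly drawn beam template."""
          beam = templates[rng.integers(len(templates))]  # sample beam uncertainty
          theta_new = propose(theta)
          log_ratio = loglike(theta_new, beam) - loglike(theta, beam)
          return theta_new if np.log(rng.random()) < log_ratio else theta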

  17. Analysis of vibration induced error in turbulence velocity measurements from an aircraft wing tip boom

    NASA Technical Reports Server (NTRS)

    Akkari, S. H.; Frost, W.

    1982-01-01

    The effect of the rolling motion of a wing on the magnitude of the error induced by wing vibration when measuring atmospheric turbulence with a wind probe mounted on the wing tip was investigated. The wing considered had characteristics similar to those of a B-57 Canberra aircraft, and von Kármán's cross spectrum function was used to estimate the cross-correlation of atmospheric turbulence. Although the error calculated was found to be less than that calculated when only elastic bending and vertical motions of the wing are considered, it is still relatively large in the frequency range close to the natural frequencies of the wing. Therefore, it is concluded that accelerometers mounted on the wing tip are needed to correct for this error, or the atmospheric velocity data must be appropriately filtered.

  18. A method of treating the non-grey error in total emittance measurements

    NASA Technical Reports Server (NTRS)

    Heaney, J. B.; Henninger, J. H.

    1971-01-01

    In techniques for the rapid determination of total emittance, the sample is generally exposed to surroundings that are at a different temperature than the sample's surface. When the infrared spectral reflectance of the surface is spectrally selective, these techniques introduce an error into the total emittance values. Surfaces of aluminum overcoated with oxides of various thicknesses fall into this class. Because they are often used as temperature control coatings on satellites, their emittances must be accurately known. The magnitude of the error was calculated for Alzak and silicon oxide-coated aluminum and was shown to be dependent on the thickness of the oxide coating. The results demonstrate that, because the magnitude of the error is thickness-dependent, it is generally impossible or impractical to eliminate it by calibrating the measuring device.

  19. Direct measurement of the poliovirus RNA polymerase error frequency in vitro

    SciTech Connect

    Ward, C.D.; Stokes, M.A.M.; Flanegan, J.B. )

    1988-02-01

    The fidelity of RNA replication by the poliovirus-RNA-dependent RNA polymerase was examined by copying homopolymeric RNA templates in vitro. The poliovirus RNA polymerase was extensively purified and used to copy poly(A), poly(C), or poly(I) templates with equimolar concentrations of noncomplementary and complementary ribonucleotides. The error frequency was expressed as the amount of a noncomplementary nucleotide incorporated divided by the total amount of complementary and noncomplementary nucleotide incorporated. The polymerase error frequencies were very high, depending on the specific reaction conditions. The activity of the polymerase on poly(U) and poly(G) was too low to measure error frequencies on these templates. A fivefold increase in the error frequency was observed when the reaction conditions were changed from 3.0 mM Mg{sup 2+} (pH 7.0) to 7.0 mM Mg{sup 2+} (pH 8.0). This increase in the error frequency correlates with an eightfold increase in the elongation rate that was observed under the same conditions in a previous study.

  20. Study of flow rate induced measurement error in flow-through nano-hole plasmonic sensor

    PubMed Central

    Tu, Long; Huang, Liang; Wang, Tianyi; Wang, Wenhui

    2015-01-01

    Flow-through gold film perforated with periodically arrayed sub-wavelength nano-holes can cause extraordinary optical transmission (EOT), which has recently emerged as a label-free surface plasmon resonance sensor in biochemical detection by measuring the transmission spectral shift. This paper describes a systematic study of the effect of the microfluidic field on the spectrum of EOT associated with the porous gold film. To detect biochemical molecules, the sub-micron-thick film is free-standing in a microfluidic field and thus subject to hydrodynamic deformation. The film deformation alone may cause a spectral shift that acts as measurement error, which is superimposed on the real spectral shift signal associated with the molecules. However, this flow-induced measurement error has long been overlooked in the field and needs to be identified in order to improve the measurement accuracy. Therefore, we have conducted simulation and analytic analysis to investigate how the microfluidic flow rate affects the EOT spectrum and verified the effect through experiment with a sandwiched device combining an Au/Cr/Si3N4 nano-hole film and polydimethylsiloxane microchannels. We found significant spectral blue shift associated with even small flow rates, for example, 12.60 nm for 4.2 μl/min. This measurement error corresponds to 90 times the optical resolution of the current state-of-the-art commercially available spectrometer or 8400 times the limit of detection. This severe measurement error suggests that attention should be paid to the microfluidic parameter settings for EOT-based flow-through nano-hole sensors and that an appropriate scheme should be adopted to improve the measurement accuracy. PMID:26649131

  1. Long-term continuous acoustical suspended-sediment measurements in rivers - Theory, application, bias, and error

    USGS Publications Warehouse

    Topping, David J.; Wright, Scott A.

    2016-05-04

    …these sites. In addition, detailed, step-by-step procedures are presented for the general river application of the method. Quantification of errors in sediment-transport measurements made using this acoustical method is essential if the measurements are to be used effectively, for example, to evaluate uncertainty in long-term sediment loads and budgets. Several types of error analyses are presented to evaluate (1) the stability of acoustical calibrations over time, (2) the effect of neglecting backscatter from silt and clay, (3) the bias arising from changes in sand grain size, (4) the time-varying error in the method, and (5) the influence of nonrandom processes on error. Results indicate that (1) acoustical calibrations can be stable for long durations (multiple years), (2) neglecting backscatter from silt and clay can result in unacceptably high bias, (3) two frequencies are likely required to obtain sand-concentration measurements that are unbiased by changes in grain size, depending on site-specific conditions and acoustic frequency, (4) relative errors in silt-and-clay- and sand-concentration measurements decrease substantially as concentration increases, and (5) nonrandom errors may arise from slow changes in the spatial structure of suspended sediment that affect the relations between concentration in the acoustically ensonified part of the cross section and concentration in the entire river cross section. Taken together, the error analyses indicate that the two-frequency method produces unbiased measurements of suspended-silt-and-clay and sand concentration, with errors that are similar to, or larger than, those associated with conventional sampling methods.

  2. An Assessment of Errors and Their Reduction in Terrestrial Laser Scanner Measurements in Marmorean Surfaces

    NASA Astrophysics Data System (ADS)

    Garcia-Fernandez, Jorge

    2016-03-01

    The need for accurate documentation for the preservation of cultural heritage has prompted the use of the terrestrial laser scanner (TLS) in this discipline. Its study in the heritage context has focused on opaque surfaces with Lambertian reflectance, while translucent and anisotropic materials remain a major challenge. The use of TLS on such materials is subject to significant measurement distortion due to their optical properties under laser stimulation. This distortion makes range measurements unsuitable for digital modelling in a wide range of cases. The purpose of this paper is to illustrate and discuss these deficiencies and the resulting errors in the documentation of marmorean surfaces using TLS based on time-of-flight and phase-shift. Also proposed in this paper is the reduction of error in depth measurement by adjustment of the incident laser beam. The analysis is conducted by controlled experiments.

  3. Made to Measure: College Leaders Come Together to Strengthen Institutional Accountability

    ERIC Educational Resources Information Center

    Boerner, Heather

    2015-01-01

    Since the American Association of Community Colleges (AACC) launched the Voluntary Framework of Accountability (VFA) in 2011, accountability measures have sprung up in just about every corner of higher education. There is the Integrated Postsecondary Education Data System (IPEDS), the National Governors Association's Complete to Compete program,…

  4. Measuring Resources in Education: From Accounting to the Resource Cost Model Approach. Working Paper Series.

    ERIC Educational Resources Information Center

    Chambers, Jay G.

    This report describes two alternative approaches to measuring resources in K-12 education. One approach relies heavily on traditional accounting data, whereas the other draws on detailed information about the jobs and assignments of individual school personnel. It outlines the differences between accounting and economics and discusses how each…

  5. Measuring the Alignment between States' Finance and Accountability Policies: The Opportunity Gap

    ERIC Educational Resources Information Center

    Della Sala, Matthew R.; Knoeppel, Robert C.

    2015-01-01

    The research described in this paper expands on attempts to conceptualize, measure, and evaluate the degree to which states have aligned their finance systems with their respective accountability policies. State education finance and accountability policies serve as levers to provide equal educational opportunities for all students--scholars have…

  6. Enhancing the Material Control & Accounting Measurement System at the State Scientific Center of the Russian Federation - Institute for Physics and Power Engineering named after A.I. Leypunsky

    SciTech Connect

    Scherer, Carolynn P.; Bezhunov, Gennady M.; Bogdanov, Sergey A.; Gorbachev, Vyacheslav M.; Ryazanov, Boris G.; Talanov, Vladimir V.

    2012-07-11

    The nuclear material control and accounting (NMCA) system is being improved in cooperation with US national laboratories. Standard reference materials (RMs) and measurement techniques certified at the IPPE level are required for instrument calibration, verification measurements of parameters of items and materials, measurement error estimation, and quality control measurements. We present the main results of developing nuclear RMs for two uranium strata and of certifying three measurement techniques (MTs) for the U-235 mass fraction in uranium and the U-235 mass in items. We also present the results of developing measurement techniques for Pu-239 in PuO₂.

  7. Invited Review Article: Error and uncertainty in Raman thermal conductivity measurements

    NASA Astrophysics Data System (ADS)

    Beechem, Thomas; Yates, Luke; Graham, Samuel

    2015-04-01

    Error and uncertainty in Raman thermal conductivity measurements are investigated via finite element based numerical simulation of two geometries often employed—Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter—termed the Raman stress factor—is derived to identify when stress effects will induce large levels of error. Taken together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.

  8. Influence of sky radiance measurement errors on inversion-retrieved aerosol properties

    SciTech Connect

    Torres, B.; Toledano, C.; Cachorro, V. E.; Bennouna, Y. S.; Fuertes, D.; Gonzalez, R.; Frutos, A. M. de; Berjon, A. J.; Dubovik, O.; Goloub, P.; Podvin, T.; Blarel, L.

    2013-05-10

    Remote sensing of the atmospheric aerosol is a well-established technique that is currently used for routine monitoring of this atmospheric component, both from the ground and from satellites. The AERONET program, initiated in the 1990s, is the most extensive network, and its data are currently used by a wide community of users for aerosol characterization, satellite and model validation, and synergetic use with other instrumentation (lidar, in-situ, etc.). Aerosol properties are derived within the network from measurements made by ground-based Sun-sky scanning radiometers. Sky radiances are acquired in two geometries: almucantar and principal plane. Discrepancies in the products obtained with the two geometries have been observed, and the main aim of this work is to determine whether they can be justified by measurement errors. Three systematic errors have been analyzed in order to quantify their effects on the inversion-derived aerosol properties: calibration, pointing accuracy, and finite field of view. Simulations have shown that typical uncertainties in the analyzed quantities (5% in calibration, 0.2° in pointing, and a 1.2° field of view) lead to errors in the retrieved parameters that vary depending on the aerosol type and geometry. While calibration and pointing errors have a relevant impact on the products, the finite field of view does not produce notable differences.

  9. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
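
    The propagation-of-error reasoning above can be written out directly: the variance of a balance is the weighted quadratic form over the covariance matrix of its terms. The numbers in the Python sketch below are invented; only the dominance of the body-mass variance mirrors the finding reported in the abstract.

      import numpy as np

      # Terms: intake, urine, evaporation, fecal, body-mass change (invented variances)
      cov = np.diag([4.0, 1.0, 2.0, 0.5, 25.0])
      cov[0, 4] = cov[4, 0] = 0.8                  # one illustrative covariance
      w = np.array([1.0, -1.0, -1.0, -1.0, 1.0])   # signs in the balance equation

      total_var = w @ cov @ w                      # includes covariance contributions
      print("balance SD:", np.sqrt(total_var))     # dominated by the body-mass term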

  10. 78 FR 48075 - Western Pacific Fisheries; 2013 Annual Catch Limits and Accountability Measures; Correcting...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-07

    ... Catch Limits and Accountability Measures; Correcting Amendment AGENCY: National Marine Fisheries Service... limit specifications for western Pacific fisheries that were published in the Federal Register on March... Pacific Fishery Management Council (Council) recommended annual catch limits for western Pacific...

  11. Error correction algorithm for high accuracy bio-impedance measurement in wearable healthcare applications.

    PubMed

    Kubendran, Rajkumar; Lee, Seulki; Mitra, Srinjoy; Yazicioglu, Refet Firat

    2014-04-01

    Implantable and ambulatory measurement of physiological signals such as bio-impedance using miniature biomedical devices requires a careful tradeoff between a limited power budget, measurement accuracy, and complexity of implementation. This paper addresses this tradeoff through an extensive analysis of different stimulation and demodulation techniques for accurate bio-impedance measurement. Three cases are considered for rigorous analysis of a generic impedance model, with multiple poles, which is stimulated using a square/sinusoidal current and demodulated using a square/sinusoidal clock. For each case, the error in determining pole parameters (resistance and capacitance) is derived and compared. An error correction algorithm is proposed for square wave demodulation which reduces the peak estimation error from 9.3% to 1.3% for a simple tissue model. Simulation results in MATLAB using ideal RC values, measurements using ideal components for a single-pole model, and readings from a saline phantom solution (primarily resistive) quantify the accuracy for single-pole and two-pole RC networks. A figure of merit is derived based on the ability to accurately resolve multiple poles in an unknown impedance with minimal measurement points per decade, for a given frequency range and supply current budget. This analysis is used to arrive at an optimal tradeoff between accuracy and power. Results indicate that the algorithm is generic and can be used for any application that involves resolving the poles of an unknown impedance. It can be implemented as a post-processing technique for error correction or even incorporated into wearable signal monitoring ICs.

  12. An analysis of temperature-induced errors for an ultrasound distance measuring system. M. S. Thesis

    NASA Technical Reports Server (NTRS)

    Wenger, David Paul

    1991-01-01

    The presentation of research is provided in the following five chapters. Chapter 2 presents the necessary background information and definitions for general work with ultrasound and acoustics. It also discusses the basis for errors in the slant range measurements. Chapter 3 presents a method of problem solution and an analysis of the sensitivity of the equations to slant range measurement errors. It also presents various methods by which the error in the slant range measurements can be reduced to improve overall measurement accuracy. Chapter 4 provides a description of a type of experiment used to test the analytical solution and provides a discussion of its results. Chapter 5 discusses the setup of a prototype collision avoidance system, discusses its accuracy, and demonstrates various methods of improving the accuracy along with the improvements' ramifications. Finally, Chapter 6 provides a summary of the work and a discussion of conclusions drawn from it. Additionally, suggestions for further research are made to improve upon what has been presented here.

  13. Correction for dynamic bias error in transmission measurements of void fraction

    NASA Astrophysics Data System (ADS)

    Andersson, P.; Sundén, E. Andersson; Svärd, S. Jacobsson; Sjöstrand, H.

    2012-12-01

    Dynamic bias errors occur in transmission measurements, such as X-ray, gamma, or neutron radiography or tomography, when the properties of the object are not stationary in time and its average properties are assessed. The nonlinear measurement response to changes in transmission within the time scale of the measurement implies a bias, which can be difficult to correct for. A typical example is the tomographic or radiographic mapping of void content in dynamic two-phase flow systems. In this work, the dynamic bias error is described and a method to make a first-order correction is derived. A prerequisite for this method is variance estimates of the system dynamics, which can be obtained using high-speed, time-resolved data acquisition. However, in the absence of such acquisition, a priori knowledge might be used to substitute for the time-resolved data. Using synthetic data, a void fraction measurement case study has been simulated to demonstrate the performance of the suggested method. The transmission length of the radiation in the object under study and the type of fluctuation of the void fraction have been varied. Significant decreases in the dynamic bias error were achieved at the expense of marginal decreases in precision.
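
    To make the mechanism concrete, assume a Beer-Lambert response T = exp(-mu*x): time-averaging T while x(t) fluctuates biases the naive estimate x = -ln(mean T)/mu, and a second-order Taylor expansion lets the bias be backed out from a variance estimate. The Python sketch below is a minimal rendering of the first-order idea, not the authors' exact formulation.

      import numpy as np

      def corrected_mean_x(T_mean: float, mu: float, var_x: float) -> float:
          """mean T ~ exp(-mu*xbar) * (1 + mu**2 * var_x / 2), so the naive
          path estimate can be corrected using an estimate of var_x."""
          x_naive = -np.log(T_mean) / mu
          return x_naive + np.log1p(0.5 * mu ** 2 * var_x) / mu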

  14. Measure for Measure: How Proficiency-Based Accountability Systems Affect Inequality in Academic Achievement

    ERIC Educational Resources Information Center

    Jennings, Jennifer; Sohn, Heeju

    2014-01-01

    How do proficiency-based accountability systems affect inequality in academic achievement? This article reconciles mixed findings in the literature by demonstrating that three factors jointly determine accountability's impact. First, by analyzing student-level data from a large urban school district, we find that when educators face accountability…

  15. On responder analyses when a continuous variable is dichotomized and measurement error is present.

    PubMed

    Kunz, Michael

    2011-02-01

    In clinical studies results are often reported as proportions of responders, i.e. the proportion of subjects who fulfill a certain response criterion is reported, although the underlying variable of interest is continuous. In this paper, we consider the situation where a subject is defined as a responder if the (error-free) continuous measurements post-treatment are below a certain fraction of (error-free) continuous measurements obtained pre-treatment. Focus is on the one-sample case, but an extension to the two-sample case is also presented. The bias of different estimates for the proportion of responders is derived and compared. In addition, an asymptotically unbiased ML-type estimate for the proportion of responders is presented. The results are illustrated using data obtained in a clinical study investigating pre-menstrual dysphoric disorder (PMDD).
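
    A small Monte Carlo makes the bias tangible. In the Python sketch below (all parameters invented), a subject is a true responder when the error-free post-treatment value falls below half the error-free pre-treatment value, while the observed status is judged from error-prone measurements.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 100_000
      pre = rng.normal(20.0, 4.0, n)                 # error-free pre-treatment
      post = pre * rng.uniform(0.2, 0.9, n)          # error-free post-treatment
      true_prop = np.mean(post < 0.5 * pre)

      sd_err = 2.0                                   # measurement error SD
      obs_prop = np.mean(post + rng.normal(0, sd_err, n)
                         < 0.5 * (pre + rng.normal(0, sd_err, n)))
      print(f"true proportion {true_prop:.3f}, observed {obs_prop:.3f}")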

  16. Measurement error analysis of the 3D four-wheel aligner

    NASA Astrophysics Data System (ADS)

    Zhao, Qiancheng; Yang, Tianlong; Huang, Dongzhao; Ding, Xun

    2013-10-01

    Four-wheel positioning parameters have significant effects on the maneuverability, safety, and energy efficiency of automobiles. With this in mind, the error factors of the 3D four-wheel aligner, which arise in extracting image feature points, calibrating the internal and external parameters of the cameras, calculating positional parameters, and measuring target pose, are analyzed based on the structure and measurement principle of the 3D four-wheel aligner and on the major positioning parameters: toe-in and camber of the four wheels, kingpin inclination, and caster. Technical solutions are then proposed to reduce these error factors, and on this basis a new type of aligner has been developed and marketed; it is highly rated by customers because its technical indicators meet requirements well.

  17. A high-accuracy roundness measurement for cylindrical components by a morphological filter considering eccentricity, probe offset, tip head radius and tilt error

    NASA Astrophysics Data System (ADS)

    Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Zhou, Tong; Kuang, Ye

    2016-08-01

    A morphological filter is proposed to obtain a high-accuracy roundness measurement based on the four-parameter roundness measurement model, which takes into account eccentricity, probe offset, probe tip head radius and tilt error. This paper analyses the sample angle deviations caused by the four systematic errors to design a morphological filter based on the distribution of the sample angle. The effectiveness of the proposed method is verified through simulations and experiments performed with a roundness measuring machine. Compared to the morphological filter with the uniform sample angle, the accuracy of the roundness measurement can be increased by approximately 0.09 μm using the morphological filter with a non-uniform sample angle based on the four-parameter roundness measurement model, when eccentricity is above 16 μm, probe offset is approximately 1000 μm, tilt error is approximately 1″, the probe tip head radius is 1 mm and the cylindrical component radius is approximately 37 mm. The accuracy and reliability of roundness measurements are improved by using the proposed method for cylindrical components with a small radius, especially if the eccentricity and probe offset are large, and the tilt error and probe tip head radius are small. The proposed morphological filter method can be used for precision and ultra-precision roundness measurements, especially for functional assessments of roundness profiles.

  18. Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances.

    PubMed

    Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip

    2015-08-06

    In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. A standard deviation of the measured distance is the accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5-50 m) because it is generally known that this accuracy cannot be increased by simply repeating the measurement because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments' results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%.

  19. Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances

    PubMed Central

    Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip

    2015-01-01

    In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. A standard deviation of the measured distance is the accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5–50 m) because it is generally known that this accuracy cannot be increased by simply repeating the measurement because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments’ results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%. PMID:26258777

  20. Mapping of error cells in clinical measure to symmetric power space.

    PubMed

    Abelman, H; Abelman, S

    2007-09-01

    During the refraction procedure, the power of the nearest equivalent sphere lens, known as the scalar power, is conserved within upper and lower bounds in the sphere (and cylinder) lens powers. Bounds are brought closer together while keeping the circle of least confusion on the retina. The sphere and cylinder powers and changes in these powers are thus dependent. Changes are depicted in the cylinder-sphere plane by error cells with one pair of parallel sides of negative gradient and the other pair aligned with the graph axis of cylinder power. Scalar power constitutes a vector space, is a meaningful ophthalmic quantity, and is represented by the semi-trace of the dioptric power matrix. The purpose of this article is to map to error cells for the following: coordinates of the dioptric power matrix, its principal powers and meridians, and its entries from error cells surrounding powers in sphere, cylinder, and axis. Error cells in clinical measure for conserved scalar power now contain more compensatory lens powers. Such cells and their respective mappings in terms of most scientific and alternate clinical quantities now map consistently not only to the cells from which they originate but also to each other.

  1. Error characterization in iQuam SSTs using triple collocations with satellite measurements

    NASA Astrophysics Data System (ADS)

    Xu, Feng; Ignatov, Alexander

    2016-10-01

    Various types of in situ sea surface temperature (SST) measurements have dominated during different periods of the satellite era. Their corresponding errors should be characterized to curtail the nonuniformities in calibration and validation of reprocessed historical satellite SST data. SSTs from several major in situ platform types reported in the NOAA in situ Quality Monitor (iQuam) system have been collocated with NOAA-17 Advanced Very High Resolution Radiometer (AVHRR) and Envisat Advanced Along Track Scanning Radiometer (AATSR) satellite SSTs from 2003 to 2009, produced by the European Space Agency (ESA) Climate Change Initiative (CCI) program. The standard deviations of errors in iQuam in situ and nighttime satellite CCI SSTs estimated using triple-collocation analyses are 0.75 K for ships, 0.21-0.22 K for drifters and Argo floats, 0.17 K and 0.40 K for tropical and coastal moorings, 0.35-0.38 K for AVHRR, and 0.15-0.30 K for AATSR. The distribution of in situ and satellite errors in space and time is also analyzed, along with their single-sensor error distributions.
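
    The triple-collocation estimator itself is compact. Under the usual model x_i = T + e_i with mutually independent zero-mean errors, cross-products of the pairwise differences isolate each system's error variance; the Python sketch below shows the textbook form (ignoring the calibration and rescaling steps a production analysis would add).

      import numpy as np

      def triple_collocation_sd(x1, x2, x3):
          """E[(x1-x2)(x1-x3)] = var(e1) when errors are independent;
          cyclic permutations give the other two systems."""
          s1 = np.mean((x1 - x2) * (x1 - x3))
          s2 = np.mean((x2 - x1) * (x2 - x3))
          s3 = np.mean((x3 - x1) * (x3 - x2))
          return tuple(np.sqrt(max(s, 0.0)) for s in (s1, s2, s3))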

  2. First measurements of error fields on W7-X using flux surface mapping

    DOE PAGES

    Lazerson, Samuel A.; Otte, Matthias; Bozhenkov, Sergey; ...

    2016-08-03

    Error fields have been detected and quantified using the flux surface mapping diagnostic system on Wendelstein 7-X (W7-X). A low-field 'ɩ = 1/2' magnetic configuration (ɩ = ι/2π), sensitive to error fields, was developed in order to detect their presence using the flux surface mapping diagnostic. In this configuration, a vacuum flux surface with rotational transform of n/m = 1/2 is created at the mid-radius of the vacuum flux surfaces. If no error fields are present, a vanishingly small n/m = 5/10 island chain should be present. Modeling indicates that if an n = 1 perturbing field is applied by the trim coils, a large n/m = 1/2 island chain will be opened. This island chain is used to create a perturbation large enough to be imaged by the diagnostic. Phase and amplitude scans of the applied field allow the measurement of a small (~0.04 m) intrinsic island chain with a 130° phase relative to the first module of the W7-X experiment. Lastly, these error fields are determined to be small and easily correctable by the trim coil system.

  3. The effect of measurement error on the dose-response curve

    SciTech Connect

    Yoshimura, I. )

    1990-07-01

    In epidemiological studies for environmental risk assessment, doses are often observed with errors; however, these errors have received little attention in data analysis. This paper studies the effect of measurement errors on the observed dose-response curve. Under the assumptions of a monotone likelihood ratio for the errors and a monotone increasing dose-response curve, it is verified that the slope of the observed dose-response curve is likely to be gentler than the true one. The observed variance of the responses is also not as homogeneous as would be expected under error-free models. The estimation of parameters in a hockey-stick type dose-response curve with a threshold is considered along the lines of the maximum likelihood method for a functional relationship model. Numerical examples adaptable to the data in a 1986 study of the effect of air pollution conducted in Japan are also presented. The proposed model is shown to be suitable for the data in the example cited in this paper.
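
    The hockey-stick mean function referred to here is simple to state. The Python sketch below gives one common parameterization (baseline b, slope s, threshold t0), which is an assumption on our part rather than the paper's exact specification.

      import numpy as np

      def hockey_stick(dose, b, s, t0):
          """Flat at baseline b below threshold t0, linear with slope s above."""
          return b + s * np.maximum(dose - t0, 0.0)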

  4. Sideslip-induced static pressure errors in flight-test measurements

    NASA Technical Reports Server (NTRS)

    Parks, Edwin K.; Bach, Ralph E., Jr.; Tran, Duc

    1990-01-01

    During lateral flight-test maneuvers of a V/STOL research aircraft, large errors in static pressure were observed. An investigation of the data showed a strong correlation of the pressure record with variations in sideslip angle. The sensors for both measurements were located on a standard air-data nose boom. This paper describes an algorithm based on potential flow over a cylinder that was developed to correct the pressure record for sideslip-induced errors. In order to properly apply the correction algorithm, it was necessary to estimate and correct the lag error in the pressure system. The method developed for estimating pressure lag is based on the coupling of sideslip activity into the static ports and can be used as a standard flight-test procedure. The paper discusses the estimation procedure and presents the corrected static-pressure record for a typical lateral maneuver. It is shown that application of the correction algorithm effectively attenuates sideslip-induced errors.
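
    For intuition about the form such a correction can take: ideal potential flow past a cylinder gives a surface pressure coefficient Cp = 1 - 4 sin^2(theta), so a static port on a cylindrical boom reads an error that grows with sideslip angle. The Python sketch below is a generic illustration built on that textbook result; the constant k and the overall form are assumptions, not the paper's algorithm.

      import numpy as np

      def corrected_static_pressure(p_meas, q_dyn, beta_rad, k=4.0):
          """Subtract a sideslip-induced error modeled as -k*q*sin(beta)^2;
          k = 4 corresponds to the ideal-cylinder solution and would be
          replaced by a port-location calibration constant in practice."""
          cp_error = -k * np.sin(beta_rad) ** 2
          return p_meas - q_dyn * cp_error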

  5. First measurements of error fields on W7-X using flux surface mapping

    SciTech Connect

    Lazerson, Samuel A.; Otte, Matthias; Bozhenkov, Sergey; Biedermann, Christoph; Pedersen, Thomas Sunn

    2016-08-03

    Error fields have been detected and quantified using the flux surface mapping diagnostic system on Wendelstein 7-X (W7-X). A low-field 'ɩ = 1/2' magnetic configuration (ɩ = ι/2π), sensitive to error fields, was developed in order to detect their presence using the flux surface mapping diagnostic. In this configuration, a vacuum flux surface with rotational transform of n/m = 1/2 is created at the mid-radius of the vacuum flux surfaces. If no error fields are present, a vanishingly small n/m = 5/10 island chain should be present. Modeling indicates that if an n = 1 perturbing field is applied by the trim coils, a large n/m = 1/2 island chain will be opened. This island chain is used to create a perturbation large enough to be imaged by the diagnostic. Phase and amplitude scans of the applied field allow the measurement of a small (~0.04 m) intrinsic island chain with a 130° phase relative to the first module of the W7-X experiment. Lastly, these error fields are determined to be small and easily correctable by the trim coil system.

  6. Sensitivity of Force Specifications to the Errors in Measuring the Interface Force

    NASA Technical Reports Server (NTRS)

    Worth, Daniel

    1999-01-01

    Force-Limited Random Vibration Testing has been applied in the last several years at NASA/GSFC for various programs at the instrument and system level. Different techniques have been developed over the last few decades to estimate the dynamic forces that the test article under consideration will encounter in the operational environment. Some of these techniques are described in the handbook, NASA-HDBK-7004, and the monograph, NASA-RP-1403. A key element in the ability to perform force-limited testing is multi-component force gauges. This paper will show how some measurement and calibration errors in force gauges are compensated for when the force specification is calculated. The resulting notches in the acceleration spectrum, when a random vibration test is performed, are the same as the notches produced during an uncompensated test that has no measurement errors. The paper will also present the results of tests that were used to validate this compensation. Knowing that the force specification can compensate for some measurement errors allows tests to continue after force gauge failures or allows dummy gauges to be used in places that are inaccessible.

  7. Estimation of the sampling interval error for LED measurement with a goniophotometer

    NASA Astrophysics Data System (ADS)

    Zhao, Weiqiang; Liu, Hui; Liu, Jian

    2013-06-01

    When a goniophotometer is used to implement a total luminous flux measurement, an error arises from the sampling interval, especially for LED measurement. In this work, we use computer calculations to estimate the effect of the sampling interval on the measured total luminous flux for four typical kinds of LEDs, whose spatial distributions of luminous intensity are similar to those of the LEDs described in CIE publication 127. Four basic kinds of mathematical functions are selected to simulate the distribution curves, and both axially symmetric and non-axially symmetric LEDs are taken into account. Polar-angle sampling intervals of 0.5°, 1°, 2°, and 5° in one rotation are considered for the axially symmetric type, and azimuth-angle sampling intervals of 18°, 15°, 12°, 10°, and 5° for the non-axially symmetric type. The error is strongly related to the spatial distribution; nevertheless, for common LED light sources the calculations show that a polar-angle sampling interval of 2° and an azimuth-angle sampling interval of 15° are recommended. The systematic error due to the sampling interval of a goniophotometer can then be controlled at the level of 0.3%. For higher precision, a polar-angle sampling interval of 1° and an azimuth-angle sampling interval of 10° should be used.
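
    The effect of the sampling interval can be reproduced by discretizing the flux integral Phi = ∫∫ I(theta, phi) sin(theta) dtheta dphi on the goniophotometer grid. The Python sketch below uses a made-up Lambertian intensity pattern rather than one of the CIE 127 distributions.

      import numpy as np

      def total_flux(I, d_theta_deg, d_phi_deg):
          th = np.deg2rad(np.arange(0.0, 180.0, d_theta_deg))
          ph = np.deg2rad(np.arange(0.0, 360.0, d_phi_deg))
          TH, PH = np.meshgrid(th, ph, indexing="ij")
          dth, dph = np.deg2rad(d_theta_deg), np.deg2rad(d_phi_deg)
          return np.sum(I(TH, PH) * np.sin(TH)) * dth * dph

      def lambertian(th, ph):
          return np.maximum(np.cos(th), 0.0)     # I = I0 * cos(theta), I0 = 1

      coarse = total_flux(lambertian, 5.0, 15.0)
      fine = total_flux(lambertian, 0.5, 5.0)
      print(f"relative sampling-interval error ~ {abs(coarse - fine) / fine:.3%}")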

  8. Analysis of Measurement Error and Estimator Shape in Three-Point Hydraulic Gradient Estimators

    NASA Astrophysics Data System (ADS)

    McKenna, S. A.; Wahi, A. K.

    2003-12-01

    Three spatially separated measurements of head provide a means of estimating the magnitude and orientation of the hydraulic gradient. Previous work with three-point estimators has focused on the effect of the size (area) of the three-point estimator and measurement error on the final estimates of the gradient magnitude and orientation in laboratory and field studies (Mizell, 1980; Silliman and Frost, 1995; Silliman and Mantz, 2000; Ruskauff and Rumbaugh, 1996). However, a systematic analysis of the combined effects of measurement error, estimator shape and estimator orientation relative to the gradient orientation has not previously been conducted. Monte Carlo simulation with an underlying assumption of a homogeneous transmissivity field is used to examine the effects of uncorrelated measurement error on a series of eleven different three-point estimators having the same size but different shapes as a function of the orientation of the true gradient. Results show that the variance in the estimate of both the magnitude and the orientation increase linearly with the increase in measurement error in agreement with the results of stochastic theory for estimators that are small relative to the correlation length of transmissivity (Mizell, 1980). Three-point estimator shapes with base to height ratios between 0.5 and 5.0 provide accurate estimates of magnitude and orientation across all orientations of the true gradient. As an example, these results are applied to data collected from a monitoring network of 25 wells at the WIPP site during two different time periods. The simulation results are used to reduce the set of all possible combinations of three wells to those combinations with acceptable measurement errors relative to the amount of head drop across the estimator and base to height ratios between 0.5 and 5.0. These limitations reduce the set of all possible well combinations by 98 percent and show that size alone as defined by triangle area is not a valid…
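
    The three-point estimator at the heart of this analysis is a plane fit h(x, y) = a + b*x + c*y through the three head measurements, with the gradient read off as (b, c). The Python sketch below uses invented coordinates and heads.

      import numpy as np

      def three_point_gradient(pts):
          """pts: three (x, y, head) tuples; returns gradient magnitude and
          the azimuth (degrees) of increasing head."""
          A = np.array([[1.0, x, y] for x, y, _ in pts])
          h = np.array([head for _, _, head in pts])
          _, b, c = np.linalg.solve(A, h)
          return np.hypot(b, c), np.degrees(np.arctan2(c, b))

      print(three_point_gradient([(0, 0, 10.0), (100, 0, 9.8), (0, 100, 9.9)]))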

  9. Measurement error analysis of three dimensional coordinates of tomatoes acquired using the binocular stereo vision

    NASA Astrophysics Data System (ADS)

    Xiang, Rong

    2014-09-01

    This study analyzes the measurement errors in the three-dimensional coordinates obtained by binocular stereo vision for tomatoes using three stereo matching methods (centroid-based matching, area-based matching, and combination matching) in order to improve the localization accuracy of the binocular stereo vision systems of tomato-harvesting robots. Centroid-based matching matches the centroids of tomato regions as feature points. Area-based matching is based on the gray-level similarity between the neighborhoods of the two pixels to be matched in the stereo images. Combination matching uses the rough disparity acquired through centroid-based matching as the center of the dynamic disparity range used in area-based matching. After stereo matching, the three-dimensional coordinates of the tomatoes were acquired using the triangle range finding principle. Test results based on 225 stereo images of 3 tomatoes captured at distances from 300 to 1000 mm showed that the measurement errors of the x coordinates were small and can meet the needs of harvesting robots. However, the measurement biases of the y coordinates and depth values were large, and the measurement variation of the depth values was also large. Therefore, the measurement biases of the y coordinates and depth values, as well as the measurement variation of the depth values, should be corrected in future research.
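
    The depth sensitivity seen in these results follows directly from the triangulation geometry: Z = f*B/d for a rectified rig, so dZ/dd = -f*B/d^2 and depth errors grow quadratically with range. The Python sketch below uses invented camera parameters.

      def depth_from_disparity(f_px, baseline_mm, disparity_px):
          """Z = f * B / d for a rectified stereo pair (triangle range finding)."""
          return f_px * baseline_mm / disparity_px

      Z = depth_from_disparity(f_px=800.0, baseline_mm=60.0, disparity_px=80.0)
      print(f"depth ~ {Z:.0f} mm")   # 600 mm; a 1 px disparity error shifts it to ~608 mm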

  10. Precision Measurements of the Cluster Red Sequence using an Error Corrected Gaussian Mixture Model

    SciTech Connect

    Hao, Jiangang; Koester, Benjamin P.; Mckay, Timothy A.; Rykoff, Eli S.; Rozo, Eduardo; Evrard, August; Annis, James; Becker, Matthew; Busha, Michael; Gerdes, David; Johnston, David E.

    2009-07-01

    The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurement of the slope and scatter of the red sequence are affected both by selection of red sequence galaxies and measurement errors. In this paper, we describe a new error corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy populations in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment. These precise measurements serve as an important observational check on simulations and mock galaxy catalogs. The observed trends in the slope and scatter of the red sequence ridgeline with redshift are clues to possible intrinsic evolution of the cluster red-sequence itself. Most importantly, the methods presented in this work lay the groundwork for further improvements in optically-based cluster cosmology.
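
    The core of the error correction can be expressed as a per-galaxy convolution of the intrinsic component width with the measurement error. The Python sketch below shows the likelihood of a single Gaussian component under that standard construction; this is our reading of "error corrected", not the authors' code.

      import numpy as np

      def log_likelihood(c, sigma_meas, mu, sigma_int):
          """Each color c[i] has its own error sigma_meas[i]; the component
          width is the intrinsic scatter convolved with that error, so the
          fit recovers the intrinsic red-sequence scatter."""
          var = sigma_int ** 2 + sigma_meas ** 2
          return np.sum(-0.5 * ((c - mu) ** 2 / var + np.log(2 * np.pi * var)))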

  11. Pressure Measurements Using an Airborne Differential Absorption Lidar. Part 1; Analysis of the Systematic Error Sources

    NASA Technical Reports Server (NTRS)

    Flamant, Cyrille N.; Schwemmer, Geary K.; Korb, C. Laurence; Evans, Keith D.; Palm, Stephen P.

    1999-01-01

    Remote airborne measurements of the vertical and horizontal structure of the atmospheric pressure field in the lower troposphere are made with an oxygen differential absorption lidar (DIAL). A detailed analysis of this measurement technique is provided, including corrections for imprecise knowledge of the detector background level, the oxygen absorption line parameters, and variations in the laser output energy. In addition, we analyze other possible sources of systematic error, including spectral effects related to aerosol and molecular scattering, interference by rotational Raman scattering, and interference by isotopic oxygen lines.

  12. Digitally modulated bit error rate measurement system for microwave component evaluation

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary Jo W.; Budinger, James M.

    1989-01-01

    The NASA Lewis Research Center has developed a unique capability for evaluation of the microwave components of a digital communication system. This digitally modulated bit-error-rate (BER) measurement system (DMBERMS) features a continuous data digital BER test set, a data processor, a serial minimum shift keying (SMSK) modem, noise generation, and computer automation. Application of the DMBERMS has provided useful information for the evaluation of existing microwave components and of design goals for future components. The design and applications of this system for digitally modulated BER measurements are discussed.
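
    The DMBERMS hardware itself cannot be reproduced here, but the principle of a continuous-data BER test, comparing a known pseudorandom sequence against the demodulated output and counting disagreements, can be sketched with a simple baseband simulation. The BPSK-over-AWGN channel below is an illustrative stand-in for the SMSK modem and noise generation.

```python
import numpy as np

def measure_ber(n_bits=1_000_000, ebn0_db=6.0, seed=1):
    """Count bit errors for BPSK over AWGN as a software stand-in for a
    hardware BER test set (illustrative only; not the DMBERMS)."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)          # known test sequence
    symbols = 1.0 - 2.0 * bits                 # 0 -> +1, 1 -> -1
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    noise = rng.normal(0.0, np.sqrt(1.0 / (2.0 * ebn0)), n_bits)
    received = symbols + noise
    decided = (received < 0).astype(int)       # hard decision
    errors = np.count_nonzero(decided != bits)
    return errors / n_bits

print(measure_ber())   # ~2.4e-3 at Eb/N0 = 6 dB for BPSK
```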

  13. Error analysis of DIAL measurements of ozone by a Shuttle excimer lidar

    NASA Technical Reports Server (NTRS)

    Uchino, Osamu; Mccormick, M. Patrick; Mcmaster, Leonard R.; Swissler, Thomas J.

    1986-01-01

    Attention is given to an error analysis of DIAL measurements of stratospheric ozone from the Space Shuttle. It is shown that a transmitter system consisting of a KrF excimer laser pumping gas cells of H2 or D2 producing output wavelengths in the near UV is useful for the measurement of ozone in a 15-50-km altitude range. It is noted that for increased levels of stratospheric aerosols experienced after violent volcanic eruptions, the relative uncertainties of ozone densities will be large in the region below about 24 km.
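
    The record does not reproduce the retrieval itself. In its standard range-resolved form (a textbook expression, not quoted from the paper), the DIAL ozone number density follows from the on-line and off-line returns P_on and P_off as

```latex
n_{\mathrm{O}_3}(R) = \frac{1}{2\,\Delta\sigma\,\Delta R}
\ln\!\left[\frac{P_{\mathrm{off}}(R+\Delta R)\,P_{\mathrm{on}}(R)}
                {P_{\mathrm{on}}(R+\Delta R)\,P_{\mathrm{off}}(R)}\right]
```

    where Δσ is the difference between the on-line and off-line absorption cross sections and ΔR is the range bin. Gradients in aerosol backscatter perturb the signal ratio, which is why the relative uncertainties quoted above grow below about 24 km when stratospheric aerosol loading is high.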

  14. Error motion compensating tracking interferometer for the position measurement of objects with rotational degree of freedom

    NASA Astrophysics Data System (ADS)

    Holler, Mirko; Raabe, Jörg

    2015-05-01

    The nonaxial interferometric position measurement of rotating objects can be performed by imaging the laser beam of the interferometer onto a rotating mirror, which can be a sphere or a cylinder. This, however, requires the rotating mirror to be centered on the axis of rotation, as a wobble would result in loss of the interference signal. We present a tracking-type interferometer that performs such a measurement in the general case, where the rotating mirror may wobble on the axis of rotation, or even where the axis of rotation itself may be translating in space. Beyond tracking, that is, measuring and following the position of the rotating mirror, the interferometric measurement errors induced by the tracking motion of the interferometer itself are optically compensated, preserving nanometric measurement accuracy. As an example, we show the application of this interferometer in a scanning x-ray tomography instrument.

  15. A Comparison of Three Methods for Computing Scale Score Conditional Standard Errors of Measurement. ACT Research Report Series, 2013 (7)

    ERIC Educational Resources Information Center

    Woodruff, David; Traynor, Anne; Cui, Zhongmin; Fang, Yu

    2013-01-01

    Professional standards for educational testing recommend that both the overall standard error of measurement and the conditional standard error of measurement (CSEM) be computed on the score scale used to report scores to examinees. Several methods have been developed to compute scale score CSEMs. This paper compares three methods, based on…
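
    The abstract is truncated before the three methods are named. As background only, one classical starting point for a raw-score CSEM, which scale-score methods then transform to the reporting scale, is Lord's binomial-error formula; the sketch below is an illustration of that background formula, not necessarily one of the three methods the report compares.

```python
import math

def lord_csem(raw_score, n_items):
    """Lord's binomial conditional SEM for raw score X on an n-item test:
    sqrt(X * (n - X) / (n - 1)). Background formula only; not necessarily
    one of the three scale-score methods compared in the report."""
    return math.sqrt(raw_score * (n_items - raw_score) / (n_items - 1))

print(lord_csem(30, 40))  # CSEM is largest for mid-range raw scores
```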

  16. Re-Assessing Poverty Dynamics and State Protections in Britain and the US: The Role of Measurement Error

    ERIC Educational Resources Information Center

    Worts, Diana; Sacker, Amanda; McDonough, Peggy

    2010-01-01

    This paper addresses a key methodological challenge in the modeling of individual poverty dynamics--the influence of measurement error. Taking the US and Britain as case studies and building on recent research that uses latent Markov models to reduce bias, we examine how measurement error can affect a range of important poverty estimates. Our data…

  17. Error Rates in Measuring Teacher and School Performance Based on Student Test Score Gains. NCEE 2010-4004

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2010-01-01

    This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…

  18. The effect of clock, media, and station location errors on Doppler measurement accuracy

    NASA Technical Reports Server (NTRS)

    Miller, J. K.

    1993-01-01

    Doppler tracking by the Deep Space Network (DSN) is the primary radio metric data type used by navigation to determine the orbit of a spacecraft. The accuracy normally attributed to orbits determined exclusively with Doppler data is about 0.5 microradians in geocentric angle. Recently, the Doppler measurement system has evolved to a high degree of precision primarily because of tracking at X-band frequencies (7.2 to 8.5 GHz). However, the orbit determination system has not been able to fully utilize this improved measurement accuracy because of calibration errors associated with transmission media, the location of tracking stations on the Earth's surface, the orientation of the Earth as an observing platform, and timekeeping. With the introduction of Global Positioning System (GPS) data, it may be possible to remove a significant error associated with the troposphere. In this article, the effect of various calibration errors associated with transmission media, Earth platform parameters, and clocks are examined. With the introduction of GPS calibrations, it is predicted that a Doppler tracking accuracy of 0.05 microradians is achievable.

  19. Mechanistically-informed damage detection using dynamic measurements: Extended constitutive relation error

    NASA Astrophysics Data System (ADS)

    Hu, X.; Prabhu, S.; Atamturktur, S.; Cogan, S.

    2017-02-01

    Model-based damage detection entails calibrating damage-indicative parameters in a physics-based computer model of an undamaged structural system against measurements collected from its damaged counterpart. The approach relies on the premise that the changes identified in the damage-indicative parameters during calibration reveal the structural damage in the system. In model-based damage detection, model calibration has traditionally been treated as a process that operates solely on the model output, without incorporating available knowledge of the underlying mechanistic behavior of the structural system. In this paper, the authors propose a novel approach to model-based damage detection by implementing the Extended Constitutive Relation Error (ECRE), a method developed for error localization in finite element models. The ECRE method was originally conceived to identify discrepancies between experimental measurements and model predictions for a structure in a given healthy state. Implementing ECRE for damage detection leads to the evaluation of a structure in varying health states and the determination of the discrepancy between model predictions and experiments due to damage. The authors develop an ECRE-based damage detection procedure in which the model error and structural damage are identified in two distinct steps, and they demonstrate the feasibility of the procedure in identifying the presence, location, and relative severity of damage on a scaled two-story steel frame for damage scenarios of varying type and severity.

  20. A mixture of hierarchical joint models for longitudinal data with heterogeneity, non-normality, missingness, and covariate measurement error.

    PubMed

    Huang, Yangxin; Yan, Chunning; Yin, Ping; Lu, Meixia

    2016-01-01

    Longitudinal data arise frequently in medical studies, and it is common practice to analyze such complex data with nonlinear mixed-effects (NLME) models. However, four issues may be critical in longitudinal data analysis. (i) A homogeneous-population assumption may be unrealistic, obscuring important features of between-subject and within-subject variation; (ii) a normality assumption for model errors may not always give robust and reliable results, particularly if the data exhibit skewness; (iii) responses may be missing, and the missingness may be nonignorable; and (iv) some covariates of interest may be measured with substantial error. When carrying out statistical inference in such settings, it is important to account for the effects of these data features; otherwise, erroneous or even misleading results may be produced. Inferential procedures can become dramatically complicated when these four data features arise together. In this article, a Bayesian joint modeling approach based on a finite mixture of NLME joint models with skew distributions is developed to study the simultaneous impact of these four data features, allowing estimates of both model parameters and class membership probabilities at the population and individual levels. A real data example is analyzed to demonstrate the proposed methodologies and to compare potential models with different distributional specifications.

  1. Evaluating Procedures for Reducing Measurement Error in Math Curriculum-Based Measurement Probes

    ERIC Educational Resources Information Center

    Methe, Scott A.; Briesch, Amy M.; Hulac, David

    2015-01-01

    At present, it is unclear whether math curriculum-based measurement (M-CBM) procedures provide a dependable measure of student progress in math computation because support for its technical properties is based largely upon a body of correlational research. Recent investigations into the dependability of M-CBM scores have found that evaluating…

  2. Indirect measurement of machine tool motion axis error with single laser tracker

    NASA Astrophysics Data System (ADS)

    Wu, Zhaoyong; Li, Liangliang; Du, Zhengchun

    2015-02-01

    For high-precision machining, convenient and accurate detection of the motion error of machine tools is significant. Among common detection methods such as the ball-bar method, the laser tracker approach has received the most attention. As a high-accuracy measurement device, the laser tracker is capable of long-distance and dynamic measurement, which adds considerable flexibility to the measurement process. However, existing methods are not satisfactory in measurement cost, operability, or applicability. A currently plausible method is the single-station, time-sharing method, but it needs a large working area all around the machine tool, making it unsuitable for machine tools surrounded by a protective cover. In this paper, a novel and convenient positioning-error measurement approach utilizing a single laser tracker is proposed, together with two corresponding mathematical models: a laser-tracker base-point-coordinate model and a target-mirror-coordinates model. An auxiliary apparatus on which the target mirrors are placed is also designed, and sensitivity analysis and Monte-Carlo simulation are conducted to optimize its dimensions. Based on the proposed method, an experiment using a single API TRACKER 3 assisted by the auxiliary apparatus was carried out, and a verification experiment using a traditional RENISHAW XL-80 interferometer was conducted under the same conditions for comparison. Both results reveal a marked increase in the Y-axis positioning error of the machine tool. Theoretical and experimental studies together verify the feasibility of this method, which offers more convenient operation and wider application across various kinds of machine tools.
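
    The record mentions sensitivity analysis and Monte-Carlo simulation for dimensioning the auxiliary apparatus. A generic version of that step, propagating assumed tracker range and angle noise into the scatter of a measured 3D point, might look as follows; all noise magnitudes and the nominal geometry are hypothetical, not taken from the paper.

```python
import numpy as np

def point_scatter(r=2000.0, az=0.3, el=0.2, n=10_000,
                  sigma_r=0.01, sigma_ang=5e-6, seed=2):
    """Monte Carlo propagation of laser-tracker noise (range in mm,
    angles in rad; sigma values are assumed) into the x/y/z scatter
    of one target-mirror position."""
    rng = np.random.default_rng(seed)
    rr = r + rng.normal(0, sigma_r, n)
    aa = az + rng.normal(0, sigma_ang, n)
    ee = el + rng.normal(0, sigma_ang, n)
    x = rr * np.cos(ee) * np.cos(aa)
    y = rr * np.cos(ee) * np.sin(aa)
    z = rr * np.sin(ee)
    return np.std(x), np.std(y), np.std(z)

print(point_scatter())  # per-axis standard deviations in mm
```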

  3. Isothermal calorimetry: impact of measurement errors on heat of reaction and kinetic calculations.

    PubMed

    Papadaki, Maria; Nawada, Hosadu P; Gao, Jun; Fergusson-Rees, Andrew; Smith, Michael

    2007-04-11

    Heat flow and power compensation calorimetry measure the power generated by a reaction via an energy balance over an appropriately designed isothermal reactor. However, the measurement of the power generated by a reaction is a relative measurement, and calibrations are used to eliminate the contribution of a number of unknown factors. In this work, the effects of errors in the measured temperature, in the electric power used in the calibrations, and in the heat transfer coefficient and baseline are assessed. It is shown that errors in all of the aforementioned quantities are reflected in the baseline and can have a very serious impact on the accuracy of the measurement. The influence of fluctuations in the ambient temperature has been evaluated, and a correction that reduces their impact has been implemented. The temperature of the dosed material is affected by heat losses if the reaction is performed at high temperature and low dosing rate. An experimental methodology is presented that provides a means of assessing the actual temperature of the dosed material. Depending on the reacting system, the heat of evaporation may be included in the baseline, especially if non-condensable gases are produced during the course of the reaction.
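
    The energy balance alluded to above is commonly written, in one standard form for an isothermal heat-flow reactor (the paper's exact formulation is not quoted in the record), as

```latex
\dot{q}_{\mathrm{rx}} = UA\,(T_r - T_j)
+ m\,c_p\,\frac{dT_r}{dt}
+ \dot{m}_{\mathrm{dos}}\,c_{p,\mathrm{dos}}\,(T_r - T_{\mathrm{dos}})
+ \dot{q}_{\mathrm{loss}} - \dot{q}_{\mathrm{base}}
```

    Every term on the right carries its own measurement error, so a bias in UA, in the calibration power, or in the dosing temperature T_dos folds directly into the baseline term, which is the mechanism the abstract describes.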

  4. Synchrotron radiation measurement of multiphase fluid saturations in porous media: Experimental technique and error analysis

    NASA Astrophysics Data System (ADS)

    Tuck, David M.; Bierck, Barnes R.; Jaffé, Peter R.

    1998-06-01

    Multiphase flow in porous media is an important research topic. In situ, nondestructive experimental methods for studying multiphase flow are important for improving our understanding and the theory. Rapid changes in fluid saturation, characteristic of immiscible displacement, are difficult to measure accurately using gamma rays due to practical restrictions on source strength. Our objective is to describe a synchrotron radiation technique for rapid, nondestructive saturation measurements of multiple fluids in porous media, and to present a precision and accuracy analysis of the technique. Synchrotron radiation provides a high-intensity, inherently collimated photon beam of tunable energy which can yield accurate measurements of fluid saturation in just one second. Measurements were obtained with a precision of ±0.01 or better for tetrachloroethylene (PCE) in a 2.5 cm thick glass-bead porous medium using a counting time of 1 s. The normal distribution was shown to provide acceptable confidence limits for PCE saturation changes. Sources of error include the heat load on the monochromator, periodic movement of the source beam, and errors in the stepping-motor positioning system. Hypodermic needles pushed into the medium to inject PCE changed the porosity within approximately ±1 mm of the injection point. Improved mass balance between the known and measured PCE injection volumes was obtained when appropriate corrections were applied to calibration values near the injection point.
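
    Saturations in such measurements follow from Beer-Lambert attenuation recorded at two photon energies. A minimal sketch of that inversion is given below; the attenuation coefficients, porosity, and thickness are placeholders, not the paper's calibration values.

```python
import numpy as np

def saturations(ln_ratio, mu, phi=0.38, d=2.5):
    """Solve Beer-Lambert attenuation at two photon energies for the
    saturations of two fluids in a porous slab (a sketch with assumed
    calibration values, not the paper's).

    ln_ratio : ln(I0/I) at each energy, shape (2,)
    mu       : linear attenuation coefficients (1/cm), shape (2, 2),
               mu[e, j] for energy e and fluid j (placeholder values)
    phi      : porosity, d : sample thickness in cm (assumed)
    """
    return np.linalg.solve(mu * phi * d, ln_ratio)

mu = np.array([[0.9, 0.2],    # hypothetical coefficients above/below an edge
               [0.3, 0.25]])
print(saturations(np.array([0.55, 0.20]), mu))  # S1, S2; remainder is phase 3
```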

  5. A study of GPS measurement errors due to noise and multipath interference for CGADS

    NASA Technical Reports Server (NTRS)

    Axelrad, Penina; MacDoran, Peter F.; Comp, Christopher J.

    1996-01-01

    This report describes a study performed by the Colorado Center for Astrodynamics Research (CCAR) on GPS measurement errors in the Codeless GPS Attitude Determination System (CGADS) due to noise and multipath interference. Preliminary simulation models of the CGADS receiver and orbital multipath are described. The standard FFT algorithm for processing the codeless data is described, and two alternative algorithms, an auto-regressive/least-squares (AR-LS) method and a combined adaptive notch filter/least-squares (ANF-ALS) method, are also presented. Effects of system noise, quantization, baseband frequency selection, and Doppler rates on the accuracy of the phase estimates with each of the processing methods are shown. Typical electrical phase errors for the AR-LS method are 0.2 degrees, compared to 0.3 and 0.5 degrees for the FFT and ANF-ALS algorithms, respectively. Doppler rate was found to have the largest effect on performance.
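
    The phase-estimation step that the three algorithms attack can be reproduced in miniature: estimate the electrical phase of a noisy baseband tone from an FFT bin. This is a simplified stand-in for the codeless processing; the tone frequency, sample rate, and noise level are hypothetical.

```python
import numpy as np

def fft_phase(signal, fs, f0):
    """Estimate the phase of a tone at known frequency f0 from the FFT
    bin nearest f0; a toy version of the FFT-based estimator."""
    n = len(signal)
    spectrum = np.fft.rfft(signal)
    k = int(round(f0 * n / fs))
    return np.angle(spectrum[k])

# Noisy 1 kHz tone sampled at 50 kHz, true phase 30 degrees
fs, f0, n = 50_000.0, 1_000.0, 5_000
rng = np.random.default_rng(3)
t = np.arange(n) / fs
x = np.cos(2 * np.pi * f0 * t + np.deg2rad(30.0)) + 0.5 * rng.normal(size=n)
print(np.rad2deg(fft_phase(x, fs, f0)))  # close to 30 degrees
```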

  6. Integration of rain gauge measurement errors with the overall rainfall uncertainty estimation using kriging methods

    NASA Astrophysics Data System (ADS)

    Cecinati, Francesca; Moreno Ródenas, Antonio Manuel; Rico-Ramirez, Miguel Angel; ten Veldhuis, Marie-claire; Han, Dawei

    2016-04-01

    In many research studies rain gauges are used as a reference point measurement for rainfall, because they can reach very good accuracy, especially compared to radar or microwave links, and their use is very widespread. In some applications rain gauge uncertainty is assumed to be small enough to be neglected. This can be done when rain gauges are accurate and their data are correctly managed. Unfortunately, in many operational networks the importance of accurate rainfall data and of data quality control can be underestimated; budget and best-practice knowledge can be limiting factors in correct rain gauge network management. In these cases, the accuracy of rain gauges can drop drastically and the uncertainty associated with the measurements cannot be neglected. This work proposes an approach based on three different kriging methods to integrate rain gauge measurement errors into the overall rainfall uncertainty estimation. In particular, rainfall products of different complexity are derived through (1) block kriging on a single rain gauge, (2) ordinary kriging on a network of different rain gauges, and (3) kriging with external drift to integrate all the available rain gauges with radar rainfall information. The study area is the Eindhoven catchment, contributing to the river Dommel, in the southern part of the Netherlands. The area, 590 km2, is covered by high-quality rain gauge measurements by the Royal Netherlands Meteorological Institute (KNMI), which has one rain gauge inside the study area and six around it, and by lower-quality rain gauge measurements by the Dommel Water Board and the Eindhoven Municipality (six rain gauges in total). The integration of the rain gauge measurement error is accomplished in all cases by increasing the nugget of the semivariogram proportionally to the estimated error. Using different semivariogram models for the different networks allows for the separate characterisation of higher- and lower-quality rain gauges. For the kriging with…
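
    The mechanism described, inflating the semivariogram nugget in proportion to each gauge's estimated error, amounts to adding the error variance to the diagonal of the kriging system. A compact ordinary-kriging sketch under that assumption follows; the exponential covariance model and all parameter values are made up, not fitted to the Dommel data.

```python
import numpy as np

def ordinary_kriging(xy, z, err_var, x0, sill=4.0, rng_km=15.0):
    """Ordinary kriging with per-gauge measurement-error variances added
    to the diagonal (the 'inflated nugget'); model and parameters are
    illustrative only."""
    def cov(h):
        return sill * np.exp(-h / rng_km)
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    K = cov(d) + np.diag(err_var)           # gauge error enters here
    A = np.ones((n + 1, n + 1)); A[:n, :n] = K; A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = cov(np.linalg.norm(xy - x0, axis=1))
    w = np.linalg.solve(A, b)
    est = w[:n] @ z
    var = sill - w[:n] @ b[:n] - w[n]       # kriging variance with Lagrange term
    return est, var

xy = np.array([[0.0, 0.0], [5.0, 2.0], [9.0, 8.0], [2.0, 7.0]])   # gauge sites, km
z = np.array([3.1, 2.4, 4.0, 3.5])                                # mm/h, invented
err_var = np.array([0.05, 0.05, 0.4, 0.4])   # high- vs. lower-quality gauges
print(ordinary_kriging(xy, z, err_var, np.array([4.0, 4.0])))
```

    Gauges with larger error variances receive smaller kriging weights, and the kriging variance at the prediction point rises accordingly, which is exactly how the gauge error propagates into the overall rainfall uncertainty.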

  7. Measure for Measure: How Proficiency-Based Accountability Systems Affect Inequality in Academic Achievement

    PubMed Central

    Jennings, Jennifer; Sohn, Heeju

    2016-01-01

    How do proficiency-based accountability systems affect inequality in academic achievement? This paper reconciles mixed findings in the literature by demonstrating that three factors jointly determine accountability's impact. First, by analyzing student-level data from a large urban school district, we find that when educators face accountability pressure, they focus attention on students closest to proficiency. We refer to this practice as educational triage, and show that the difficulty of the proficiency standard affects whether lower or higher performing students gain most on high-stakes tests used to evaluate schools. Less difficult proficiency standards decrease inequality in high-stakes achievement, while more difficult ones increase it. Second, we show that educators emphasize test-specific skills with students near proficiency, a practice that we refer to as instructional triage. As a result, the effects of accountability pressure differ across high and low-stakes tests; we find no effects on inequality in low-stakes reading and math tests of similar skills. Finally, we provide suggestive evidence that instructional triage is most pronounced in the lowest performing schools. We conclude by discussing how these findings shape our understanding of accountability's impacts on educational inequality. PMID:27122642

  9. 50 CFR 622.49 - Annual catch limits (ACLs) and accountability measures (AMs).

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    Annual catch limits (ACLs) and accountability measures (AMs). Section 622.49, Wildlife and Fisheries; FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE; FISHERIES OF THE CARIBBEAN, GULF, AND SOUTH ATLANTIC; Management Measures...

  10. Failing Tests: Commentary on "Adapting Educational Measurement to the Demands of Test-Based Accountability"

    ERIC Educational Resources Information Center

    Thissen, David

    2015-01-01

    In "Adapting Educational Measurement to the Demands of Test-Based Accountability" Koretz takes the time-honored engineering approach to educational measurement, identifying specific problems with current practice and proposing minimal modifications of the system to alleviate those problems. In response to that article, David Thissen…

  11. Accuracy of the European solar water heater test procedure. Part 1: Measurement errors and parameter estimates

    SciTech Connect

    Rabl, A.; Leide, B.; Carvalho, M.J.; Collares-Pereira, M.; Bourges, B.

    1991-01-01

    The Collector and System Testing Group (CSTG) of the European Community has developed a procedure for testing the performance of solar water heaters. This procedure treats a solar water heater as a black box with input-output parameters that are determined by all-day tests. In the present study the authors carry out a systematic analysis of the accuracy of this procedure, in order to answer the question: what tolerances should one impose on the measurements, and how many days of testing should one demand under what meteorological conditions, in order to be able to guarantee a specified maximum error for the long-term performance? The methodology is applicable to other test procedures as well. The present paper (Part 1) examines the measurement tolerances of the current version of the procedure and derives a priori estimates of the errors in the parameters; these errors are then compared with the regression results of the Round Robin test series. The companion paper (Part 2) evaluates the consequences for the accuracy of the long-term performance prediction. The authors conclude that the CSTG test procedure makes it possible to predict the long-term performance with standard errors around 5% for sunny climates (10% for cloudy climates). The apparent precision of individual test sequences is deceptive because of large systematic discrepancies between different sequences. Better results could be obtained by imposing tighter control on the constancy of the cold water supply temperature and on the environment of the test, the latter by enforcing the recommendation for ventilation of the collector.

  12. Sieve Estimation of Constant and Time-Varying Coefficients in Nonlinear Ordinary Differential Equation Models by Considering Both Numerical Error and Measurement Error.

    PubMed

    Xue, Hongqi; Miao, Hongyu; Wu, Hulin

    2010-01-01

    This article considers estimation of constant and time-varying coefficients in nonlinear ordinary differential equation (ODE) models where analytic closed-form solutions are not available. The numerical solution-based nonlinear least squares (NLS) estimator is investigated in this study. A numerical algorithm such as the Runge-Kutta method is used to approximate the ODE solution. The asymptotic properties are established for the proposed estimators considering both numerical error and measurement error. The B-spline is used to approximate the time-varying coefficients, and the corresponding asymptotic theories in this case are investigated under the framework of the sieve approach. Our results show that if the maximum step size of the p-order numerical algorithm goes to zero at a rate faster than n^(-1/(p∧4)), the numerical error is negligible compared to the measurement error. This result provides theoretical guidance in selecting the step size for numerical evaluations of ODEs. Moreover, we have shown that the numerical solution-based NLS estimator and the sieve NLS estimator are strongly consistent. The sieve estimator of constant parameters is asymptotically normal with the same asymptotic covariance as in the case where the true ODE solution is exactly known, while the estimator of the time-varying parameter has the optimal convergence rate under some regularity conditions. The theoretical results are also developed for the case when the step size of the ODE numerical solver does not go to zero fast enough or the numerical error is comparable to the measurement error. We illustrate our approach with both simulation studies and clinical data on HIV viral dynamics.
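
    A toy version of the numerical-solution-based NLS estimator makes the step-size trade-off concrete: fit the decay rate of x' = -θx to noisy observations, with a fixed-step classical Runge-Kutta solver standing in for the generic p-order scheme. The model, true θ, and noise level are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def rk4(f, x0, tgrid):
    """Classical 4th-order Runge-Kutta on a fixed time grid."""
    x = np.empty_like(tgrid); x[0] = x0
    for i in range(len(tgrid) - 1):
        h = tgrid[i + 1] - tgrid[i]
        k1 = f(tgrid[i], x[i])
        k2 = f(tgrid[i] + h / 2, x[i] + h * k1 / 2)
        k3 = f(tgrid[i] + h / 2, x[i] + h * k2 / 2)
        k4 = f(tgrid[i] + h, x[i] + h * k3)
        x[i + 1] = x[i] + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return x

# Simulated data from x' = -theta * x with theta = 0.8, plus measurement error
rng = np.random.default_rng(4)
t_obs = np.linspace(0.0, 5.0, 26)
y_obs = np.exp(-0.8 * t_obs) + 0.02 * rng.normal(size=t_obs.size)

def residuals(theta):
    # the numerical solution replaces the (here known) closed form
    return rk4(lambda t, x: -theta[0] * x, 1.0, t_obs) - y_obs

fit = least_squares(residuals, x0=[0.5])
print(fit.x)  # ~0.8; at this step size the RK4 numerical error is
              # negligible next to the measurement error, as the theory predicts
```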

  14. KMS fusion system resource accounting and performance measurement system for RSX11M V3.2

    SciTech Connect

    Downward, J. G.

    1980-01-01

    Version 3.2 of the KMS FUSION accounting system is aimed at providing the user of RSX11M V3.2 with a versatile tool for measuring the performance of the operating system, tuning the system, and providing sufficient usage statistics so that the system manager can implement chargeback accounting if it is required by the installation. Sufficient hooks are provided so that the intrepid user can expand the system substantially beyond what is currently provided.

  15. Decreasing range resolution of a SAR image to permit correction of motion measurement errors beyond the SAR range resolution

    DOEpatents

    Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas

    2010-07-20

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.

  16. Measurement uncertainty on the circular features in coordinate measurement system based on the error ellipse and Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Du, Zhengchun; Zhu, Mengrui; Wu, Zhaoyong; Yang, Jianguo

    2016-12-01

    The determination of measurement uncertainty for geometrical features measured by coordinate measuring machines (CMMs) is an essential part of a reliable quality control process. However, the most commonly used methods for uncertainty assessment are difficult to apply and require not only a large number of repeated measurements but also rich operating experience. Based on error ellipse theory and the Monte Carlo simulation method, an uncertainty evaluation method for CMM measurements is presented. For circular features, an uncertainty evaluation model was established and extended, via Monte Carlo simulation, to the measurement of the central distance between two holes. A verification experiment for the new method was conducted, and its results agreed reasonably well with those of traditional methods, which demonstrates the validity of the proposed method.
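
    The Monte Carlo part of such a method can be sketched as follows: perturb the sampled points on two circular features with a per-point CMM error model, refit the circles, and read off the spread of the center-to-center distance. The algebraic (Kasa) circle fit and all error magnitudes below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def fit_circle(pts):
    """Algebraic (Kasa) least-squares circle fit; returns the center (a, b)."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    sol, *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    return sol[:2]

def distance_uncertainty(c1, c2, r1, r2, sigma=0.002, n_pts=12, n_mc=5000):
    """Monte Carlo spread of the two-hole center distance under an assumed
    isotropic per-point error sigma (mm); a sketch of the technique."""
    rng = np.random.default_rng(5)
    th = np.linspace(0, 2 * np.pi, n_pts, endpoint=False)
    d = np.empty(n_mc)
    for i in range(n_mc):
        p1 = np.column_stack([c1[0] + r1 * np.cos(th), c1[1] + r1 * np.sin(th)])
        p2 = np.column_stack([c2[0] + r2 * np.cos(th), c2[1] + r2 * np.sin(th)])
        p1 += rng.normal(0, sigma, p1.shape)
        p2 += rng.normal(0, sigma, p2.shape)
        d[i] = np.linalg.norm(fit_circle(p1) - fit_circle(p2))
    return d.mean(), d.std()   # distance estimate and its uncertainty

print(distance_uncertainty((0, 0), (50, 0), 5.0, 5.0))
```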

  17. Measured and predicted root-mean-square errors in square and triangular antenna mesh facets

    NASA Technical Reports Server (NTRS)

    Fichter, W. B.

    1989-01-01

    Deflection shapes of square and equilateral triangular facets of two tricot-knit, gold plated molybdenum wire mesh antenna materials were measured and compared, on the basis of root mean square (rms) differences, with deflection shapes predicted by linear membrane theory, for several cases of biaxial mesh tension. The two mesh materials contained approximately 10 and 16 holes per linear inch, measured diagonally with respect to the course and wale directions. The deflection measurement system employed a non-contact eddy current proximity probe and an electromagnetic distance sensing probe in conjunction with a precision optical level. Despite experimental uncertainties, rms differences between measured and predicted deflection shapes suggest the following conclusions: that replacing flat antenna facets with facets conforming to parabolically curved structural members yields smaller rms surface error; that potential accuracy gains are greater for equilateral triangular facets than for square facets; and that linear membrane theory can be a useful tool in the design of tricot knit wire mesh antennas.

  18. Measurement of Transmission Error Including Backlash in Angle Transmission Mechanisms for Mechatronic Systems

    NASA Astrophysics Data System (ADS)

    Ming, Aiguo; Kajitani, Makoto; Kanamori, Chisato; Ishikawa, Jiro

    The characteristics of angle transmission mechanisms exert a great influence on servo performance in robotic and mechatronic mechanisms. In particular, the backlash of an angle transmission mechanism should preferably be small. Recently, some new types of gear reducers with no backlash have been developed for robots. However, measurement and evaluation methods for the backlash of gear trains have received little attention, apart from older methods that can measure statically at only a few meshing points of the gears. This paper proposes an overall performance testing method for angle transmission mechanisms in mechatronic systems. The method can measure the angle transmission error both clockwise and counterclockwise. In addition, the backlash can be measured continuously and automatically at all meshing positions. The system has been applied to the testing process in the production line of gear reducers for robots, where it has been effective in reducing the backlash of the gear trains.
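
    With transmission error recorded in both rotation directions over all meshing positions, the backlash at each position is simply the gap between the two curves. The reduction step can be sketched on synthetic data; the transmission-error waveforms and amplitudes below are invented for illustration.

```python
import numpy as np

# Synthetic clockwise / counterclockwise transmission-error curves
# (output-angle error vs. input angle; amplitudes are invented)
theta = np.linspace(0, 2 * np.pi, 720)                 # one input revolution
te_cw = 20e-6 * np.sin(17 * theta)                     # rad, tooth-mesh ripple
te_ccw = te_cw + 35e-6 + 5e-6 * np.sin(theta)          # offset models backlash

backlash = te_ccw - te_cw     # continuous backlash over all meshing positions
print(f"min/mean/max backlash: {backlash.min():.1e} "
      f"{backlash.mean():.1e} {backlash.max():.1e} rad")
```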

  19. Wealth and the Accounting Period in the Measurement of Means. The Measure of Poverty, Technical Paper VI.

    ERIC Educational Resources Information Center

    Steuerle, Eugene; McClung, Nelson

    This technical study is concerned with both the statistical and policy effects of alternative definitions of poverty which result when the definition of means is altered by varying the time period (accounting period) over which income is measured or by including in the measure of means not only realized income, but also unrealized income and…

  20. Noise and measurement errors in a practical two-state quantum bit commitment protocol

    NASA Astrophysics Data System (ADS)

    Loura, Ricardo; Almeida, Álvaro J.; André, Paulo S.; Pinto, Armando N.; Mateus, Paulo; Paunković, Nikola

    2014-05-01

    We present a two-state practical quantum bit commitment protocol whose security is based on current technological limitations, namely the nonexistence of either stable long-term quantum memories or nondemolition measurements. For an optical realization of the protocol, we model the errors, which occur due to noise and equipment imperfections (source, fibers, and detectors), accumulated during the emission, transmission, and measurement of photons. The optical part is modeled as a combination of a depolarizing channel (white noise), unitary evolution (e.g., a systematic rotation of the polarization axis of the photons), and two other basis-dependent channels, namely the phase-flip and bit-flip channels. We analyze quantitatively the effects of noise using two common information-theoretic measures of the distinguishability of probability distributions: the fidelity and the relative entropy. In particular, we discuss the optimal cheating strategy and show that it is always advantageous for a cheating agent to add some amount of white noise, an effect not present in standard quantum security protocols. We also analyze the protocol's security when the use of (im)perfect nondemolition measurements and noisy or bounded quantum memories is allowed. Finally, we discuss errors occurring due to finite detector efficiency, dark counts, and imperfect single-photon sources, and we show that the effects are the same as those in standard quantum cryptography.
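
    Both distinguishability measures are straightforward to evaluate for the discrete outcome distributions involved. A minimal sketch for two probability vectors p and q over measurement outcomes (the numbers below are illustrative, not from the paper):

```python
import numpy as np

def fidelity(p, q):
    """Classical fidelity (squared Bhattacharyya coefficient) between
    two outcome distributions."""
    return np.sum(np.sqrt(p * q)) ** 2

def relative_entropy(p, q):
    """Kullback-Leibler divergence D(p || q) in bits; assumes q > 0
    wherever p > 0."""
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

# Illustrative honest vs. cheating outcome statistics; adding white noise
# pulls q toward uniform, which can raise its fidelity with p and is the
# cheating advantage discussed above.
p = np.array([0.70, 0.20, 0.05, 0.05])
q = np.array([0.55, 0.25, 0.10, 0.10])
print(fidelity(p, q), relative_entropy(p, q))
```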