Science.gov

Sample records for account measurement errors

  1. Multiple imputation to account for measurement error in marginal structural models

    PubMed Central

    Edwards, Jessie K.; Cole, Stephen R.; Westreich, Daniel; Crane, Heidi; Eron, Joseph J.; Mathews, W. Christopher; Moore, Richard; Boswell, Stephen L.; Lesko, Catherine R.; Mugavero, Michael J.

    2015-01-01

    Background: Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and non-differential measurement error in a marginal structural model. Methods: We illustrate the method by estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3,686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. Results: In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality (hazard ratio [HR]: 1.2; 95% CI: 0.6, 2.3). The HR for current smoking and therapy (0.4; 95% CI: 0.2, 0.7) was similar to the HR for no smoking and therapy (0.4; 95% CI: 0.2, 0.6). Conclusions: Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies. PMID:26214338
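
    The imputation-plus-pooling step described above can be sketched in a few lines. A minimal illustration, assuming a pandas DataFrame df with an error-prone indicator smoke_err, a validated smoke_true that is missing outside the validation subgroup, and illustrative covariates x1 and x2; the logistic imputation model and all names are assumptions, not the authors' code.

      import numpy as np
      import statsmodels.api as sm

      def impute_once(df, rng):
          """One proper imputation of the true smoking status."""
          val = df.dropna(subset=["smoke_true"])
          X = sm.add_constant(val[["smoke_err", "x1", "x2"]])
          fit = sm.Logit(val["smoke_true"], X).fit(disp=0)
          # Draw coefficients from their approximate sampling distribution
          # so that imputation-model uncertainty is propagated.
          beta = rng.multivariate_normal(fit.params, fit.cov_params())
          X_all = sm.add_constant(df[["smoke_err", "x1", "x2"]]).to_numpy()
          p = 1.0 / (1.0 + np.exp(-X_all @ beta))
          out = df.copy()
          mask = out["smoke_true"].isna().to_numpy()
          out.loc[mask, "smoke_true"] = (rng.random(mask.sum()) < p[mask]).astype(float)
          return out

      def rubin_combine(log_hrs, variances):
          """Combine m per-imputation log hazard ratios by Rubin's rules."""
          m = len(log_hrs)
          within = np.mean(variances)
          between = np.var(log_hrs, ddof=1)
          return np.mean(log_hrs), within + (1.0 + 1.0 / m) * between

      # Each completed dataset would then be analyzed with the inverse-
      # probability-weighted (marginal structural) survival model, and the
      # m log-HRs pooled: qbar, var = rubin_combine(...); np.exp(qbar).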

  2. Accounting for baseline differences and measurement error in the analysis of change over time.

    PubMed

    Braun, Julia; Held, Leonhard; Ledergerber, Bruno

    2014-01-15

    If change over time is compared across several groups, it is important to take baseline values into account so that the comparison is carried out under the same preconditions. As the observed baseline measurements are distorted by measurement error, it may not be sufficient to include them as a covariate. By fitting a longitudinal mixed-effects model to all data, including the baseline observations, and subsequently calculating the expected change conditional on the underlying baseline value, a solution to this problem has recently been provided so that groups with the same baseline characteristics can be compared. In this article, we present an extended approach in which a broader set of models can be used. Specifically, it is possible to include any desired set of interactions between the time variable and the other covariates, and time-dependent covariates can also be included. Additionally, we extend the method to adjust for baseline measurement error in other time-varying covariates. We apply the methodology to data from the Swiss HIV Cohort Study to address the question of whether joint infection with HIV-1 and hepatitis C virus leads to a slower increase of CD4 lymphocyte counts over time after the start of antiretroviral therapy. PMID:23900718
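
    The key idea, conditioning on the underlying rather than the observed baseline, rests on classical true-score shrinkage. A minimal sketch under the simplifying assumption of known variance components (the paper's full mixed-effects machinery is not reproduced here; all numbers are invented):

      # Expected true baseline given a noisy observation, under a normal
      # true-score model: E[true | observed] = mu + rho * (observed - mu),
      # where rho = var_true / (var_true + var_error) is the reliability.
      def expected_true_baseline(observed, mu, var_true, var_error):
          rho = var_true / (var_true + var_error)
          return mu + rho * (observed - mu)

      # Illustrative values: population mean CD4 = 350, between-subject
      # variance 8000, measurement-error variance 2000.
      print(expected_true_baseline(500.0, 350.0, 8000.0, 2000.0))  # 470.0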

  4. 40 CFR 96.156 - Account error.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Account error. 96.156 Section 96.156... Tracking System § 96.156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within...

  5. 40 CFR 97.156 - Account error.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Account error. 97.156 Section 97.156... Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within 10 business days of making...

  6. 40 CFR 97.627 - Account error.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Account error. 97.627 Section 97.627... Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Management System account. Within 10 business days of making...

  7. 40 CFR 96.156 - Account error.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Account error. 96.156 Section 96.156... Tracking System § 96.156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within...

  8. 40 CFR 96.56 - Account error.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Account error. 96.56 Section 96.56... Tracking System § 96.56 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10...

  9. 40 CFR 97.427 - Account error.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Account error. 97.427 Section 97.427... Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Management System account. Within 10 business days of making...

  10. 40 CFR 97.427 - Account error.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Account error. 97.427 Section 97.427... Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Management System account. Within 10 business days of making...

  11. 40 CFR 97.727 - Account error.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Account error. 97.727 Section 97.727... Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Management System account. Within 10 business days of making...

  12. 40 CFR 96.156 - Account error.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Account error. 96.156 Section 96.156... Tracking System § 96.156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within...

  13. 40 CFR 97.427 - Account error.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Account error. 97.427 Section 97.427... Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Management System account. Within 10 business days of making...

  14. 40 CFR 97.156 - Account error.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Account error. 97.156 Section 97.156... Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within 10 business days of making...

  15. 40 CFR 73.37 - Account error.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 16 2011-07-01 2011-07-01 false Account error. 73.37 Section 73.37... ALLOWANCE SYSTEM Allowance Tracking System § 73.37 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Tracking System account....

  16. 40 CFR 96.56 - Account error.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Account error. 96.56 Section 96.56... Tracking System § 96.56 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10...

  17. 40 CFR 97.156 - Account error.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Account error. 97.156 Section 97.156... Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within 10 business days of making...

  18. 40 CFR 73.37 - Account error.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 17 2014-07-01 2014-07-01 false Account error. 73.37 Section 73.37... ALLOWANCE SYSTEM Allowance Tracking System § 73.37 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Tracking System account....

  19. 40 CFR 97.727 - Account error.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Account error. 97.727 Section 97.727... Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Management System account. Within 10 business days of making...

  20. 40 CFR 96.156 - Account error.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Account error. 96.156 Section 96.156... Tracking System § 96.156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within...

  1. 40 CFR 97.56 - Account error.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Account error. 97.56 Section 97.56... Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10 business days of making...

  2. 40 CFR 97.56 - Account error.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Account error. 97.56 Section 97.56... Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10 business days of making...

  3. 40 CFR 73.37 - Account error.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 17 2012-07-01 2012-07-01 false Account error. 73.37 Section 73.37... ALLOWANCE SYSTEM Allowance Tracking System § 73.37 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Tracking System account....

  4. 40 CFR 97.56 - Account error.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Account error. 97.56 Section 97.56... Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10 business days of making...

  5. 40 CFR 97.156 - Account error.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Account error. 97.156 Section 97.156... Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within 10 business days of making...

  6. 40 CFR 96.256 - Account error.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Account error. 96.256 Section 96.256... Tracking System § 96.256 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR SO2 Allowance Tracking System account. Within...

  7. 40 CFR 97.56 - Account error.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Account error. 97.56 Section 97.56... Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10 business days of making...

  8. 40 CFR 96.56 - Account error.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Account error. 96.56 Section 96.56... Tracking System § 96.56 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10...

  9. 40 CFR 96.56 - Account error.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Account error. 96.56 Section 96.56... Tracking System § 96.56 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10...

  10. 40 CFR 97.56 - Account error.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Account error. 97.56 Section 97.56... Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10 business days of making...

  11. 40 CFR 96.156 - Account error.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Account error. 96.156 Section 96.156... Tracking System § 96.156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within...

  12. 40 CFR 97.727 - Account error.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Account error. 97.727 Section 97.727... Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Management System account. Within 10 business days of making...

  13. 40 CFR 97.627 - Account error.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Account error. 97.627 Section 97.627... Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Management System account. Within 10 business days of making...

  14. 40 CFR 97.527 - Account error.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Account error. 97.527 Section 97.527... Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Management System account. Within 10 business days of making...

  15. 40 CFR 73.37 - Account error.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 17 2013-07-01 2013-07-01 false Account error. 73.37 Section 73.37... ALLOWANCE SYSTEM Allowance Tracking System § 73.37 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Tracking System account....

  16. 40 CFR 97.156 - Account error.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Account error. 97.156 Section 97.156... Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within 10 business days of making...

  17. 40 CFR 73.37 - Account error.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Account error. 73.37 Section 73.37... ALLOWANCE SYSTEM Allowance Tracking System § 73.37 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Tracking System account....

  18. 40 CFR 60.4156 - Account error.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false Account error. 60.4156 Section 60.4156... Generating Units Hg Allowance Tracking System § 60.4156 Account error. The Administrator may, at his or her... account. Within 10 business days of making such correction, the Administrator will notify the...

  19. 40 CFR 60.4156 - Account error.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 6 2011-07-01 2011-07-01 false Account error. 60.4156 Section 60.4156... Generating Units Hg Allowance Tracking System § 60.4156 Account error. The Administrator may, at his or her... account. Within 10 business days of making such correction, the Administrator will notify the...

  20. 40 CFR 96.356 - Account error.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Season Allowance Tracking System § 96.356 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking System account. Within 10 business days of making such correction, the Administrator will notify the...

  1. A Statistical Method and Tool to Account for Indirect Calorimetry Differential Measurement Error in a Single-Subject Analysis.

    PubMed

    Tenan, Matthew S

    2016-01-01

    Indirect calorimetry and oxygen consumption (VO2) are accepted tools in human physiology research. It has been shown that indirect calorimetry systems exhibit differential measurement error, where the error of a device is systematically different depending on the volume of gas flow. Moreover, systems commonly report multiple decimal places of precision, giving the clinician a false sense of device accuracy. The purpose of this manuscript is to demonstrate the use of a novel statistical tool which models the reliability of two specific indirect calorimetry systems, Douglas bag and Parvomedics 2400 TrueOne, as univariate normal distributions and implements the distribution overlapping coefficient to determine the likelihood that two VO2 measures are the same. A command line implementation of the tool is available for the R programming language as well as a web-based graphical user interface (GUI). This tool is valuable for clinicians performing a single-subject analysis as well as researchers interested in determining if their observed differences exceed the error of the device. PMID:27242546
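
    The distribution overlapping coefficient for two normal densities is straightforward to reproduce. A hedged sketch (the author's published tool is in R; the device means and SDs below are invented):

      # Overlapping coefficient (OVL) of two normal densities:
      # OVL = integral of min(f1(x), f2(x)) dx, a value in [0, 1].
      import numpy as np
      from scipy.stats import norm

      def ovl(mu1, sd1, mu2, sd2, n=200001):
          lo = min(mu1 - 6 * sd1, mu2 - 6 * sd2)
          hi = max(mu1 + 6 * sd1, mu2 + 6 * sd2)
          x = np.linspace(lo, hi, n)
          f = np.minimum(norm.pdf(x, mu1, sd1), norm.pdf(x, mu2, sd2))
          return np.trapz(f, x)

      # Two VO2 readings modeled as N(2.50, 0.08) and N(2.65, 0.08) L/min
      # (illustrative reliability values): the OVL indexes the likelihood
      # that the two measures reflect the same underlying value.
      print(round(ovl(2.50, 0.08, 2.65, 0.08), 3))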

  3. 40 CFR 97.256 - Account error.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR SO2 Allowance Tracking System § 97.256... any error in any CAIR SO2 Allowance Tracking System account. Within 10 business days of making...

  4. 40 CFR 96.256 - Account error.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS FOR STATE IMPLEMENTATION PLANS CAIR SO2 Allowance... her own motion, correct any error in any CAIR SO2 Allowance Tracking System account. Within...

  5. 40 CFR 97.256 - Account error.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR SO2 Allowance Tracking System § 97.256... any error in any CAIR SO2 Allowance Tracking System account. Within 10 business days of making...

  6. 40 CFR 97.256 - Account error.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR SO2 Allowance Tracking System § 97.256... any error in any CAIR SO2 Allowance Tracking System account. Within 10 business days of making...

  7. 40 CFR 97.256 - Account error.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR SO2 Allowance Tracking System § 97.256... any error in any CAIR SO2 Allowance Tracking System account. Within 10 business days of making...

  8. 40 CFR 97.256 - Account error.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR SO2 Allowance Tracking System § 97.256... any error in any CAIR SO2 Allowance Tracking System account. Within 10 business days of making...

  9. 40 CFR 96.256 - Account error.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS FOR STATE IMPLEMENTATION PLANS CAIR SO2 Allowance... her own motion, correct any error in any CAIR SO2 Allowance Tracking System account. Within...

  10. 40 CFR 96.256 - Account error.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS FOR STATE IMPLEMENTATION PLANS CAIR SO2 Allowance... her own motion, correct any error in any CAIR SO2 Allowance Tracking System account. Within...

  11. 40 CFR 96.256 - Account error.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS FOR STATE IMPLEMENTATION PLANS CAIR SO2 Allowance... her own motion, correct any error in any CAIR SO2 Allowance Tracking System account. Within...

  12. 40 CFR 97.527 - Account error.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS TR NOX Ozone Season Trading Program § 97.527... any error in any Allowance Management System account. Within 10 business days of making...

  13. 40 CFR 97.527 - Account error.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS TR NOX Ozone Season Trading Program § 97.527... any error in any Allowance Management System account. Within 10 business days of making...

  14. Measuring Test Measurement Error: A General Approach

    ERIC Educational Resources Information Center

    Boyd, Donald; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James

    2013-01-01

    Test-based accountability, as well as value-added assessments and much experimental and quasi-experimental research in education, relies on achievement tests to measure student skills and knowledge. Yet we know little regarding fundamental properties of these tests, an important example being the extent of measurement error and its implications for…

  15. Compact disk error measurements

    NASA Technical Reports Server (NTRS)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.

  16. 40 CFR 97.627 - Account error.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Account error. 97.627 Section 97.627 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) FEDERAL NOX BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS TR SO2 Group 1 Trading Program §...

  17. 40 CFR 96.56 - Account error.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Account error. 96.56 Section 96.56 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NOX BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS FOR STATE IMPLEMENTATION PLANS NOX...

  18. 40 CFR 97.356 - Account error.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR NOX Ozone Season Allowance Tracking... motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking System account. Within 10 business days of making such correction, the Administrator will notify the CAIR authorized...

  19. 40 CFR 96.356 - Account error.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS FOR STATE IMPLEMENTATION PLANS CAIR NOX Ozone Season... on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking System account. Within 10 business days of making such correction, the Administrator will notify the...

  20. 40 CFR 97.356 - Account error.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR NOX Ozone Season Allowance Tracking... motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking System account. Within 10 business days of making such correction, the Administrator will notify the CAIR authorized...

  1. 40 CFR 96.356 - Account error.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS FOR STATE IMPLEMENTATION PLANS CAIR NOX Ozone Season... on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking System account. Within 10 business days of making such correction, the Administrator will notify the...

  2. 40 CFR 97.356 - Account error.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR NOX Ozone Season Allowance Tracking... motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking System account. Within 10 business days of making such correction, the Administrator will notify the CAIR authorized...

  3. 40 CFR 97.356 - Account error.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR NOX Ozone Season Allowance Tracking... motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking System account. Within 10 business days of making such correction, the Administrator will notify the CAIR authorized...

  4. 40 CFR 96.356 - Account error.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS FOR STATE IMPLEMENTATION PLANS CAIR NOX Ozone Season... on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking System account. Within 10 business days of making such correction, the Administrator will notify the...

  5. 40 CFR 96.356 - Account error.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS FOR STATE IMPLEMENTATION PLANS CAIR NOX Ozone Season... on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking System account. Within 10 business days of making such correction, the Administrator will notify the...

  6. 40 CFR 97.356 - Account error.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR NOX Ozone Season Allowance Tracking... motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking System account. Within 10 business days of making such correction, the Administrator will notify the CAIR authorized...

  7. Measurement error in geometric morphometrics.

    PubMed

    Fruciano, Carmelo

    2016-06-01

    Geometric morphometrics - a set of methods for the statistical analysis of shape, once hailed as a revolutionary advancement in the analysis of morphology - is now mature and routinely used in ecology and evolution. However, a factor often disregarded in empirical studies is the presence and the extent of measurement error. This is potentially a very serious issue because random measurement error can inflate the amount of variance and, since many statistical analyses are based on the amount of "explained" relative to "residual" variance, can result in loss of statistical power. On the other hand, systematic bias can affect statistical analyses by biasing the results (i.e., variation due to bias is incorporated in the analysis and treated as biologically meaningful variation). Here, I briefly review common sources of error in geometric morphometrics. I then review the most commonly used methods to measure and account for both random and non-random measurement error, providing a worked example using a real dataset.

  8. Effects of past and recent blood pressure and cholesterol level on coronary heart disease and stroke mortality, accounting for measurement error.

    PubMed

    Boshuizen, Hendriek C; Lanti, Mariapaola; Menotti, Alessandro; Moschandreas, Joanna; Tolonen, Hanna; Nissinen, Aulikki; Nedeljkovic, Srecko; Kafatos, Anthony; Kromhout, Daan

    2007-02-15

    The authors aimed to quantify the effects of current systolic blood pressure (SBP) and serum total cholesterol on the risk of mortality in comparison with SBP or serum cholesterol 25 years previously, taking measurement error into account. The authors reanalyzed 35-year follow-up data on mortality due to coronary heart disease and stroke among subjects aged 65 years or more from nine cohorts of the Seven Countries Study. The two-step method of Tsiatis et al. (J Am Stat Assoc 1995;90:27-37) was used to adjust for regression dilution bias, and results were compared with those obtained using more commonly applied methods of adjustment for regression dilution bias. It was found that the commonly used univariate adjustment for regression dilution bias overestimates the effects of both SBP and cholesterol compared with multivariate methods. Also, the two-step method makes better use of the information available, resulting in smaller confidence intervals. Results comparing recent and past exposure indicated that past SBP is more important than recent SBP in terms of its effect on coronary heart disease mortality, while both recent and past values seem to be important for effects of cholesterol on coronary heart disease mortality and effects of SBP on stroke mortality. Associations between serum cholesterol concentration and risk of stroke mortality are weak. PMID:17116650
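
    For contrast with the two-step multivariate method, the commonly applied univariate adjustment is a simple rescaling by the regression dilution ratio. A sketch with invented numbers:

      # Univariate regression-dilution correction: the observed log hazard
      # ratio per unit of a noisily measured exposure is attenuated by the
      # regression dilution ratio (RDR); dividing by the RDR corrects it.
      import numpy as np

      def corrected_log_hr(observed_log_hr, rdr):
          return observed_log_hr / rdr

      # RDR estimated from repeat SBP measurements, e.g. 0.5 (illustrative);
      # an observed HR of 1.15 per 10 mmHg becomes about 1.32 after correction.
      print(np.exp(corrected_log_hr(np.log(1.15), 0.5)))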

  9. Human errors and measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Kuselman, Ilya; Pennecchi, Francesca

    2015-04-01

    Evaluating the residual risk of human errors in a measurement and testing laboratory, remaining after the error reduction by the laboratory quality system, and quantifying the consequences of this risk for the quality of the measurement/test results are discussed based on expert judgments and Monte Carlo simulations. A procedure for evaluation of the contribution of the residual risk to the measurement uncertainty budget is proposed. Examples are provided using earlier published sets of expert judgments on human errors in pH measurement of groundwater, elemental analysis of geological samples by inductively coupled plasma mass spectrometry, and multi-residue analysis of pesticides in fruits and vegetables. The human error contribution to the measurement uncertainty budget in the examples was not negligible, yet also not dominant. This was assessed as a good risk management result.

  10. Quantile Regression With Measurement Error

    PubMed Central

    Wei, Ying; Carroll, Raymond J.

    2010-01-01

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. PMID:20305802
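
    The covariate-error bias that motivates the paper is easy to demonstrate by simulation; the sketch below shows only the naive, uncorrected fits (the proposed joint estimating-equation method is more involved):

      # Simulate y = 1 + 2x + e and fit median regression on the true x and
      # on w = x + measurement error: the slope estimated on w is attenuated.
      import numpy as np
      import statsmodels.api as sm
      from statsmodels.regression.quantile_regression import QuantReg

      rng = np.random.default_rng(0)
      n = 5000
      x = rng.normal(size=n)
      y = 1 + 2 * x + rng.normal(size=n)
      w = x + rng.normal(scale=1.0, size=n)   # error-prone covariate

      for cov, name in [(x, "true x"), (w, "noisy w")]:
          fit = QuantReg(y, sm.add_constant(cov)).fit(q=0.5)
          print(name, fit.params)   # slope near 2.0 vs. roughly 1.0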

  11. Pendulum Shifts, Context, Error, and Personal Accountability

    SciTech Connect

    Harold Blackman; Oren Hester

    2011-09-01

    This paper describes a series of tools that were developed to achieve a balance in understanding LOWs and the human component of events (including accountability) as the INL continues its shift to a learning culture where people report, are accountable and interested in making a positive difference - and want to report because information is handled correctly and the result benefits both the reporting individual and the organization. We present our model for understanding these interrelationships and the initiatives that were undertaken to improve overall performance.

  12. Estimating the Population Distribution of Usual 24-Hour Sodium Excretion from Timed Urine Void Specimens Using a Statistical Approach Accounting for Correlated Measurement Errors

    PubMed Central

    Wang, Chia-Yih; Carriquiry, Alicia L; Chen, Te-Ching; Loria, Catherine M; Pfeiffer, Christine M; Liu, Kiang; Sempos, Christopher T; Perrine, Cria G; Cogswell, Mary E

    2015-01-01

    Background: High US sodium intake and national reduction efforts necessitate developing a feasible and valid monitoring method across the distribution of low-to-high sodium intake. Objective: We examined a statistical approach using timed urine voids to estimate the population distribution of usual 24-h sodium excretion. Methods: A sample of 407 adults, aged 18–39 y (54% female, 48% black), collected each void in a separate container for 24 h; 133 repeated the procedure 4–11 d later. Four timed voids (morning, afternoon, evening, overnight) were selected from each 24-h collection. We developed gender-specific equations to calibrate total sodium excreted in each of the one-void (e.g., morning) and combined two-void (e.g., morning + afternoon) urines to 24-h sodium excretion. The calibrated sodium excretions were used to estimate the population distribution of usual 24-h sodium excretion. Participants were then randomly assigned to modeling (n = 160) or validation (n = 247) groups to examine the bias in estimated population percentiles. Results: Median bias in predicting selected percentiles (5th, 25th, 50th, 75th, 95th) of usual 24-h sodium excretion with one-void urines ranged from −367 to 284 mg (−7.7 to 12.2% of the observed usual excretions) for men and −604 to 486 mg (−14.6 to 23.7%) for women, and with two-void urines from −338 to 263 mg (−6.9 to 10.4%) and −166 to 153 mg (−4.1 to 8.1%), respectively. Four of the 6 two-void urine combinations produced no significant bias in predicting selected percentiles. Conclusions: Our approach to estimate the population usual 24-h sodium excretion, which uses calibrated timed-void sodium to account for day-to-day variation and covariance between measurement errors, produced percentile estimates with relatively low biases across low-to-high sodium excretions. This may provide a low-burden, low-cost alternative to 24-h collections in monitoring population sodium intake among healthy young adults and

  13. Errors in airborne flux measurements

    NASA Astrophysics Data System (ADS)

    Mann, Jakob; Lenschow, Donald H.

    1994-07-01

    We present a general approach for estimating systematic and random errors in eddy correlation fluxes and flux gradients measured by aircraft in the convective boundary layer as a function of the length of the flight leg, or of the cutoff wavelength of a highpass filter. The estimates are obtained from empirical expressions for various length scales in the convective boundary layer and they are experimentally verified using data from the First ISLSCP (International Satellite Land Surface Climatology Project) Field Experiment (FIFE), the Air Mass Transformation Experiment (AMTEX), and the Electra Radome Experiment (ELDOME). We show that the systematic flux and flux gradient errors can be important if fluxes are calculated from a set of several short flight legs or if the vertical velocity and scalar time series are high-pass filtered. While the systematic error of the flux is usually negative, that of the flux gradient can change sign. For example, for temperature flux divergence the systematic error changes from negative to positive about a quarter of the way up in the convective boundary layer.

  14. Better Stability with Measurement Errors

    NASA Astrophysics Data System (ADS)

    Argun, Aykut; Volpe, Giovanni

    2016-06-01

    Often it is desirable to stabilize a system around an optimal state. This can be effectively accomplished using feedback control, where the system deviation from the desired state is measured in order to determine the magnitude of the restoring force to be applied. Contrary to conventional wisdom, i.e. that a more precise measurement is expected to improve the system stability, here we demonstrate that a certain degree of measurement error can improve the system stability. We exemplify the implications of this finding with numerical examples drawn from various fields, such as the operation of a temperature controller, the confinement of a microscopic particle, the localization of a target by a microswimmer, and the control of a population.

  15. Improved Error Thresholds for Measurement-Free Error Correction

    NASA Astrophysics Data System (ADS)

    Crow, Daniel; Joynt, Robert; Saffman, M.

    2016-09-01

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^-3 to 10^-4, comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.

  16. Measurement Error and Equating Error in Power Analysis

    ERIC Educational Resources Information Center

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  17. Noise in neural populations accounts for errors in working memory.

    PubMed

    Bays, Paul M

    2014-03-01

    Errors in short-term memory increase with the quantity of information stored, limiting the complexity of cognition and behavior. In visual memory, attempts to account for errors in terms of allocation of a limited pool of working memory resources have met with some success, but the biological basis for this cognitive architecture is unclear. An alternative perspective attributes recall errors to noise in tuned populations of neurons that encode stimulus features in spiking activity. I show that errors associated with decreasing signal strength in probabilistically spiking neurons reproduce the pattern of failures in human recall under increasing memory load. In particular, deviations from the normal distribution that are characteristic of working memory errors and have been attributed previously to guesses or variability in precision are shown to arise as a natural consequence of decoding populations of tuned neurons. Observers possess fine control over memory representations and prioritize accurate storage of behaviorally relevant information, at a cost to lower priority stimuli. I show that changing the input drive to neurons encoding a prioritized stimulus biases population activity in a manner that reproduces this empirical tradeoff in memory precision. In a task in which predictive cues indicate stimuli most probable for test, human observers use the cues in an optimal manner to maximize performance, within the constraints imposed by neural noise. PMID:24599462

  18. Correlated measurement error hampers association network inference.

    PubMed

    Kaduk, Mateusz; Hoefsloot, Huub C J; Vis, Daniel J; Reijmers, Theo; van der Greef, Jan; Smilde, Age K; Hendriks, Margriet M W B

    2014-09-01

    Modern chromatography-based metabolomics measurements generate large amounts of data in the form of abundances of metabolites. An increasingly popular way of representing and analyzing such data is by means of association networks. Ideally, such a network can be interpreted in terms of the underlying biology. A property of chromatography-based metabolomics data is that the measurement error structure is complex: apart from the usual (random) instrumental error there is also correlated measurement error. This is intrinsic to the way the samples are prepared and the analyses are performed and cannot be avoided. The impact of correlated measurement errors on (partial) correlation networks can be large and is not always predictable. The interplay between relative amounts of uncorrelated measurement error, correlated measurement error and biological variation defines this impact. Using chromatography-based time-resolved lipidomics data obtained from a human intervention study we show how partial correlation based association networks are influenced by correlated measurement error. We show how the effect of correlated measurement error on partial correlations is different for direct and indirect associations. For direct associations the correlated measurement error usually has no negative effect on the results, while for indirect associations, depending on the relative size of the correlated measurement error, results can become unreliable. The aim of this paper is to generate awareness of the existence of correlated measurement errors and their influence on association networks. Time series lipidomics data is used for this purpose, as it makes it possible to visually distinguish the correlated measurement error from a biological response. Underestimating the phenomenon of correlated measurement error will result in the suggestion of biologically meaningful results that in reality rest solely on complicated error structures. Using proper experimental designs that allow
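
    How correlated measurement error perturbs partial correlations can be illustrated with the standard precision-matrix computation; the covariance values below are invented, not the study's lipidomics data:

      # Partial correlations from the inverse covariance (precision) matrix:
      # pcorr_ij = -P_ij / sqrt(P_ii * P_jj). Adding an error component that
      # is shared across variables shifts these values.
      import numpy as np

      def partial_corr(cov):
          p = np.linalg.inv(cov)
          d = np.sqrt(np.diag(p))
          pc = -p / np.outer(d, d)
          np.fill_diagonal(pc, 1.0)
          return pc

      bio = np.array([[1.0, 0.6, 0.0],
                      [0.6, 1.0, 0.5],
                      [0.0, 0.5, 1.0]])          # "biological" covariance
      shared = 0.3 * np.ones((3, 3))             # correlated measurement error
      noise = 0.1 * np.eye(3)                    # uncorrelated instrumental error

      print(partial_corr(bio))
      print(partial_corr(bio + shared + noise))  # indirect associations shift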

  19. A manual accountability system designed to reduce operator error

    SciTech Connect

    Abramczyk, M R

    1989-01-01

    At the Savannah River Plant, the separations areas are not equipped with automated accountability systems, therefore accountability is performed manually. Several years ago, the Computer Systems Engineering group was requested to develop a computerized accountability system for the separations areas that would rely on manual entry and perform the necessary computations, adjust and maintain the books, and generate the necessary reports. In addition, the system would provide a complete audit trail and help reduce operator errors. Since the separations areas are actually divided into several material balance areas, the Computer Systems Engineering group was faced with several detailed specifications. Rather than designing a computerized accountability system for each material balance area, they designed a generic system that each area could tailor to its process. The system helps in reducing operator errors by displaying simple data entry forms, performing data validations when possible, providing field help, performing all computations, and generating the necessary reports. Many validation tables are user configurable, as well as the equations for computing transfer and inventory values. 8 figs.

  20. Impact of Measurement Error on Synchrophasor Applications

    SciTech Connect

    Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.; Zhao, Jiecheng; Tan, Jin; Wu, Ling; Zhan, Lingwei

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  1. Errors and Uncertainty in Physics Measurement.

    ERIC Educational Resources Information Center

    Blasiak, Wladyslaw

    1983-01-01

    Classifies errors as either systematic or blunder and uncertainties as either systematic or random. Discusses use of error/uncertainty analysis in direct/indirect measurement, describing the process of planning experiments to ensure lowest possible uncertainty. Also considers appropriate level of error analysis for high school physics students'…

  2. Error-compensation measurements on polarization qubits

    NASA Astrophysics Data System (ADS)

    Hou, Zhibo; Zhu, Huangjun; Xiang, Guo-Yong; Li, Chuan-Feng; Guo, Guang-Can

    2016-06-01

    Systematic errors are inevitable in most measurements performed in real life because of imperfect measurement devices. Reducing systematic errors is crucial to ensuring the accuracy and reliability of measurement results. To this end, delicate error-compensation design is often necessary in addition to device calibration to reduce the dependence of the systematic error on the imperfection of the devices. The art of error-compensation design is well appreciated in nuclear magnetic resonance systems, where composite pulses are used. In contrast, there are few works on reducing systematic errors in quantum optical systems. Here we propose an error-compensation design to reduce the systematic error in projective measurements on a polarization qubit. It can reduce the systematic error to the second order of the phase errors of both the half-wave plate (HWP) and the quarter-wave plate (QWP) as well as the angle error of the HWP. This technique is then applied to experiments on quantum state tomography on polarization qubits, leading to a 20-fold reduction in the systematic error. Our study may find applications in high-precision tasks in polarization optics and quantum optics.

  3. Rapid mapping of volumetric machine errors using distance measurements

    SciTech Connect

    Krulewich, D.A.

    1998-04-01

    This paper describes a relatively inexpensive, fast, and easy-to-execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between volumetric error and the current state of the machine; (2) acquiring error data based on distance measurements throughout the work volume; and (3) fitting the error model using the nonlinear equation for the distance. The error model is formulated from the kinematic relationship among the six degrees of freedom of error on each moving axis. Each parametric error, expressed as a function of position, is combined to predict the error between the functional point and the workpiece, also as a function of position. A series of distances between several fixed base locations and various functional points in the work volume is measured using a Laser Ball Bar (LBB). Each measured distance is a nonlinear function of the commanded location of the machine, the machine error, and the base locations. Using the error model, the nonlinear equation is solved, producing a fit for the error model. Note that, given approximate distances between each pair of base locations, the exact base locations in the machine coordinate system are determined during the nonlinear fitting procedure. Furthermore, with the use of more than three base locations, bias error in the measuring instrument can be removed. The volumetric errors of a three-axis commercial machining center have been mapped using this procedure. In this study, only errors associated with the nominal position of the machine were considered. Other errors, such as thermally induced and load-induced errors, were not considered, although the mathematical model has the ability to account for them. Due to the proprietary nature of the projects we are
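
    A toy version of step (3), with a one-parameter-per-axis scale-error model standing in for the full six-degree-of-freedom kinematic model, and the base location assumed known rather than fitted:

      # Fit per-axis scale errors from distances between a fixed base point
      # and commanded machine positions, as measured by a device like an LBB.
      import numpy as np
      from scipy.optimize import least_squares

      base = np.array([0.0, 0.0, 0.0])                      # base location (assumed known)
      cmd = np.array([[100., 0., 0.], [0., 100., 0.],
                      [0., 0., 100.], [100., 100., 100.]])  # commanded points
      true_scale = np.array([1.0001, 0.9998, 1.0002])       # simulated machine errors
      measured = np.linalg.norm(cmd * true_scale - base, axis=1)

      def residuals(scale):
          predicted = np.linalg.norm(cmd * scale - base, axis=1)
          return predicted - measured

      fit = least_squares(residuals, x0=np.ones(3))
      print(fit.x)   # recovers the per-axis scale errors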

  4. Is comprehension necessary for error detection? A conflict-based account of monitoring in speech production

    PubMed Central

    Nozari, Nazbanou; Dell, Gary S.; Schwartz, Myrna F.

    2011-01-01

    Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the double dissociation between comprehension and error-detection ability observed in aphasic patients. We propose a new theory of speech-error detection which is instead based on the production process itself. The theory borrows from studies of forced-choice-response tasks the notion that error detection is accomplished by monitoring response conflict via a frontal brain structure, such as the anterior cingulate cortex. We adapt this idea to the two-step model of word production, and test the model-derived predictions on a sample of aphasic patients. Our results show a strong correlation between patients’ error-detection ability and the model’s characterization of their production skills, and no significant correlation between error detection and comprehension measures, thus supporting a production-based monitor, generally, and the implemented conflict-based monitor in particular. The successful application of the conflict-based theory to error detection in linguistic as well as non-linguistic domains points to a domain-general monitoring system. PMID:21652015

  5. Error margin for antenna gain measurements

    NASA Technical Reports Server (NTRS)

    Cable, V.

    2002-01-01

    The specification of measured antenna gain is incomplete without knowing the error of the measurement. Also, unless gain is measured many times for a single antenna or over many identical antennas, the uncertainty or error in a single measurement is only an estimate. In this paper, we will examine in detail a typical error budget for common antenna gain measurements. We will also compute the gain uncertainty for a specific UHF horn test that was recently performed on the Jet Propulsion Laboratory (JPL) antenna range. The paper concludes with comments on these results and how they compare with the 'unofficial' JPL range standard of +/- ?.
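
    A typical gain error budget combines independent term estimates by root-sum-square; a minimal sketch, with placeholder term values rather than the figures from the JPL test:

      # Root-sum-square combination of independent error terms (in dB) for
      # an antenna gain measurement; the term values below are placeholders.
      import math

      terms_db = {
          "gain standard": 0.30,
          "mismatch": 0.15,
          "alignment": 0.10,
          "multipath": 0.20,
      }
      rss = math.sqrt(sum(v ** 2 for v in terms_db.values()))
      print(f"combined uncertainty: +/- {rss:.2f} dB")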

  6. Incorporating measurement error in n = 1 psychological autoregressive modeling.

    PubMed

    Schuurman, Noémi K; Houtveen, Jan H; Hamaker, Ellen L

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30-50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters.
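
    The attenuation from ignoring measurement error, and the ARMA route for accommodating it, can be reproduced in a few lines; a frequentist sketch (the paper also treats a Bayesian approach, not shown):

      # An AR(1) process observed with white measurement noise is an
      # ARMA(1,1) process; fitting AR(1) directly attenuates the AR estimate.
      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(1)
      n, phi = 500, 0.7
      latent = np.zeros(n)
      for t in range(1, n):
          latent[t] = phi * latent[t - 1] + rng.normal()
      observed = latent + rng.normal(scale=1.0, size=n)   # measurement error

      ar1 = ARIMA(observed, order=(1, 0, 0)).fit()
      arma11 = ARIMA(observed, order=(1, 0, 1)).fit()
      print(ar1.params)      # AR coefficient biased toward zero
      print(arma11.params)   # AR coefficient closer to 0.7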

  7. Error latency measurements in symbolic architectures

    NASA Technical Reports Server (NTRS)

    Young, L. T.; Iyer, R. K.

    1991-01-01

    Error latency, the time that elapses between the occurrence of an error and its detection, has a significant effect on reliability. In computer systems, failure rates can be elevated during a burst of system activity due to increased detection of latent errors. A hybrid monitoring environment is developed to measure the error latency distribution of errors occurring in main memory. The objective of this study is to develop a methodology for gauging the dependability of individual data categories within a real-time application. The hybrid monitoring technique is novel in that it selects and categorizes a specific subset of the available blocks of memory to monitor. The precise times of reads and writes are collected, so no actual faults need be injected. Unlike previous monitoring studies that rely on a periodic sampling approach or on statistical approximation, this new approach permits continuous monitoring of referencing activity and precise measurement of error latency.

  8. Prediction with measurement errors in finite populations

    PubMed Central

    Singer, Julio M; Stanek, Edward J; Lencina, Viviana B; González, Luz Mery; Li, Wenjun; Martino, Silvina San

    2011-01-01

    We address the problem of selecting the best linear unbiased predictor (BLUP) of the latent value (e.g., serum glucose fasting level) of sample subjects with heteroskedastic measurement errors. Using a simple example, we compare the usual mixed model BLUP to a similar predictor based on a mixed model framed in a finite population (FPMM) setup with two sources of variability, the first of which corresponds to simple random sampling and the second, to heteroskedastic measurement errors. Under this last approach, we show that when measurement errors are subject-specific, the BLUP shrinkage constants are based on a pooled measurement error variance as opposed to the individual ones generally considered for the usual mixed model BLUP. In contrast, when the heteroskedastic measurement errors are measurement condition-specific, the FPMM BLUP involves different shrinkage constants. We also show that in this setup, when measurement errors are subject-specific, the usual mixed model predictor is biased but has a smaller mean squared error than the FPMM BLUP, which points to some difficulties in the interpretation of such predictors. PMID:22162621
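
    As a rough illustration of the contrast drawn above, the following sketch (invented numbers, not the paper's FPMM derivation) computes shrinkage-type predictors using each subject's own error variance versus a pooled error variance:

```python
# Shrinkage-type predictors under heteroskedastic measurement error.
# All numbers are invented; sigma2_b is the between-subject variance and
# sigma2_e holds subject-specific measurement error variances.
import numpy as np

sigma2_b = 4.0                                  # between-subject variance (assumed)
sigma2_e = np.array([1.0, 2.0, 8.0])            # subject-specific error variances
y = np.array([102.0, 95.0, 110.0])              # observed values (e.g., glucose)
mu = y.mean()                                   # estimated population mean

k_individual = sigma2_b / (sigma2_b + sigma2_e)        # usual mixed-model shrinkage
k_pooled = sigma2_b / (sigma2_b + sigma2_e.mean())     # pooled-variance shrinkage

print("individual-variance BLUP:", mu + k_individual * (y - mu))
print("pooled-variance BLUP   :", mu + k_pooled * (y - mu))
```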

  9. A log-likelihood-gain intensity target for crystallographic phasing that accounts for experimental error.

    PubMed

    Read, Randy J; McCoy, Airlie J

    2016-03-01

    The crystallographic diffraction experiment measures Bragg intensities; crystallographic electron-density maps and other crystallographic calculations in phasing require structure-factor amplitudes. If data were measured with no errors, the structure-factor amplitudes would be trivially proportional to the square roots of the intensities. When the experimental errors are large, and especially when random errors yield negative net intensities, the conversion of intensities and their error estimates into amplitudes and associated error estimates becomes nontrivial. Although this problem has been addressed intermittently in the history of crystallographic phasing, current approaches to accounting for experimental errors in macromolecular crystallography have numerous significant defects. These have been addressed with the formulation of LLGI, a log-likelihood-gain function in terms of the Bragg intensities and their associated experimental error estimates. LLGI has the correct asymptotic behaviour for data with large experimental error, appropriately downweighting these reflections without introducing bias. LLGI abrogates the need for the conversion of intensity data to amplitudes, which is usually performed with the French and Wilson method [French & Wilson (1978), Acta Cryst. A34, 517-525], wherever likelihood target functions are required. It has general applicability for a wide variety of algorithms in macromolecular crystallography, including scaling, characterizing anisotropy and translational noncrystallographic symmetry, detecting outliers, experimental phasing, molecular replacement and refinement. Because it is impossible to reliably recover the original intensity data from amplitudes, it is suggested that crystallographers should always deposit the intensity data in the Protein Data Bank. PMID:26960124

  10. A log-likelihood-gain intensity target for crystallographic phasing that accounts for experimental error

    PubMed Central

    Read, Randy J.; McCoy, Airlie J.

    2016-01-01

    The crystallographic diffraction experiment measures Bragg intensities; crystallographic electron-density maps and other crystallographic calculations in phasing require structure-factor amplitudes. If data were measured with no errors, the structure-factor amplitudes would be trivially proportional to the square roots of the intensities. When the experimental errors are large, and especially when random errors yield negative net intensities, the conversion of intensities and their error estimates into amplitudes and associated error estimates becomes nontrivial. Although this problem has been addressed intermittently in the history of crystallographic phasing, current approaches to accounting for experimental errors in macromolecular crystallography have numerous significant defects. These have been addressed with the formulation of LLGI, a log-likelihood-gain function in terms of the Bragg intensities and their associated experimental error estimates. LLGI has the correct asymptotic behaviour for data with large experimental error, appropriately downweighting these reflections without introducing bias. LLGI abrogates the need for the conversion of intensity data to amplitudes, which is usually performed with the French and Wilson method [French & Wilson (1978), Acta Cryst. A34, 517–525], wherever likelihood target functions are required. It has general applicability for a wide variety of algorithms in macromolecular crystallography, including scaling, characterizing anisotropy and translational noncrystallographic symmetry, detecting outliers, experimental phasing, molecular replacement and refinement. Because it is impossible to reliably recover the original intensity data from amplitudes, it is suggested that crystallographers should always deposit the intensity data in the Protein Data Bank. PMID:26960124

  11. Power Measurement Errors on a Utility Aircraft

    NASA Technical Reports Server (NTRS)

    Bousman, William G.

    2002-01-01

    Extensive flight test data obtained from two recent performance tests of a UH-60A aircraft are reviewed. A power difference is calculated from the power balance equation and is used to examine power measurement errors. It is shown that the baseline measurement errors are highly non-Gaussian in their frequency distribution and are therefore influenced by additional, unquantified variables. Linear regression is used to examine the influence of other variables and it is shown that a substantial portion of the variance depends upon measurements of atmospheric parameters. Correcting for temperature dependence, although reducing the variance in the measurement errors, still leaves unquantified effects. Examination of the power difference over individual test runs indicates significant errors from drift, although it is unclear how these may be corrected. In an idealized case, where the drift is correctable, it is shown that the power measurement errors are significantly reduced and the error distribution is Gaussian. A new flight test program is recommended that will quantify the thermal environment for all torque measurements on the UH-60. Subsequently, the torque measurement systems will be recalibrated based on the measured thermal environment and a new power measurement assessment performed.

  12. Protecting weak measurements against systematic errors

    NASA Astrophysics Data System (ADS)

    Pang, Shengshi; Alonso, Jose Raul Gonzalez; Brun, Todd A.; Jordan, Andrew N.

    2016-07-01

    In this work, we consider the systematic error of quantum metrology by weak measurements under decoherence. We derive the systematic error of maximum likelihood estimation in general to the first-order approximation of a small deviation in the probability distribution and study the robustness of standard weak measurement and postselected weak measurements against systematic errors. We show that, with a large weak value, the systematic error of a postselected weak measurement when the probe undergoes decoherence can be significantly lower than that of a standard weak measurement. This indicates another advantage of weak-value amplification in improving the performance of parameter estimation. We illustrate the results by an exact numerical simulation of decoherence arising from a bosonic mode and compare it to the first-order analytical result we obtain.

  13. Laplace approximation in measurement error models.

    PubMed

    Battauz, Michela

    2011-05-01

    Likelihood analysis for regression models with measurement errors in explanatory variables typically involves integrals that do not have a closed-form solution. In this case, numerical methods such as Gaussian quadrature are generally employed. However, when the dimension of the integral is large, these methods become computationally demanding or even unfeasible. This paper proposes the use of the Laplace approximation to deal with measurement error problems when the likelihood function involves high-dimensional integrals. The cases considered are generalized linear models with multiple covariates measured with error and generalized linear mixed models with measurement error in the covariates. The asymptotic order of the approximation and the asymptotic properties of the Laplace-based estimator for these models are derived. The method is illustrated using simulations and real-data analysis.
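
    The core device is generic and can be sketched in one dimension: if h(x) is the log-integrand with mode x0, then ∫ exp(h(x)) dx ≈ exp(h(x0)) · sqrt(2π / (−h''(x0))). A minimal numerical sketch with an assumed, illustrative integrand:

```python
# Minimal sketch of the Laplace approximation, assuming the illustrative
# log-integrand h(x) = -x^4/4 - x^2/2.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

h = lambda x: -x**4 / 4.0 - x**2 / 2.0           # log-integrand (unnormalized)

x0 = minimize_scalar(lambda x: -h(x)).x          # mode of the integrand
eps = 1e-5                                       # central-difference 2nd derivative
h2 = (h(x0 + eps) - 2.0 * h(x0) + h(x0 - eps)) / eps**2

laplace = np.exp(h(x0)) * np.sqrt(2.0 * np.pi / -h2)
exact, _ = quad(lambda x: np.exp(h(x)), -np.inf, np.inf)
print(f"Laplace approximation: {laplace:.4f}   adaptive quadrature: {exact:.4f}")
```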

  14. Measuring Cyclic Error in Laser Heterodyne Interferometers

    NASA Technical Reports Server (NTRS)

    Ryan, Daniel; Abramovici, Alexander; Zhao, Feng; Dekens, Frank; An, Xin; Azizi, Alireza; Chapsky, Jacob; Halverson, Peter

    2010-01-01

    An improved method and apparatus have been devised for measuring cyclic errors in the readouts of laser heterodyne interferometers that are configured and operated as displacement gauges. The cyclic errors arise as a consequence of mixing of spurious optical and electrical signals in beam launchers that are subsystems of such interferometers. The conventional approach to measurement of cyclic error involves phase measurements and yields values precise to within about 10 pm over air optical paths at laser wavelengths in the visible and near infrared. The present approach, which involves amplitude measurements instead of phase measurements, yields values precise to about 0.1 pm, about 100 times the precision of the conventional approach. In a displacement gauge of the type of interest here, the laser heterodyne interferometer is used to measure any change in distance along an optical axis between two corner-cube retroreflectors. One of the corner-cube retroreflectors is mounted on a piezoelectric transducer, which is used to introduce a low-frequency periodic displacement that can be measured by the gauges. The transducer is excited at a frequency of 9 Hz by a triangular waveform to generate a 9-Hz triangular-wave displacement having an amplitude of 25 microns. The displacement gives rise to both amplitude and phase modulation of the heterodyne signals in the gauges. The modulation includes cyclic error components, and the magnitude of the cyclic-error component of the phase modulation is what one needs to measure in order to determine the magnitude of the cyclic displacement error. The precision attainable in the conventional (phase measurement) approach to measuring cyclic error is limited because the phase measurements are af-

  15. Gear Transmission Error Measurement System Made Operational

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.

    2002-01-01

    A system directly measuring the transmission error between meshing spur or helical gears was installed at the NASA Glenn Research Center and made operational in August 2001. This system employs light beams directed by lenses and prisms through gratings mounted on the two gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. The device is capable of resolution better than 0.1 µm (one thousandth the thickness of a human hair). The measured transmission error can be displayed in a "map" that shows how the transmission error varies with the gear rotation, or it can be converted to spectra to show the components at the meshing frequencies. Accurate transmission error data will help researchers better understand the mechanisms that cause gear noise and vibration. The Design Unit at the University of Newcastle in England specifically designed the new system for NASA. It is the only device in the United States that can measure dynamic transmission error at high rotational speeds. The new system will be used to develop new techniques to reduce dynamic transmission error along with the resulting noise and vibration of aeronautical transmissions.

  16. Reducing Measurement Error in Student Achievement Estimation

    ERIC Educational Resources Information Center

    Battauz, Michela; Bellio, Ruggero; Gori, Enrico

    2008-01-01

    The achievement level is a variable measured with error, that can be estimated by means of the Rasch model. Teacher grades also measure the achievement level but they are expressed on a different scale. This paper proposes a method for combining these two scores to obtain a synthetic measure of the achievement level based on the theory developed…

  17. Measurement error analysis of taxi meter

    NASA Astrophysics Data System (ADS)

    He, Hong; Li, Dan; Li, Hang; Zhang, Da-Jian; Hou, Ming-Feng; Zhang, Shi-pu

    2011-12-01

    Error testing of a taximeter covers two aspects: (1) testing the meter's timekeeping error, and (2) testing its distance (usage) error. The paper first describes the working principle of the taximeter and the principle of the error verification device. Based on JJG 517-2009, "Taximeter Verification Regulation", it analyzes the instrument error and test error of the taximeter, and discusses methods for detecting both the time error and the distance error. Type A standard uncertainty components are evaluated from repeated measurements under identical conditions, and Type B standard uncertainty components are evaluated under differing conditions. Comparison and analysis of the results show that the meter complies with JJG 517-2009, improving the accuracy and efficiency of verification. In practice, this not only compensates for limited meter accuracy but also helps ensure fair transactions between drivers and passengers, enhancing the value of the taxi as a mode of transportation.

  18. Interval sampling methods and measurement error: a computer simulation.

    PubMed

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments.
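
    A stripped-down version of such a simulation can be written in a few lines (the observation period, interval length, and event parameters below are assumed; the published study explored many more combinations and repeated runs):

```python
# Toy version of the interval-sampling simulation: events of fixed duration at
# random times, scored by momentary time sampling (MTS), partial-interval (PIR)
# and whole-interval (WIR) recording. All parameters are assumed.
import numpy as np

rng = np.random.default_rng(1)
period, interval, n_events, event_dur = 600.0, 10.0, 20, 4.0   # seconds

starts = rng.uniform(0.0, period - event_dur, n_events)        # random event onsets
ends = starts + event_dur

def occupied(t):
    """True if any event is ongoing at time t."""
    return bool(np.any((starts <= t) & (t < ends)))

edges = np.arange(0.0, period, interval)                       # interval start times
grid = np.linspace(0.0, period, 60001)                         # fine grid ~ ground truth
occ = np.array([occupied(t) for t in grid])

mts = np.mean([occupied(a + interval) for a in edges])         # sample at interval ends
pir = np.mean([occ[(grid >= a) & (grid < a + interval)].any() for a in edges])
wir = np.mean([occ[(grid >= a) & (grid < a + interval)].all() for a in edges])

print(f"true occupancy {occ.mean():.2f} | MTS {mts:.2f} | PIR {pir:.2f} | WIR {wir:.2f}")
```

    Even this toy run shows the characteristic biases the study quantifies: partial-interval recording tends to overestimate occupancy and whole-interval recording tends to underestimate it, while momentary time sampling is roughly unbiased on average.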

  19. Technical approaches for measurement of human errors

    NASA Technical Reports Server (NTRS)

    Clement, W. F.; Heffley, R. K.; Jewell, W. F.; Mcruer, D. T.

    1980-01-01

    Human error is a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents. The technical details of a variety of proven approaches for the measurement of human errors in the context of the national airspace system are presented. Unobtrusive measurements suitable for cockpit operations and procedures in part- or full-mission simulation are emphasized. Procedure-, system performance-, and human operator-centered measurements are discussed as they apply to the manual control, communication, supervisory, and monitoring tasks that are relevant to aviation operations.

  20. Neutron multiplication error in TRU waste measurements

    SciTech Connect

    Veilleux, John; Stanfield, Sean B; Wachter, Joe; Ceo, Bob

    2009-01-01

    Total Measurement Uncertainty (TMU) in neutron assays of transuranic (TRU) waste comprises several components, including counting statistics, matrix and source distribution, calibration inaccuracy, background effects, and neutron multiplication error. While a minor component for low plutonium masses, neutron multiplication error is often the major contributor to the TMU for items containing more than 140 g of weapons-grade plutonium. Neutron multiplication arises when neutrons from spontaneous fission and other nuclear events induce fissions in other fissile isotopes in the waste, thereby multiplying the overall coincidence neutron response in passive neutron measurements. Since passive neutron counters cannot differentiate between spontaneous and induced fission neutrons, multiplication can lead to positive bias in the measurements. Although neutron multiplication can only result in a positive bias, it has, for the purpose of mathematical simplicity, generally been treated as an error that can lead to either a positive or negative result in the TMU. While the factors that contribute to neutron multiplication include the total mass of fissile nuclides, the presence of moderating material in the matrix, the concentration and geometry of the fissile sources, and other factors, measurement uncertainty is generally determined as a function of the fissile mass in most TMU software calculations because this is the only quantity determined by the passive neutron measurement. Neutron multiplication error has a particularly pernicious consequence for TRU waste analysis because the measured Fissile Gram Equivalent (FGE) plus twice the TMU error must be less than 200 for TRU waste packaged in 55-gal drums and less than 325 for boxed waste. For this reason, large errors due to neutron multiplication can lead to increased rejections of TRU waste containers. This report will attempt to better define the error term due to neutron multiplication and arrive at values that are

  1. Conditional Density Estimation in Measurement Error Problems.

    PubMed

    Wang, Xiao-Feng; Ye, Deping

    2015-01-01

    This paper is motivated by a wide range of background correction problems in gene array data analysis, where the raw gene expression intensities are measured with error. Estimating a conditional density function from the contaminated expression data is a key aspect of statistical inference and visualization in these studies. We propose re-weighted deconvolution kernel methods to estimate the conditional density function in an additive error model, when the error distribution is known as well as when it is unknown. Theoretical properties of the proposed estimators are investigated with respect to the mean absolute error from a "double asymptotic" view. Practical rules are developed for the selection of smoothing parameters. Simulated examples and an application to an Illumina bead microarray study are presented to illustrate the viability of the methods. PMID:25284902

  2. Measurement error in human dental mensuration.

    PubMed

    Kieser, J A; Groeneveld, H T; McKee, J; Cameron, N

    1990-01-01

    The reliability of human odontometric data was evaluated in a sample of 60 teeth. Three observers, using their own instruments and the same definitions of the mesiodistal and buccolingual dimensions, were asked to repeat their measurements after 2 months. Precision, or repeatability, was analysed by means of Pearsonian correlation coefficients and mean absolute error values. Accuracy, or the absence of bias, was evaluated by means of Bland-Altman procedures and attendant Student t-tests, and also by an ANOVA procedure. The present investigation suggests that odontometric data have a high interobserver error component. Mesiodistal dimensions show greater imprecision and bias than buccolingual measurements. The results of the ANOVA suggest that bias is the result of interobserver error and is not due to the time between repeated measurements.

  3. Is Comprehension Necessary for Error Detection? A Conflict-Based Account of Monitoring in Speech Production

    ERIC Educational Resources Information Center

    Nozari, Nazbanou; Dell, Gary S.; Schwartz, Myrna F.

    2011-01-01

    Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the…

  4. Measurement System Characterization in the Presence of Measurement Errors

    NASA Technical Reports Server (NTRS)

    Commo, Sean A.

    2012-01-01

    In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.
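
    The paper's modified least squares method is not reproduced here; as a point of reference, a standard errors-in-variables technique that likewise hinges on a known variance ratio is Deming regression. A minimal sketch with simulated data (lam is the assumed ratio of response-error variance to measurement-error variance):

```python
# Deming regression sketch: an errors-in-variables fit using the variance
# ratio lam = var(response error) / var(measurement error). Data are simulated.
import numpy as np

def deming(x, y, lam):
    """Deming regression slope and intercept for a known variance ratio lam."""
    mx, my = x.mean(), y.mean()
    sxx = np.mean((x - mx) ** 2)
    syy = np.mean((y - my) ** 2)
    sxy = np.mean((x - mx) * (y - my))
    slope = (syy - lam * sxx
             + np.sqrt((syy - lam * sxx) ** 2 + 4.0 * lam * sxy ** 2)) / (2.0 * sxy)
    return slope, my - slope * mx

rng = np.random.default_rng(2)
truth = rng.uniform(0.0, 10.0, 200)
x = truth + rng.normal(scale=1.5, size=200)            # factor observed with error
y = 2.0 * truth + 1.0 + rng.normal(scale=0.5, size=200)

print("OLS slope   :", np.polyfit(x, y, 1)[0])          # attenuated toward zero
print("Deming slope:", deming(x, y, lam=(0.5 / 1.5) ** 2)[0])  # ~2.0
```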

  5. Multiple Indicators, Multiple Causes Measurement Error Models

    PubMed Central

    Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.; Carroll, Raymond J.

    2014-01-01

    Multiple Indicators, Multiple Causes Models (MIMIC) are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times however when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this paper are: (1) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model, (2) to develop likelihood based estimation methods for the MIMIC ME model, (3) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. As a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure. PMID:24962535

  6. Multiple indicators, multiple causes measurement error models.

    PubMed

    Tekwe, Carmen D; Carter, Randy L; Cullings, Harry M; Carroll, Raymond J

    2014-11-10

    Multiple indicators, multiple causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this paper are as follows: (i) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model; (ii) to develop likelihood-based estimation methods for the MIMIC ME model; and (iii) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. As a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure. PMID:24962535
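
    The two error types combined in the MIMIC ME model differ in which quantity carries the extra noise; a minimal sketch (assumed values) makes the distinction concrete:

```python
# Classical vs. Berkson measurement error (assumed values, illustration only).
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Classical error: the truth varies and the instrument adds noise on top,
# so the observed W = X + U is noisier than the truth X.
x_cls = rng.normal(5.0, 1.0, n)           # true exposure
w_cls = x_cls + rng.normal(0.0, 0.5, n)   # observed value

# Berkson error: a nominal value is recorded (e.g., an assigned dose) and the
# truth scatters around it, so the truth X = W + U is noisier than observed W.
w_brk = rng.normal(5.0, 1.0, n)           # recorded/assigned value
x_brk = w_brk + rng.normal(0.0, 0.5, n)   # true value

print("classical: var(W) > var(X) ->", w_cls.var() > x_cls.var())
print("Berkson:   var(X) > var(W) ->", x_brk.var() > w_brk.var())
```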

  7. Algorithmic Error Correction of Impedance Measuring Sensors

    PubMed Central

    Starostenko, Oleg; Alarcon-Aquino, Vicente; Hernandez, Wilmar; Sergiyenko, Oleg; Tyrsa, Vira

    2009-01-01

    This paper describes novel design concepts and advanced techniques proposed for increasing the accuracy of low-cost impedance measuring devices without reducing operational speed. The proposed structural method for algorithmic error correction and the iterating correction method provide linearization of the transfer functions of the measuring sensor and the signal-conditioning converter, which contribute the principal additive and relative measurement errors. Several measuring systems have been implemented in order to estimate the practical performance of the proposed methods. In particular, a measuring system for the analysis of C-V and G-V characteristics has been designed and constructed, and it has been tested during process control of charge-coupled device (CCD) manufacturing. The obtained results are discussed in order to define a reasonable range of application for the methods, their utility, and their performance. PMID:22303177

  8. New Gear Transmission Error Measurement System Designed

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.

    2001-01-01

    The prime source of vibration and noise in a gear system is the transmission error between the meshing gears. Transmission error is caused by manufacturing inaccuracy, mounting errors, and elastic deflections under load. Gear designers often attempt to compensate for transmission error by modifying gear teeth. This is done traditionally by a rough "rule of thumb" or more recently under the guidance of an analytical code. In order for a designer to have confidence in a code, the code must be validated through experiment. NASA Glenn Research Center contracted with the Design Unit of the University of Newcastle in England for a system to measure the transmission error of spur and helical test gears in the NASA Gear Noise Rig. The new system measures transmission error optically by means of light beams directed by lenses and prisms through gratings mounted on the gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. A photodetector circuit converts the light to an analog electrical signal. To increase accuracy and reduce "noise" due to transverse vibration, there are parallel light paths at the top and bottom of the gears. The two signals are subtracted via differential amplifiers in the electronics package. The output of the system is 40 mV/µm, giving a resolution in the time domain of better than 0.1 µm, and discrimination in the frequency domain of better than 0.01 µm. The new system will be used to validate gear analytical codes and to investigate mechanisms that produce vibration and noise in parallel axis gears.

  9. Improving Localization Accuracy: Successive Measurements Error Modeling

    PubMed Central

    Abu Ali, Najah; Abu-Elkheir, Mervat

    2015-01-01

    Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling of positioning error. We use the Yule-Walker equations to determine the degree of correlation between a vehicle's future position and its past positions, and then propose a p-order Gauss-Markov model to predict the future position of a vehicle from its past p positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can persist for up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss-Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle's future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter. PMID:26140345

  10. Improving Localization Accuracy: Successive Measurements Error Modeling.

    PubMed

    Ali, Najah Abu; Abu-Elkheir, Mervat

    2015-01-01

    Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling of positioning error. We use the Yule-Walker equations to determine the degree of correlation between a vehicle's future position and its past positions, and then propose a p-order Gauss-Markov model to predict the future position of a vehicle from its past p positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can persist for up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss-Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle's future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter. PMID:26140345
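
    The estimation step can be sketched directly (synthetic trace, not the authors' implementation): solve the Yule-Walker equations from sample autocovariances to obtain the coefficients of a p-order Gauss-Markov (AR(p)) model, then predict the next position:

```python
# Yule-Walker fit of an AR(p) (p-order Gauss-Markov) model to a synthetic
# 1-D position trace; all parameters are assumed for illustration.
import numpy as np

def yule_walker(x, p):
    """Solve the Yule-Walker equations for AR(p) coefficients."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    r = np.array([x[: len(x) - k] @ x[k:] / len(x) for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:])

rng = np.random.default_rng(4)
inc = np.zeros(300)
for t in range(2, 300):                        # correlated position increments
    inc[t] = 0.5 * inc[t - 1] + 0.2 * inc[t - 2] + rng.normal(0.0, 0.1)
pos = 100.0 + np.cumsum(inc)                   # toy position trace

p = 2
a = yule_walker(np.diff(pos), p)               # fit AR(p) to the increments
recent = np.diff(pos)[::-1][:p]                # most recent p increments, newest first
print("AR coefficients (truth ~ [0.5, 0.2]):", np.round(a, 2))
print("predicted next position:", round(pos[-1] + a @ recent, 3))
```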

  11. Accounting for sampling variability, injury under-reporting, and sensor error in concussion injury risk curves.

    PubMed

    Elliott, Michael R; Margulies, Susan S; Maltese, Matthew R; Arbogast, Kristy B

    2015-09-18

    There has been recent dramatic increase in the use of sensors affixed to the heads or helmets of athletes to measure the biomechanics of head impacts that lead to concussion. The relationship between injury and linear or rotational head acceleration measured by such sensors can be quantified with an injury risk curve. The utility of the injury risk curve relies on the accuracy of both the clinical diagnosis and the biomechanical measure. The focus of our analysis was to demonstrate the influence of three sources of error on the shape and interpretation of concussion injury risk curves: sampling variability associated with a rare event, concussion under-reporting, and sensor measurement error. We utilized Bayesian statistical methods to generate synthetic data from previously published concussion injury risk curves developed using data from helmet-based sensors on collegiate football players and assessed the effect of the three sources of error on the risk relationship. Accounting for sampling variability adds uncertainty or width to the injury risk curve. Assuming a variety of rates of unreported concussions in the non-concussed group, we found that accounting for under-reporting lowers the rotational acceleration required for a given concussion risk. Lastly, after accounting for sensor error, we find strengthened relationships between rotational acceleration and injury risk, further lowering the magnitude of rotational acceleration needed for a given risk of concussion. As more accurate sensors are designed and more sensitive and specific clinical diagnostic tools are introduced, our analysis provides guidance for the future development of comprehensive concussion risk curves. PMID:26296855

  12. Accounting for sampling variability, injury under-reporting, and sensor error in concussion injury risk curves.

    PubMed

    Elliott, Michael R; Margulies, Susan S; Maltese, Matthew R; Arbogast, Kristy B

    2015-09-18

    There has been recent dramatic increase in the use of sensors affixed to the heads or helmets of athletes to measure the biomechanics of head impacts that lead to concussion. The relationship between injury and linear or rotational head acceleration measured by such sensors can be quantified with an injury risk curve. The utility of the injury risk curve relies on the accuracy of both the clinical diagnosis and the biomechanical measure. The focus of our analysis was to demonstrate the influence of three sources of error on the shape and interpretation of concussion injury risk curves: sampling variability associated with a rare event, concussion under-reporting, and sensor measurement error. We utilized Bayesian statistical methods to generate synthetic data from previously published concussion injury risk curves developed using data from helmet-based sensors on collegiate football players and assessed the effect of the three sources of error on the risk relationship. Accounting for sampling variability adds uncertainty or width to the injury risk curve. Assuming a variety of rates of unreported concussions in the non-concussed group, we found that accounting for under-reporting lowers the rotational acceleration required for a given concussion risk. Lastly, after accounting for sensor error, we find strengthened relationships between rotational acceleration and injury risk, further lowering the magnitude of rotational acceleration needed for a given risk of concussion. As more accurate sensors are designed and more sensitive and specific clinical diagnostic tools are introduced, our analysis provides guidance for the future development of comprehensive concussion risk curves.
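
    The under-reporting effect is easy to demonstrate with a small logistic-regression simulation (all parameters assumed; the study used Bayesian methods on published risk curves rather than this naive maximum-likelihood fit):

```python
# Effect of concussion under-reporting on a fitted logistic risk curve.
# Every number here is assumed for illustration.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n = 5000
accel = rng.gamma(shape=2.0, scale=1500.0, size=n)    # rotational accel, rad/s^2
x = accel / 1000.0                                    # krad/s^2, for stable fitting

p_true = 1.0 / (1.0 + np.exp(-(x - 6.0)))             # assumed true risk curve
injured = rng.uniform(size=n) < p_true
reported = injured & (rng.uniform(size=n) < 0.5)      # half of injuries unreported

def neg_loglik(beta, y):
    z = beta[0] + beta[1] * x                         # logistic model in x
    return np.sum(np.logaddexp(0.0, z) - y * z)       # negative log-likelihood

b_true = minimize(neg_loglik, [-6.0, 1.0], args=(injured.astype(float),)).x
b_rep = minimize(neg_loglik, [-6.0, 1.0], args=(reported.astype(float),)).x

def accel_at_risk(b, risk):
    """Invert the fitted curve: acceleration (rad/s^2) at a given risk level."""
    return (np.log(risk / (1.0 - risk)) - b[0]) / b[1] * 1000.0

print(f"accel at 25% risk, true labels    : {accel_at_risk(b_true, 0.25):.0f} rad/s^2")
print(f"accel at 25% risk, under-reported : {accel_at_risk(b_rep, 0.25):.0f} rad/s^2")
```

    Relabeling half of the injuries as non-injuries shifts the fitted curve to the right, so the naive curve overstates the acceleration required for a given risk; correcting for under-reporting moves that acceleration back down, as described above.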

  13. Multiscale measurement error models for aggregated small area health data.

    PubMed

    Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin

    2016-08-01

    Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smoothes the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via a shared random effect have been proposed. One of the main goals in aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we apply measurement error models in the framework of the multiscale approach. To assess the benefit of using multiscale measurement error models, we compare the performance of multiscale models with and without measurement error in both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates. PMID:27566773

  14. Multiscale measurement error models for aggregated small area health data.

    PubMed

    Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin

    2016-08-01

    Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smoothes the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via a shared random effect have been proposed. One of the main goals in aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we apply measurement error models in the framework of the multiscale approach. To assess the benefit of using multiscale measurement error models, we compare the performance of multiscale models with and without measurement error in both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates.

  15. Reconsideration of measurement of error in human motor learning.

    PubMed

    Crabtree, D A; Antrim, L R

    1988-10-01

    Human motor learning is often measured by error scores. The convention of using mean absolute error, mean constant error, and variable error lacks desirable parsimony and interpretability. This paper reviews the background of error measurement and states criticisms of the conventional methodology. A parsimonious model of error analysis is provided, along with operationalized interpretations and implications for motor learning. Teaching, interpreting, and using error scores in research may be simplified and facilitated with the model.

  16. Risk, Error and Accountability: Improving the Practice of School Leaders

    ERIC Educational Resources Information Center

    Perry, Lee-Anne

    2006-01-01

    This paper seeks to explore the notion of risk as an organisational logic within schools, the impact of contemporary accountability regimes on managing risk and then, in turn, to posit a systems-based process of risk management underpinned by a positive logic of risk. It moves through a number of steps beginning with the development of an…

  17. Criticality measurements for SNM accountability

    SciTech Connect

    Bohman, J.; Martin, E.R.; Butterfield, K.; Paternoster, R.

    1998-03-01

    Based on extensive operating experience with the Godiva IV fast metal burst assembly at Los Alamos National Laboratory, the authors were able to create data plots for reactivity worths of standard configurations at various temperatures and room return locations. These plots show that the material uncertainties in criticality measurements are within ±20 grams out of the 65.4 kilogram HEU Godiva core. This is superior to active neutron well coincidence counter (AWCC) measurements. The criticality measurements have the additional advantage of not requiring disassembly of the reactor. No disassembly means the measurement takes less time--it can be done during each operation--and there is less dose to measurement personnel.

  18. Non-Gaussian error distribution of 7Li abundance measurements

    NASA Astrophysics Data System (ADS)

    Crandall, Sara; Houston, Stephen; Ratra, Bharat

    2015-07-01

    We construct the error distribution of 7Li abundance measurements for 66 observations (with error bars) used by Spite et al. (2012) that give A(Li) = 2.21 ± 0.065 (median and 1σ symmetrized error). This error distribution is somewhat non-Gaussian, with larger probability in the tails than is predicted by a Gaussian distribution. The 95.4% confidence limits are 3.0σ in terms of the quoted errors. We fit the data to four commonly used distributions: Gaussian, Cauchy, Student’s t and double exponential with the center of the distribution found with both weighted mean and median statistics. It is reasonably well described by a widened n = 8 Student’s t distribution. Assuming Gaussianity, the observed A(Li) is 6.5σ away from that expected from standard Big Bang Nucleosynthesis (BBN) given the Planck observations. Accounting for the non-Gaussianity of the observed A(Li) error distribution reduces the discrepancy to 4.9σ, which is still significant.

  19. Errors Associated With Measurements from Imaging Probes

    NASA Astrophysics Data System (ADS)

    Heymsfield, A.; Bansemer, A.

    2015-12-01

    Imaging probes, collecting data on particles from about 20 or 50 microns to several centimeters, are the probes that have been collecting data on droplet and ice microphysics for more than 40 years. During that period, a number of problems associated with the measurements have been identified, including questions about the depth of field of particles within the probes' sample volume and ice shattering, among others. Many different software packages have been developed to process and interpret the data, leading to differences in the particle size distributions and estimates of the extinction, ice water content, and radar reflectivity obtained from the same data. Given the numerous complications associated with imaging probe data, we have developed an optical array probe simulation package to explore the errors that can be expected with actual data. We simulate full particle size distributions with known properties and then process the data with the same software that is used to process real-life data. We show that there are significant errors in the retrieved particle size distributions as well as derived parameters such as liquid/ice water content and total number concentration. Furthermore, the nature of these errors changes as a function of the shape of the simulated size distribution and the physical and electronic characteristics of the instrument. We will introduce some methods to improve the retrieval of particle size distributions from real-life data.

  20. Optimal control design that accounts for model mismatch errors

    SciTech Connect

    Kim, T.J.; Hull, D.G.

    1995-02-01

    A new technique is presented in this paper that reduces the complexity of state differential equations while accounting for modeling assumptions. The mismatch controls are defined as the differences between the model equations and the true state equations. The performance index of the optimal control problem is formulated with a set of tuning parameters that are user-selected to tune the control solution in order to achieve the best results. Computer simulations demonstrate that the tuned control law outperforms the untuned controller and produces results that are comparable to a numerically-determined, piecewise-linear optimal controller.

  1. Bayesian conformity assessment in presence of systematic measurement errors

    NASA Astrophysics Data System (ADS)

    Carobbi, Carlo; Pennecchi, Francesca

    2016-04-01

    Conformity assessment of the distribution of the values of a quantity is investigated by using a Bayesian approach. The effect of systematic, non-negligible measurement errors is taken into account. The analysis is general, in the sense that the probability distribution of the quantity can be of any kind, that is even different from the ubiquitous normal distribution, and the measurement model function, linking the measurand with the observable and non-observable influence quantities, can be non-linear. Further, any joint probability density function can be used to model the available knowledge about the systematic errors. It is demonstrated that the result of the Bayesian analysis here developed reduces to the standard result (obtained through a frequentistic approach) when the systematic measurement errors are negligible. A consolidated frequentistic extension of such standard result, aimed at including the effect of a systematic measurement error, is directly compared with the Bayesian result, whose superiority is demonstrated. Application of the results here obtained to the derivation of the operating characteristic curves used for sampling plans for inspection by variables is also introduced.
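
    For the simplest normal-normal special case the posterior conformity probability has a closed form. The sketch below (all numbers assumed) is far less general than the framework described above, but it shows the mechanics of folding a systematic-error variance into the assessment:

```python
# Normal-normal sketch of Bayesian conformity assessment. Prior X ~ N(mu0, s0^2);
# observation Y = X + B + E with systematic error B ~ N(0, sb^2) and random
# error E ~ N(0, se^2). All values are assumed for illustration.
from math import sqrt
from scipy.stats import norm

mu0, s0 = 10.0, 2.0          # prior knowledge of the quantity (assumed)
sb, se = 0.3, 0.2            # systematic / random error standard deviations
y = 11.2                     # the observed value
L, U = 8.0, 12.0             # specification (tolerance) limits

st2 = sb**2 + se**2                                   # total observation variance
post_var = 1.0 / (1.0 / s0**2 + 1.0 / st2)            # conjugate normal update
post_mean = post_var * (mu0 / s0**2 + y / st2)

p_conform = (norm.cdf(U, post_mean, sqrt(post_var))
             - norm.cdf(L, post_mean, sqrt(post_var)))
print(f"posterior N({post_mean:.3f}, {sqrt(post_var):.3f}^2); "
      f"P(conform) = {p_conform:.3f}")
```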

  2. 50 CFR 648.323 - Accountability measures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 50 Wildlife and Fisheries 10 2011-10-01 2011-10-01 false Accountability measures. 648.323 Section... ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE NORTHEASTERN UNITED STATES Management Measures for the NE Skate Complex Fisheries § 648.323 Accountability measures. (a) TAL overages. If the skate wing...

  3. Laser measurement and analysis of reposition error in polishing systems

    NASA Astrophysics Data System (ADS)

    Liu, Weisen; Wang, Junhua; Xu, Min; He, Xiaoying

    2015-10-01

    In this paper, a robotic reposition error measurement method based on laser interference remote positioning is presented. The geometric error of a robot-based polishing system is analyzed, and a mathematical model of the tilt error is developed. The analysis shows that errors below 1 mm are caused mainly by tilt error at small incident angles. Marking the spot position with an interference fringe greatly enhances the error measurement precision; the tilt error can be measured to within 5 µm. The measurements show that the reposition error of the polishing system stems mainly from the tilt error introduced by motor A, and that repositioning precision increased greatly after the polishing system was improved. The method has important applications in practical error measurement, offering low cost and simple operation.

  4. The estimation of parameters in nonlinear, implicit measurement error models with experiment-wide measurements

    SciTech Connect

    Anderson, K.K.

    1994-05-01

    Measurement error modeling is a statistical approach to the estimation of unknown model parameters which takes into account the measurement errors in all of the data. Approaches which ignore the measurement errors in so-called independent variables may yield inferior estimates of unknown model parameters. At the same time, experiment-wide variables (such as physical constants) are often treated as known without error, when in fact they were produced from prior experiments. Realistic assessments of the associated uncertainties in the experiment-wide variables can be utilized to improve the estimation of unknown model parameters. A maximum likelihood approach to incorporate measurements of experiment-wide variables and their associated uncertainties is presented here. An iterative algorithm is presented which yields estimates of unknown model parameters and their estimated covariance matrix. Further, the algorithm can be used to assess the sensitivity of the estimates and their estimated covariance matrix to the given experiment-wide variables and their associated uncertainties.

  5. 50 CFR 648.323 - Accountability measures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Accountability measures. 648.323 Section 648.323 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC... Skate Complex Fisheries § 648.323 Accountability measures. (a) TAL overages. If the skate wing...

  6. 50 CFR 648.323 - Accountability measures.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Accountability measures. 648.323 Section 648.323 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC... Skate Complex Fisheries § 648.323 Accountability measures. (a) TAL overages. If the skate wing...

  7. Inter-tester Agreement in Refractive Error Measurements

    PubMed Central

    Huang, Jiayan; Maguire, Maureen G.; Ciner, Elise; Kulp, Marjean T.; Quinn, Graham E.; Orel-Bixler, Deborah; Cyert, Lynn A.; Moore, Bruce; Ying, Gui-Shuang

    2014-01-01

    Purpose To determine the inter-tester agreement of refractive error measurements between lay and nurse screeners using the Retinomax Autorefractor (Retinomax) and the SureSight Vision Screener (SureSight). Methods Trained lay and nurse screeners measured refractive error in 1452 preschoolers (3 to 5 years old) using the Retinomax and the SureSight in a random order for screeners and instruments. Inter-tester agreement between lay and nurse screeners was assessed for sphere, cylinder, and spherical equivalent (SE) using the mean difference and the 95% limits of agreement. The mean inter-tester difference (lay minus nurse) was compared between groups defined based on the child's age, cycloplegic refractive error, and the reading's confidence number using analysis of variance. The limits of agreement were compared between groups using the Brown-Forsythe test. Inter-eye correlation was accounted for in all analyses. Results The mean inter-tester differences (95% limits of agreement) were −0.04 (−1.63, 1.54) Diopter (D) sphere, 0.00 (−0.52, 0.51) D cylinder, and −0.04 (−1.65, 1.56) D SE for the Retinomax; and 0.05 (−1.48, 1.58) D sphere, 0.01 (−0.58, 0.60) D cylinder, and 0.06 (−1.45, 1.57) D SE for the SureSight. For either instrument, the mean inter-tester differences in sphere and SE did not differ by the child's age, cycloplegic refractive error, or the reading's confidence number. However, for both instruments, the limits of agreement were wider when eyes had significant refractive error or the reading's confidence number was below the manufacturer's recommended value. Conclusions Among Head Start preschool children, trained lay and nurse screeners agree well in measuring refractive error using the Retinomax or the SureSight. Both instruments had similar inter-tester agreement in refractive error measurements independent of the child's age. Significant refractive error and a reading with low confidence number were associated with worse inter
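
    The agreement statistics quoted above are standard Bland-Altman quantities; a minimal sketch with synthetic paired readings (the study additionally accounted for inter-eye correlation, which is omitted here):

```python
# Bland-Altman mean difference and 95% limits of agreement (synthetic data).
import numpy as np

rng = np.random.default_rng(6)
nurse = rng.normal(1.0, 1.5, 200)               # sphere (D) measured by nurse screener
lay = nurse + rng.normal(-0.04, 0.8, 200)       # lay screener's paired re-measurement

diff = lay - nurse
mean_diff = diff.mean()
half_width = 1.96 * diff.std(ddof=1)            # 95% limits assume approximate normality
print(f"mean difference {mean_diff:+.2f} D, 95% limits of agreement "
      f"({mean_diff - half_width:.2f}, {mean_diff + half_width:.2f})")
```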

  8. HB-Line Material Control and Accountability Measurements at SRS

    SciTech Connect

    Casella, V.R.

    2003-06-27

    Presently, HB-Line work at the Savannah River Site consists primarily of the stabilization and packaging of nuclear materials for storage and the characterization of materials for disposition in H-Area. In order to ensure compliance with Material Control and Accountability (MC and A) Regulations, accountability measurements are performed throughout the HB-Line processes. Accountability measurements are used to keep track of the nuclear material inventory by constantly updating the amount of material in the MBAs (Material Balance Areas) and sub-MBAs: the amount of accountable material added to a process is subtracted, and the amount of accountable material put back in storage is added. A physical inventory is taken and compared to the "book value" listed in the Nuclear Material Accounting System. The book-minus-physical inventory difference (BPID) of a sub-account for bulk material must agree within the measurement errors combined in quadrature to provide assurance that the nuclear material is accounted for. This work provides an overview of HB-Line processes and accountability measurements. The Scrap Recovery Line and Neptunium-237/Plutonium-239 Oxide Line are described, and sampling and analyses for Phase II are provided. Recommendations are given for improving efficiency and cost effectiveness.
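
    The quadrature test described above amounts to a root-sum-square comparison; a minimal sketch with invented inventory values and measurement uncertainties:

```python
# Book-minus-physical inventory difference (BPID) vs. uncertainties combined
# in quadrature. All values are invented for illustration.
from math import sqrt

book, physical = 1250.0, 1247.8        # grams of accountable material
sigmas = [1.2, 0.8, 1.5]               # 1-sigma errors of contributing measurements

bpid = book - physical
u_combined = sqrt(sum(s**2 for s in sigmas))   # root-sum-square combination
k = 2                                          # coverage factor (assumption)
verdict = "OK" if abs(bpid) <= k * u_combined else "INVESTIGATE"
print(f"BPID = {bpid:.1f} g, limit (k={k}) = {k * u_combined:.1f} g -> {verdict}")
```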

  9. Reducing Errors by Use of Redundancy in Gravity Measurements

    NASA Technical Reports Server (NTRS)

    Kulikov, Igor; Zak, Michail

    2004-01-01

    A methodology for improving gravity-gradient measurement data exploits the constraints imposed upon the components of the gravity-gradient tensor by the conditions of integrability needed for reconstruction of the gravitational potential. These constraints are derived from the basic equation for the gravitational potential and from mathematical identities that apply to the gravitational potential and its partial derivatives with respect to spatial coordinates. Consider the gravitational potential in a Cartesian coordinate system {x1,x2,x3}. If one measures all the components of the gravity-gradient tensor at all points of interest within a region of space in which one seeks to characterize the gravitational field, one obtains redundant information. One could utilize the constraints to select a minimum (that is, nonredundant) set of measurements from which the gravitational potential could be reconstructed. Alternatively, one could exploit the redundancy to reduce errors from noisy measurements. A convenient example is that of the selection of a minimum set of measurements to characterize the gravitational field at n³ points (where n is an integer) in a cube. Without the benefit of such a selection, it would be necessary to make 9n³ measurements because the gravity-gradient tensor has 9 components at each point. The problem of utilizing the redundancy to reduce errors in noisy measurements is an optimization problem: Given a set of noisy values of the components of the gravity-gradient tensor at the measurement points, one seeks a set of corrected values - a set that is optimum in that it minimizes some measure of error (e.g., the sum of squares of the differences between the corrected and noisy measurement values) while taking account of the fact that the constraints must apply to the exact values. The problem as thus posed leads to a vector equation that can be solved to obtain the corrected values.
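
    One concrete instance of the correction step is the least-squares projection onto the constraint set: in source-free space the gravity-gradient tensor must be symmetric (mixed partial derivatives commute) and traceless (Laplace's equation), and the nearest such tensor, in the sum-of-squares sense, to a noisy measurement is obtained by symmetrizing it and removing its trace. A sketch with a fabricated measurement:

```python
# Least-squares correction of a noisy gravity-gradient measurement by
# projection onto symmetric, traceless 3x3 tensors. The measurement is fabricated.
import numpy as np

T_noisy = np.array([[ 1.02,  0.48, -0.19],
                    [ 0.53, -0.55,  0.31],
                    [-0.22,  0.28, -0.41]])

T_sym = 0.5 * (T_noisy + T_noisy.T)                    # enforce symmetry
T_corr = T_sym - (np.trace(T_sym) / 3.0) * np.eye(3)   # enforce zero trace

print("corrected tensor:\n", np.round(T_corr, 3))
print("implied measurement corrections:\n", np.round(T_noisy - T_corr, 3))
```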

  10. Measuring Systematic Error with Curve Fits

    ERIC Educational Resources Information Center

    Rupright, Mark E.

    2011-01-01

    Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…

  11. The impact of response measurement error on the analysis of designed experiments

    SciTech Connect

    Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee

    2015-12-21

    This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.

  12. The impact of response measurement error on the analysis of designed experiments

    DOE PAGES

    Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee

    2015-12-21

    This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.

  13. Accounting for environmental variability, modeling errors, and parameter estimation uncertainties in structural identification

    NASA Astrophysics Data System (ADS)

    Behmanesh, Iman; Moaveni, Babak

    2016-07-01

    This paper presents a Hierarchical Bayesian model updating framework to account for the effects of ambient temperature and excitation amplitude. The proposed approach is applied for model calibration, response prediction and damage identification of a footbridge under changing environmental/ambient conditions. The concrete Young's modulus of the footbridge deck is considered as the updating structural parameter, with its mean and variance modeled as functions of temperature and excitation amplitude. The identified modal parameters over 27 months of continuous monitoring of the footbridge are used to calibrate the updating parameters. One of the objectives of this study is to show that by increasing the levels of information in the updating process, the posterior variation of the updating structural parameter (concrete Young's modulus) is reduced. To this end, the calibration is performed at three information levels using (1) the identified modal parameters, (2) modal parameters and ambient temperatures, and (3) modal parameters, ambient temperatures, and excitation amplitudes. The calibrated model is then validated by comparing the model-predicted natural frequencies with those identified from measured data after deliberate change to the structural mass. It is shown that accounting for modeling error uncertainties is crucial for reliable response prediction, and accounting for only the estimated variability of the updating structural parameter is not sufficient for accurate response predictions. Finally, the calibrated model is used for damage identification of the footbridge.

  14. Measurement Validity and Accountability for Student Learning

    ERIC Educational Resources Information Center

    Borden, Victor M. H.; Young, John W.

    2008-01-01

    In this chapter, the authors focus on issues of validity in measuring student learning as a prospective indicator of institutional effectiveness. Other chapters in this volume include reference to specific approaches to measuring student learning for accountability purposes, such as through standardized tests, authentic samples of student work,…

  15. Phantom Effects in School Composition Research: Consequences of Failure to Control Biases Due to Measurement Error in Traditional Multilevel Models

    ERIC Educational Resources Information Center

    Televantou, Ioulia; Marsh, Herbert W.; Kyriakides, Leonidas; Nagengast, Benjamin; Fletcher, John; Malmberg, Lars-Erik

    2015-01-01

    The main objective of this study was to quantify the impact of failing to account for measurement error on school compositional effects. Multilevel structural equation models were incorporated to control for measurement error and/or sampling error. Study 1, a large sample of English primary students in Years 1 and 4, revealed a significantly…

  16. MEASURING LOCAL GRADIENT AND SKEW QUADRUPOLE ERRORS IN RHIC IRS.

    SciTech Connect

    CARDONA, J.; PEGGS, S.; PILAT, R.; PTITSYN, V.

    2004-07-05

    The measurement of local linear errors at RHIC interaction regions using an "action and phase" analysis of difference orbits has already been presented. This paper evaluates the accuracy of this technique using difference orbits that were taken when known gradient errors and skew quadrupole errors were intentionally introduced. It also presents action and phase analysis of simulated orbits when controlled errors are intentionally placed in a RHIC simulation model.

  17. Design methodology accounting for fabrication errors in manufactured modified Fresnel lenses for controlled LED illumination.

    PubMed

    Shim, Jongmyeong; Kim, Joongeok; Lee, Jinhyung; Park, Changsu; Cho, Eikhyun; Kang, Shinill

    2015-07-27

    The increasing demand for lightweight, miniaturized electronic devices has prompted the development of small, high-performance optical components for light-emitting diode (LED) illumination. As such, the Fresnel lens is widely used in applications due to its compact configuration. However, the vertical groove angle between the optical axis and the groove inner facets in a conventional Fresnel lens creates an inherent Fresnel loss, which degrades optical performance. Modified Fresnel lenses (MFLs) have been proposed in which the groove angles along the optical paths are carefully controlled; however, in practice, the optical performance of MFLs is inferior to the theoretical performance due to fabrication errors, as conventional design methods do not account for fabrication errors as part of the design process. In this study, the Fresnel loss and the loss area due to microscopic fabrication errors in the MFL were theoretically derived to determine optical performance. Based on this analysis, a design method for the MFL accounting for the fabrication errors was proposed. MFLs were fabricated using an ultraviolet imprinting process and an injection molding process, two representative processes with differing fabrication errors. The MFL fabrication error associated with each process was examined analytically and experimentally to investigate our methodology. PMID:26367631

  18. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
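
    The distinction is easy to see in a regression framing: an additive model fits Y = a + bX + e in the original units, while a multiplicative model fits the same relation in log space, Y = A·X^B·exp(e). The sketch below (Python; synthetic data with an invented error structure, not the satellite products studied in the paper) shows the heteroscedasticity that the additive model suffers when the data are in fact multiplicative.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic daily precipitation: truth X, estimate Y with a
    # multiplicative error structure (illustration only).
    X = rng.gamma(0.5, 10.0, 5000) + 0.1
    Y = X * np.exp(rng.normal(-0.1, 0.5, 5000))

    # Additive model  Y = a + b*X + e  -> systematic part (a, b), random e
    b, a = np.polyfit(X, Y, 1)
    e_add = Y - (a + b * X)

    # Multiplicative model  Y = A * X^B * exp(e)  -> linear in log space
    B, logA = np.polyfit(np.log(X), np.log(Y), 1)
    e_mul = np.log(Y) - (logA + B * np.log(X))

    # Additive residuals spread out as X grows (heteroscedastic); the
    # multiplicative residuals stay close to constant variance.
    for lo, hi in [(0, 5), (5, 20), (20, 200)]:
        m = (X >= lo) & (X < hi)
        print(f"X in [{lo},{hi}): sd(e_add)={e_add[m].std():.2f} "
              f"sd(e_mul)={e_mul[m].std():.2f}")
    ```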

  19. Chromosomal locus tracking with proper accounting of static and dynamic errors.

    PubMed

    Backlund, Mikael P; Joyner, Ryan; Moerner, W E

    2015-06-01

    The mean-squared displacement (MSD) and velocity autocorrelation (VAC) of tracked single particles or molecules are ubiquitous metrics for extracting parameters that describe the object's motion, but they are both corrupted by experimental errors that hinder the quantitative extraction of underlying parameters. For the simple case of pure Brownian motion, the effects of localization error due to photon statistics ("static error") and motion blur due to finite exposure time ("dynamic error") on the MSD and VAC are already routinely treated. However, particles moving through complex environments such as cells, nuclei, or polymers often exhibit anomalous diffusion, for which the effects of these errors are less often sufficiently treated. We present data from tracked chromosomal loci in yeast that demonstrate the necessity of properly accounting for both static and dynamic error in the context of an anomalous diffusion that is consistent with a fractional Brownian motion (FBM). We compare these data to analytical forms of the expected values of the MSD and VAC for a general FBM in the presence of these errors.
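
    The Brownian special case mentioned above has a compact closed form that shows how both error types enter the observed MSD. The sketch below (Python; all parameter values invented) uses the standard result for 1D Brownian motion with localization noise sigma and full-exposure motion blur, MSD_obs(tau) = 2D(tau - t_E/3) + 2 sigma^2, rather than the paper's more general fractional-Brownian-motion expressions.

    ```python
    import numpy as np

    def msd_observed_brownian(tau, D, sigma_loc, t_exp):
        """Expected observed 1D MSD for pure Brownian motion with static error
        (localization noise sigma_loc) and dynamic error (motion blur over an
        exposure t_exp); valid for lag times tau >= t_exp. The paper derives
        the analogous, more involved forms for fractional Brownian motion."""
        return 2.0 * D * (tau - t_exp / 3.0) + 2.0 * sigma_loc**2

    tau = np.arange(1, 11) * 0.05                   # lag times (s)
    msd = msd_observed_brownian(tau, D=0.01, sigma_loc=0.02, t_exp=0.05)
    slope, intercept = np.polyfit(tau, msd, 1)
    # the slope still returns D, but the intercept mixes 2*sigma_loc**2 with
    # the motion-blur term; ignoring either error biases short-lag analyses
    print(slope / 2.0, intercept)
    ```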

  20. Chromosomal locus tracking with proper accounting of static and dynamic errors

    NASA Astrophysics Data System (ADS)

    Backlund, Mikael P.; Joyner, Ryan; Moerner, W. E.

    2015-06-01

    The mean-squared displacement (MSD) and velocity autocorrelation (VAC) of tracked single particles or molecules are ubiquitous metrics for extracting parameters that describe the object's motion, but they are both corrupted by experimental errors that hinder the quantitative extraction of underlying parameters. For the simple case of pure Brownian motion, the effects of localization error due to photon statistics ("static error") and motion blur due to finite exposure time ("dynamic error") on the MSD and VAC are already routinely treated. However, particles moving through complex environments such as cells, nuclei, or polymers often exhibit anomalous diffusion, for which the effects of these errors are less often sufficiently treated. We present data from tracked chromosomal loci in yeast that demonstrate the necessity of properly accounting for both static and dynamic error in the context of an anomalous diffusion that is consistent with a fractional Brownian motion (FBM). We compare these data to analytical forms of the expected values of the MSD and VAC for a general FBM in the presence of these errors.

  1. Design of roundness measurement model with multi-systematic error for cylindrical components with large radius.

    PubMed

    Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Tang, Yangchao

    2016-02-01

    This paper designs a roundness measurement model with multiple systematic errors, taking eccentricity, probe offset, probe tip radius, and tilt error into account for roundness measurement of cylindrical components. The effects of the systematic errors and of the component radius are analysed in the roundness measurement. The proposed method is built on an instrument with a high-precision rotating spindle. The effectiveness of the proposed method is verified by experiment with a standard cylindrical component measured on a roundness measuring machine. Compared to the traditional limacon measurement model, the accuracy of roundness measurement can be increased by about 2.2 μm using the proposed roundness measurement model for an object with a large radius of around 37 mm. The proposed method can improve the accuracy of roundness measurement and can be used for error separation, calibration, and comparison, especially for cylindrical components with a large radius. PMID:26931894

  2. On modeling animal movements using Brownian motion with measurement error.

    PubMed

    Pozdnyakov, Vladimir; Meyer, Thomas; Wang, Yu-Bo; Yan, Jun

    2014-02-01

    Modeling animal movements with Brownian motion (or more generally by a Gaussian process) has a long tradition in ecological studies. The recent Brownian bridge movement model (BBMM), which incorporates measurement errors, has been quickly adopted by ecologists because of its simplicity and tractability. We discuss some nontrivial properties of the discrete-time stochastic process that results from observing a Brownian motion with added normal noise at discrete times. In particular, we demonstrate that the observed sequence of random variables is not Markov. Consequently the expected occupation time between two successively observed locations does not depend on just those two observations; the whole path must be taken into account. Nonetheless, the exact likelihood function of the observed time series remains tractable; it requires only sparse matrix computations. The likelihood-based estimation procedure is described in detail and compared to the BBMM estimation.

  3. Thinking Scientifically: Understanding Measurement and Errors

    ERIC Educational Resources Information Center

    Alagumalai, Sivakumar

    2015-01-01

    Thinking scientifically consists of systematic observation, experiment, measurement, and the testing and modification of research questions. In effect, science is about measurement and the understanding of causation. Measurement is an integral part of science and engineering, and has pertinent implications for the human sciences. No measurement is…

  4. Statistical approaches to account for false-positive errors in environmental DNA samples.

    PubMed

    Lahoz-Monfort, José J; Guillera-Arroita, Gurutzeta; Tingley, Reid

    2016-05-01

    Environmental DNA (eDNA) sampling is prone to both false-positive and false-negative errors. We review statistical methods to account for such errors in the analysis of eDNA data and use simulations to compare the performance of different modelling approaches. Our simulations illustrate that even low false-positive rates can produce biased estimates of occupancy and detectability. We further show that removing or classifying single PCR detections in an ad hoc manner under the suspicion that such records represent false positives, as sometimes advocated in the eDNA literature, also results in biased estimation of occupancy, detectability and false-positive rates. We advocate alternative approaches to account for false-positive errors that rely on prior information, or the collection of ancillary detection data at a subset of sites using a sampling method that is not prone to false-positive errors. We illustrate the advantages of these approaches over ad hoc classifications of detections and provide practical advice and code for fitting these models in maximum likelihood and Bayesian frameworks. Given the severe bias induced by false-negative and false-positive errors, the methods presented here should be more routinely adopted in eDNA studies.
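
    As a concrete illustration of the modelling approach reviewed here, the sketch below fits, by maximum likelihood, a minimal site-occupancy model with false positives: each site is surveyed by J PCR replicates, and a detection arises at occupied sites with probability p11 or at any site with false-positive probability p10. This is a generic textbook-style formulation, not the authors' code; identifiability in practice requires p11 > p10 and, as the abstract notes, prior information or ancillary error-free data.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import binom

    rng = np.random.default_rng(3)
    N, J = 200, 6                               # sites, PCR replicates per site
    psi_true, p11, p10 = 0.4, 0.7, 0.05         # occupancy, detection, false positive
    z = rng.random(N) < psi_true                # latent occupancy states
    y = rng.binomial(J, np.where(z, p11, p10))  # detections per site

    def nll(theta):
        # logit-scale parameters -> probabilities (clipped for numerical safety)
        psi, d, f = np.clip(1.0 / (1.0 + np.exp(-theta)), 1e-6, 1 - 1e-6)
        like = psi * binom.pmf(y, J, d) + (1.0 - psi) * binom.pmf(y, J, f)
        return -np.sum(np.log(like))

    # start with detection > false-positive rate to respect identifiability
    fit = minimize(nll, x0=np.array([0.0, 1.0, -2.0]), method="Nelder-Mead")
    print(1.0 / (1.0 + np.exp(-fit.x)))         # estimates of (psi, p11, p10)
    ```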

  5. MEASUREMENT: ACCOUNTING FOR RELIABILITY IN PERFORMANCE ESTIMATES.

    PubMed

    Waterman, Brian; Sutter, Robert; Burroughs, Thomas; Dunagan, W Claiborne

    2014-01-01

    When evaluating physician performance measures, physician leaders are faced with the quandary of determining whether departures from expected physician performance measurements represent a true signal or random error. This uncertainty impedes the physician leader's ability and confidence to take appropriate performance improvement actions based on physician performance measurements. Incorporating reliability adjustment into physician performance measurement is a valuable way of reducing the impact of random error in the measurements, such as those caused by small sample sizes. Consequently, the physician executive has more confidence that the results represent true performance and is positioned to make better physician performance improvement decisions. Applying reliability adjustment to physician-level performance data is relatively new. As others have noted previously, it's important to keep in mind that reliability adjustment adds significant complexity to the production, interpretation and utilization of results. Furthermore, the methods explored in this case study only scratch the surface of the range of available Bayesian methods that can be used for reliability adjustment; further study is needed to test and compare these methods in practice and to examine important extensions for handling specialty-specific concerns (e.g., average case volumes, which have been shown to be important in cardiac surgery outcomes). Moreover, it's important to note that the provider group average as a basis for shrinkage is one of several possible choices that could be employed in practice and deserves further exploration in future research. With these caveats, our results demonstrate that incorporating reliability adjustment into physician performance measurements is feasible and can notably reduce the incidence of "real" signals relative to what one would expect to see using more traditional approaches. A physician leader who is interested in catalyzing performance improvement…
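
    The core of reliability adjustment is an empirical-Bayes style shrinkage of each observed rate toward a group mean, weighted by a reliability that grows with case volume. A minimal sketch follows (Python; all numbers invented, and the provider-group mean as shrinkage target is just one possible choice, as the abstract itself notes).

    ```python
    import numpy as np

    def reliability_adjusted(obs_rate, n_cases, group_mean, var_between, var_within):
        """Shrink an observed physician rate toward the provider-group mean.
        Reliability approaches 1 for high-volume physicians (observed rate
        kept) and 0 for low-volume ones (pulled toward the group mean)."""
        reliability = var_between / (var_between + var_within / n_cases)
        return group_mean + reliability * (obs_rate - group_mean), reliability

    # a 20-case physician with an observed 30% rate, group mean 15%
    adj, r = reliability_adjusted(0.30, 20, 0.15,
                                  var_between=0.002, var_within=0.15 * 0.85)
    print(f"reliability={r:.2f}, adjusted rate={adj:.3f}")
    ```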

  6. Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint

    SciTech Connect

    Stynes, J. K.; Ihas, B.

    2012-04-01

    The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.

  7. Pressure Change Measurement Leak Testing Errors

    SciTech Connect

    Pryor, Jeff M; Walker, William C

    2014-01-01

    A pressure change test is a common leak testing method used in construction and Non-Destructive Examination (NDE). The test is known as being a fast, simple, and easy to apply evaluation method. While this method may be fairly quick to conduct and require simple instrumentation, the engineering behind this type of test is more complex than is apparent on the surface. This paper intends to discuss some of the more common errors made during the application of a pressure change test and give the test engineer insight into how to correctly compensate for these factors. The principles discussed here apply to ideal gases such as air or other monatomic or diatomic gases; however, these same principles can be applied to polyatomic gases or liquid flow rates with altered formulas specific to those types of tests using the same methodology.

  8. The impact of covariate measurement error on risk prediction.

    PubMed

    Khudyakov, Polyna; Gorfine, Malka; Zucker, David; Spiegelman, Donna

    2015-07-10

    In the development of risk prediction models, predictors are often measured with error. In this paper, we investigate the impact of covariate measurement error on risk prediction. We compare the prediction performance using a costly variable measured without error, along with error-free covariates, to that of a model based on an inexpensive surrogate along with the error-free covariates. We consider continuous error-prone covariates with homoscedastic and heteroscedastic errors, and also a discrete misclassified covariate. Prediction performance is evaluated by the area under the receiver operating characteristic curve (AUC), the Brier score (BS), and the ratio of the observed to the expected number of events (calibration). In an extensive numerical study, we show that (i) the prediction model with the error-prone covariate is very well calibrated, even when it is mis-specified; (ii) using the error-prone covariate instead of the true covariate can reduce the AUC and increase the BS dramatically; (iii) adding an auxiliary variable, which is correlated with the error-prone covariate but conditionally independent of the outcome given all covariates in the true model, can improve the AUC and BS substantially. We conclude that reducing measurement error in covariates will improve the ensuing risk prediction, unless the association between the error-free and error-prone covariates is very high. Finally, we demonstrate how a validation study can be used to assess the effect of mismeasured covariates on risk prediction. These concepts are illustrated in a breast cancer risk prediction model developed in the Nurses' Health Study. PMID:25865315
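
    A small simulation makes the discrimination loss tangible: because the AUC depends only on how well a score ranks cases above controls, replacing an error-free covariate with a noisy surrogate degrades the AUC directly, with no model fitting needed. The sketch below (Python; the risk model and noise level are invented) computes a rank-based AUC and illustrates point (ii) of the abstract.

    ```python
    import numpy as np

    def auc(score, y):
        """Rank-based AUC: probability that a random case outranks a control."""
        ranks = np.empty(score.size)
        ranks[np.argsort(score)] = np.arange(1, score.size + 1)
        n1 = y.sum()
        n0 = y.size - n1
        return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2.0) / (n1 * n0)

    rng = np.random.default_rng(4)
    n = 20000
    X = rng.standard_normal(n)          # costly covariate, measured without error
    W = X + rng.normal(0.0, 1.0, n)     # inexpensive error-prone surrogate
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(X - 1.0)))).astype(int)

    print(auc(X, y), auc(W, y))         # discrimination drops with the surrogate
    ```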

  9. Using neural nets to measure ocular refractive errors: a proposal

    NASA Astrophysics Data System (ADS)

    Netto, Antonio V.; Ferreira de Oliveira, Maria C.

    2002-12-01

    We propose the development of a functional system for diagnosing and measuring ocular refractive errors in the human eye (astigmatism, hypermetropia and myopia) by automatically analyzing images of the human ocular globe acquired with the Hartmann-Shack (HS) technique. HS images are to be input into a system capable of recognizing the presence of a refractive error and outputting a measure of such an error. The system should pre-process an image supplied by the acquisition technique and then use artificial neural networks combined with fuzzy logic to extract the necessary information and output an automated diagnosis of the refractive errors that may be present in the ocular globe under exam.

  10. System Measures Errors Between Time-Code Signals

    NASA Technical Reports Server (NTRS)

    Cree, David; Venkatesh, C. N.

    1993-01-01

    System measures timing errors between signals produced by three asynchronous time-code generators. Errors between 1-second clock pulses resolved to 2 microseconds. Basic principle of computation of timing errors as follows: central processing unit in microcontroller constantly monitors time data received from time-code generators for changes in 1-second time-code intervals. In response to any such change, microprocessor buffers count of 16-bit internal timer.

  11. Unit of Measurement Used and Parent Medication Dosing Errors

    PubMed Central

    Dreyer, Benard P.; Ugboaja, Donna C.; Sanchez, Dayana C.; Paul, Ian M.; Moreira, Hannah A.; Rodriguez, Luis; Mendelsohn, Alan L.

    2014-01-01

    BACKGROUND AND OBJECTIVES: Adopting the milliliter as the preferred unit of measurement has been suggested as a strategy to improve the clarity of medication instructions; teaspoon and tablespoon units may inadvertently endorse nonstandard kitchen spoon use. We examined the association between unit used and parent medication errors and whether nonstandard instruments mediate this relationship. METHODS: Cross-sectional analysis of baseline data from a larger study of provider communication and medication errors. English- or Spanish-speaking parents (n = 287) whose children were prescribed liquid medications in 2 emergency departments were enrolled. Medication error defined as: error in knowledge of prescribed dose, error in observed dose measurement (compared to intended or prescribed dose); >20% deviation threshold for error. Multiple logistic regression performed adjusting for parent age, language, country, race/ethnicity, socioeconomic status, education, health literacy (Short Test of Functional Health Literacy in Adults); child age, chronic disease; site. RESULTS: Medication errors were common: 39.4% of parents made an error in measurement of the intended dose, 41.1% made an error in the prescribed dose. Furthermore, 16.7% used a nonstandard instrument. Compared with parents who used milliliter-only, parents who used teaspoon or tablespoon units had twice the odds of making an error with the intended (42.5% vs 27.6%, P = .02; adjusted odds ratio=2.3; 95% confidence interval, 1.2–4.4) and prescribed (45.1% vs 31.4%, P = .04; adjusted odds ratio=1.9; 95% confidence interval, 1.03–3.5) dose; associations greater for parents with low health literacy and non–English speakers. Nonstandard instrument use partially mediated teaspoon and tablespoon–associated measurement errors. CONCLUSIONS: Findings support a milliliter-only standard to reduce medication errors. PMID:25022742

  12. Chromosomal locus tracking with proper accounting of static and dynamic errors

    PubMed Central

    Backlund, Mikael P.; Joyner, Ryan; Moerner, W. E.

    2015-01-01

    The mean-squared displacement (MSD) and velocity autocorrelation (VAC) of tracked single particles or molecules are ubiquitous metrics for extracting parameters that describe the object’s motion, but they are both corrupted by experimental errors that hinder the quantitative extraction of underlying parameters. For the simple case of pure Brownian motion, the effects of localization error due to photon statistics (“static error”) and motion blur due to finite exposure time (“dynamic error”) on the MSD and VAC are already routinely treated. However, particles moving through complex environments such as cells, nuclei, or polymers often exhibit anomalous diffusion, for which the effects of these errors are less often sufficiently treated. We present data from tracked chromosomal loci in yeast that demonstrate the necessity of properly accounting for both static and dynamic error in the context of an anomalous diffusion that is consistent with a fractional Brownian motion (FBM). We compare these data to analytical forms of the expected values of the MSD and VAC for a general FBM in the presence of these errors. PMID:26172745

  13. Conditional Standard Errors of Measurement for Composite Scores Using IRT

    ERIC Educational Resources Information Center

    Kolen, Michael J.; Wang, Tianyou; Lee, Won-Chan

    2012-01-01

    Composite scores are often formed from test scores on educational achievement test batteries to provide a single index of achievement over two or more content areas or two or more item types on that test. Composite scores are subject to measurement error, and as with scores on individual tests, the amount of error variability typically depends on…

  14. Non-Gaussian Error Distributions of LMC Distance Moduli Measurements

    NASA Astrophysics Data System (ADS)

    Crandall, Sara; Ratra, Bharat

    2015-12-01

    We construct error distributions for a compilation of 232 Large Magellanic Cloud (LMC) distance moduli values from de Grijs et al. that give an LMC distance modulus of (m − M)₀ = 18.49 ± 0.13 mag (median and 1σ symmetrized error). Central estimates found from weighted mean and median statistics are used to construct the error distributions. The weighted mean error distribution is non-Gaussian—flatter and broader than Gaussian—with more (less) probability in the tails (center) than is predicted by a Gaussian distribution; this could be the consequence of unaccounted-for systematic uncertainties. The median statistics error distribution, which does not make use of the individual measurement errors, is also non-Gaussian—more peaked than Gaussian—with less (more) probability in the tails (center) than is predicted by a Gaussian distribution; this could be the consequence of publication bias and/or the non-independence of the measurements. We also construct the error distributions of 247 SMC distance moduli values from de Grijs & Bono. We find a central estimate of (m − M)₀ = 18.94 ± 0.14 mag (median and 1σ symmetrized error), and similar probabilities for the error distributions.
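
    The two central estimates are straightforward to reproduce. The sketch below (Python; synthetic stand-in data, not the actual compilation) computes the weighted mean, the median, and the |Nσ| offsets whose empirical distribution is compared against a Gaussian in studies of this kind.

    ```python
    import numpy as np

    def central_estimates(mu, sigma):
        """Weighted-mean and median central estimates for a compilation of
        measurements, plus |N_sigma| offsets relative to the weighted mean."""
        w = 1.0 / sigma**2
        wmean = np.sum(w * mu) / np.sum(w)
        wmean_err = 1.0 / np.sqrt(np.sum(w))
        n_sigma = np.abs(mu - wmean) / np.sqrt(sigma**2 + wmean_err**2)
        return wmean, np.median(mu), n_sigma

    rng = np.random.default_rng(5)
    mu = rng.normal(18.49, 0.13, 232)   # stand-in distance moduli (mag)
    sigma = np.full(232, 0.10)          # stand-in quoted errors (mag)
    wmean, med, ns = central_estimates(mu, sigma)
    # for a Gaussian error distribution ~68.3% of |N_sigma| values fall below 1
    print(wmean, med, np.mean(ns < 1.0))
    ```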

  15. Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes

    ERIC Educational Resources Information Center

    Zavorsky, Gerald S.

    2010-01-01

    Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…
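
    The quoted factor 2.77 is 1.96·√2: the half-width of a 95% interval for the difference between two measurements that each carry the within-subject standard deviation. A minimal sketch with invented data (Python):

    ```python
    import numpy as np

    def repeatability(measurements):
        """2.77 * within-subject SD, where 2.77 = 1.96 * sqrt(2);
        'measurements' is an (n_subjects, n_repeats) array of repeated
        measurements on the same subjects under identical conditions."""
        s_w = np.sqrt(measurements.var(axis=1, ddof=1).mean())
        return 2.77 * s_w

    rng = np.random.default_rng(6)
    subject_means = rng.normal(50.0, 8.0, size=(30, 1))
    data = subject_means + rng.normal(0.0, 2.0, size=(30, 2))  # within SD = 2
    print(repeatability(data))                                 # approx 2.77 * 2
    ```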

  16. Error tolerance of topological codes with independent bit-flip and measurement errors

    NASA Astrophysics Data System (ADS)

    Andrist, Ruben S.; Katzgraber, Helmut G.; Bombin, H.; Martin-Delgado, M. A.

    2016-07-01

    Topological quantum error correction codes are currently among the most promising candidates for efficiently dealing with the decoherence effects inherently present in quantum devices. Numerically, their theoretical error threshold can be calculated by mapping the underlying quantum problem to a related classical statistical-mechanical spin system with quenched disorder. Here, we present results for the general fault-tolerant regime, where we consider both qubit and measurement errors. However, unlike in previous studies, here we vary the strength of the different error sources independently. Our results highlight peculiar differences between toric and color codes. This study complements previous results published in New J. Phys. 13, 083006 (2011), 10.1088/1367-2630/13/8/083006.

  17. Measuring worst-case errors in a robot workcell

    SciTech Connect

    Simon, R.W.; Brost, R.C.; Kholwadwala, D.K.

    1997-10-01

    Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors.

  18. Areal measurement error with a dot planimeter: Some experimental estimates

    NASA Technical Reports Server (NTRS)

    Yuill, R. S.

    1971-01-01

    A shape analysis is presented which utilizes a computer to simulate a multiplicity of dot grids mathematically. Results indicate that the number of dots placed over an area to be measured provides the entire correlation with accuracy of measurement, the indices of shape being of little significance. Equations and graphs are provided from which the average expected error, and the maximum range of error, for various numbers of dot points can be read.
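
    The dot-planimeter estimate is simply (dots inside the region) × (area per dot), so its sampling error can be explored by randomly offsetting the grid. The Monte Carlo sketch below (Python; a unit circle stands in for an arbitrary region) reproduces the qualitative finding that the number of dots, not the shape, controls accuracy.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    true_area = np.pi                              # unit circle
    for spacing in (0.5, 0.25, 0.1):               # finer grid -> more dots inside
        errors = []
        for _ in range(500):
            ox, oy = rng.random(2) * spacing       # random placement of the grid
            gx, gy = np.meshgrid(np.arange(-2, 2, spacing) + ox,
                                 np.arange(-2, 2, spacing) + oy)
            dots_inside = np.count_nonzero(gx**2 + gy**2 <= 1.0)
            errors.append(dots_inside * spacing**2 - true_area)  # grid estimate
        print(spacing, np.abs(errors).mean())      # mean |error| shrinks with dots
    ```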

  19. Methods to Assess Measurement Error in Questionnaires of Sedentary Behavior

    PubMed Central

    Sampson, Joshua N; Matthews, Charles E; Freedman, Laurence; Carroll, Raymond J.; Kipnis, Victor

    2015-01-01

    Sedentary behavior has already been associated with mortality, cardiovascular disease, and cancer. Questionnaires are an affordable tool for measuring sedentary behavior in large epidemiological studies. Here, we introduce and evaluate two statistical methods for quantifying measurement error in questionnaires. Accurate estimates are needed for assessing questionnaire quality. The two methods would be applied to validation studies that measure a sedentary behavior by both questionnaire and accelerometer on multiple days. The first method fits a reduced model by assuming the accelerometer is without error, while the second method fits a more complete model that allows both measures to have error. Because accelerometers tend to be highly accurate, we show that ignoring the accelerometer's measurement error can result in more accurate estimates of measurement error in some scenarios. In this manuscript, we derive asymptotic approximations for the Mean-Squared Error of the estimated parameters from both methods, evaluate their dependence on study design and behavior characteristics, and offer an R package so investigators can make an informed choice between the two methods. We demonstrate the difference between the two methods in a recent validation study comparing Previous Day Recalls (PDR) to an accelerometer-based ActivPal. PMID:27340315

  20. Error-tradeoff and error-disturbance relations for incompatible quantum measurements.

    PubMed

    Branciard, Cyril

    2013-04-23

    Heisenberg's uncertainty principle is one of the main tenets of quantum theory. Nevertheless, and despite its fundamental importance for our understanding of quantum foundations, there has been some confusion in its interpretation: Although Heisenberg's first argument was that the measurement of one observable on a quantum state necessarily disturbs another incompatible observable, standard uncertainty relations typically bound the indeterminacy of the outcomes when either one or the other observable is measured. In this paper, we quantify precisely Heisenberg's intuition. Even if two incompatible observables cannot be measured together, one can still approximate their joint measurement, at the price of introducing some errors with respect to the ideal measurement of each of them. We present a tight relation characterizing the optimal tradeoff between the error on one observable vs. the error on the other. As a particular case, our approach allows us to characterize the disturbance of an observable induced by the approximate measurement of another one; we also derive a stronger error-disturbance relation for this scenario. PMID:23564344

  1. Errors Associated with the Direct Measurement of Radionuclides in Wounds

    SciTech Connect

    Hickman, D P

    2006-03-02

    Work in radiation areas can occasionally result in accidental wounds containing radioactive materials. When a wound is incurred within a radiological area, the presence of radioactivity in the wound needs to be confirmed to determine if additional remedial action needs to be taken. Commonly used radiation area monitoring equipment is poorly suited for measurement of radioactive material buried within the tissue of the wound. The Lawrence Livermore National Laboratory (LLNL) In Vivo Measurement Facility has constructed a portable wound counter that provides sufficient detection of radioactivity in wounds, as shown in Fig. 1. The LLNL wound measurement system is specifically designed to measure low energy photons that are emitted from uranium and transuranium radionuclides. The portable wound counting system uses a 2.5 cm diameter by 1 mm thick NaI(Tl) detector. The detector is connected to a Canberra NaI InSpector™. The InSpector interfaces with an IBM ThinkPad laptop computer, which operates under Genie 2000 software. The wound counting system is maintained and used at the LLNL In Vivo Measurement Facility. The hardware is designed to be portable and is occasionally deployed to respond to the LLNL Health Services facility or local hospitals for examination of personnel that may have radioactive materials within a wound. The typical detection level using the LLNL portable wound counter in a low-background area is 0.4 nCi to 0.6 nCi, assuming a near-zero-mass source. This paper documents the systematic errors associated with in vivo measurement of radioactive materials buried within wounds using the LLNL portable wound measurement system. These errors are divided into two basic categories, calibration errors and in vivo wound measurement errors. Within these categories, there are errors associated with particle self-absorption of photons, overlying tissue thickness, source distribution within the wound, and count errors. These errors have been examined and…

  2. The $17.1 billion problem: the annual cost of measurable medical errors.

    PubMed

    Van Den Bos, Jill; Rustagi, Karan; Gray, Travis; Halford, Michael; Ziemkiewicz, Eva; Shreve, Jonathan

    2011-04-01

    At a minimum, high-quality health care is care that does not harm patients, particularly through medical errors. The first step in reducing the large number of harmful medical errors that occur today is to analyze them. We used an actuarial approach to measure the frequency and costs of measurable US medical errors, identified through medical claims data. This method focuses on the analysis of comparative rates of illness, using mathematical models to assess the risk of occurrence and to project costs to the total population. We estimate that the annual cost of measurable medical errors that harm patients was $17.1 billion in 2008. Pressure ulcers were the most common measurable medical error, followed by postoperative infections and by postlaminectomy syndrome, a condition characterized by persistent pain following back surgery. A total of ten types of errors account for more than two-thirds of the total cost of errors, and these errors should be the first targets of prevention efforts.

  3. Filter induced errors in laser anemometer measurements using counter processors

    NASA Technical Reports Server (NTRS)

    Oberle, L. G.; Seasholtz, R. G.

    1985-01-01

    Simulations of laser Doppler anemometer (LDA) systems have focused primarily on noise studies or biasing errors. Another possible source of error is the choice of filter types and filter cutoff frequencies. Before it is applied to the counter portion of the signal processor, a Doppler burst is filtered to remove the pedestal and to reduce noise in the frequency bands outside the region in which the signal occurs. Filtering, however, introduces errors into the measurement of the frequency of the input signal, which leads to inaccurate results. Errors caused by signal filtering in an LDA counter-processor data acquisition system are evaluated, and filters are chosen for a specific application to reduce these errors.

  4. Detection and Classification of Measurement Errors in Bioimpedance Spectroscopy

    PubMed Central

    Gil-Pita, Roberto

    2016-01-01

    Bioimpedance spectroscopy (BIS) measurement errors may be caused by parasitic stray capacitance, impedance mismatch, cross-talk, or a very likely combination of these. Accurate detection and identification is of extreme importance for further analysis because, in some cases and for some applications, certain measurement artifacts can be corrected, minimized or even avoided. In this paper we present a robust method to detect the presence of measurement artifacts and identify what kind of measurement error is present in BIS measurements. The method is based on supervised machine learning and uses a novel set of generalist features for measurement characterization in different immittance planes. Experimental validation has been carried out using a database of complex-spectra BIS measurements obtained from different BIS applications and containing six different types of errors, as well as error-free measurements. The method obtained a low classification error (0.33%) and has shown good generalization. Since both the features and the classification schema are relatively simple, the implementation of this pre-processing task in the current hardware of bioimpedance spectrometers is possible. PMID:27362862

  5. Eddy-covariance flux errors due to biases in gas concentration measurements: origins, quantification and correction

    NASA Astrophysics Data System (ADS)

    Fratini, G.; McDermitt, D. K.; Papale, D.

    2013-08-01

    Errors in gas concentration measurements by infrared gas analysers can occur during eddy-covariance campaigns, associated with actual or apparent instrumental drifts or with biases due to thermal expansion, dirt contamination, aging of components or errors in field operations. If occurring on long time scales (hours to days), these errors are normally ignored during flux computation, under the assumption that errors in mean gas concentrations do not affect the estimation of turbulent fluctuations and, hence, of covariances. By analysing instrument theory of operation, and using numerical simulations and field data, we show that this is not the case for instruments with curvilinear calibrations; we further show that, if not appropriately accounted for, concentration biases can lead to roughly proportional systematic flux errors, where the fractional errors in fluxes are about 30-40% of the fractional errors in concentrations. We quantify these errors and characterize their dependency on the main determinants. We then propose a correction procedure that largely - potentially completely - eliminates these errors. The correction, to be applied during flux computation, is based on knowledge of instrument calibration curves and on field or laboratory calibration data. Finally, we demonstrate the occurrence of such errors and validate the correction procedure by means of a field experiment, and accordingly provide recommendations for in situ operations. The correction described in this paper will soon be available in the EddyPro software (www.licor.com/eddypro).

  6. Measurement uncertainty evaluation of conicity error inspected on CMM

    NASA Astrophysics Data System (ADS)

    Wang, Dongxia; Song, Aiguo; Wen, Xiulan; Xu, Youxiong; Qiao, Guifang

    2016-01-01

    The cone is widely used in mechanical design for rotation, centering and fixing. Whether the conicity error can be measured and evaluated accurately will directly influence its assembly accuracy and working performance. According to the new-generation geometrical product specification (GPS), the error and its measurement uncertainty should be evaluated together. The mathematical model of the minimum zone conicity error is established and an improved immune evolutionary algorithm (IIEA) is proposed to search for the conicity error. In the IIEA, initial antibodies are first generated by using quasi-random sequences and two kinds of affinities are calculated. Then, each antibody clone is generated and they are self-adaptively mutated so as to maintain diversity. Similar antibodies are suppressed and new random antibodies are generated. Because the mathematical model of conicity error is strongly nonlinear and the input quantities are not independent, it is difficult to use the Guide to the expression of uncertainty in measurement (GUM) method to evaluate measurement uncertainty. An adaptive Monte Carlo method (AMCM) is proposed to estimate measurement uncertainty, in which the number of Monte Carlo trials is selected adaptively and the quality of the numerical results is directly controlled. The cone part was machined on a CK6140 lathe and measured on a Miracle NC 454 Coordinate Measuring Machine (CMM). The experimental results confirm that the proposed method not only can search for the approximate solution of the minimum zone conicity error (MZCE) rapidly and precisely, but also can evaluate measurement uncertainty and give control variables with an expected numerical tolerance. The conicity errors computed by the proposed method are 20%-40% less than those computed by the NC454 CMM software and the evaluation accuracy improves significantly.
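
    The adaptive Monte Carlo idea (in the spirit of GUM Supplement 1) is to keep adding batches of trials until the standard uncertainty of the output stabilises within a numerical tolerance. The sketch below is generic (Python; a toy cone half-angle model with invented inputs stands in for the paper's minimum-zone conicity model, which is omitted).

    ```python
    import numpy as np

    def adaptive_mc(model, sample_inputs, tol, batch=10_000, max_batches=100):
        """Add batches of Monte Carlo trials until the standard uncertainty
        of the output changes by less than the numerical tolerance 'tol'."""
        out = np.array([])
        prev_u = None
        for _ in range(max_batches):
            out = np.concatenate([out, model(sample_inputs(batch))])
            u = out.std(ddof=1)
            if prev_u is not None and abs(u - prev_u) < tol:
                break
            prev_u = u
        return out.mean(), u, out.size

    # toy measurement model: cone half-angle from two radii and a height
    rng = np.random.default_rng(10)
    def sample_inputs(n):
        return (rng.normal(20.0, 0.002, n),          # r1 (mm)
                rng.normal(15.0, 0.002, n),          # r2 (mm)
                rng.normal(30.0, 0.005, n))          # h  (mm)
    def model(inputs):
        r1, r2, h = inputs
        return np.degrees(np.arctan((r1 - r2) / h))  # half-angle (degrees)

    print(adaptive_mc(model, sample_inputs, tol=1e-5))
    ```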

  7. Laser tracker error determination using a network measurement

    NASA Astrophysics Data System (ADS)

    Hughes, Ben; Forbes, Alistair; Lewis, Andrew; Sun, Wenjuan; Veal, Dan; Nasr, Karim

    2011-04-01

    We report on a fast, easily implemented method to determine all the geometrical alignment errors of a laser tracker, to high precision. The technique requires no specialist equipment and can be performed in less than an hour. The technique is based on the determination of parameters of a geometric model of the laser tracker, using measurements of a set of fixed target locations, from multiple locations of the tracker. After fitting of the model parameters to the observed data, the model can be used to perform error correction of the raw laser tracker data or to derive correction parameters in the format of the tracker manufacturer's internal error map. In addition to determination of the model parameters, the method also determines the uncertainties and correlations associated with the parameters. We have tested the technique on a commercial laser tracker in the following way. We disabled the tracker's internal error compensation, and used a five-position, fifteen-target network to estimate all the geometric errors of the instrument. Using the error map generated from this network test, the tracker was able to pass a full performance validation test, conducted according to a recognized specification standard (ASME B89.4.19-2006). We conclude that the error correction determined from the network test is as effective as the manufacturer's own error correction methodologies.

  8. Shallow Water Geodesy: Measurement Errors During Seabed Determination

    NASA Astrophysics Data System (ADS)

    Makar, A.

    Precise determination of the seabed is important during mining of mineral resources and dredging of the seabed. Hydrographic measurement is a dynamic process of determining position and depth. Many errors arise during measurement, connected with the motion of the ship, the vertical distribution of the sound speed, and instrumentation errors of the echosounder. Using a high-precision positioning system alone does not assure high-precision determination of the seabed. The causes of seabed-determination errors, and methods for eliminating them, are described and characterized.

  9. Beam induced vacuum measurement error in BEPC II

    NASA Astrophysics Data System (ADS)

    Huang, Tao; Xiao, Qiong; Peng, XiaoHua; Wang, HaiJing

    2011-12-01

    When the beam in the BEPCII storage ring aborts suddenly, the measured pressure of cold cathode gauges and ion pumps will drop suddenly and then decrease to the base pressure gradually. This shows that there is a beam-induced positive error in the pressure measurement during beam operation. The error is the difference between the measured and real pressures. Right after the beam aborts, the error disappears immediately and the measured pressure then equals the real pressure. For one gauge, we can fit a nonlinear pressure-time curve to its measured pressure data starting 20 seconds after a sudden beam abort. From this negative-exponential pumping-down curve, the real pressure at the time the beam starts aborting is extrapolated. With data from several sudden beam aborts, we obtained the errors of that gauge at different beam currents and found that the error is directly proportional to the beam current, as expected. A linear fit gives the proportionality coefficient of the equation we derived to evaluate the real pressure at all times while the beam, with varying currents, is on.
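
    The extrapolation step is a standard nonlinear fit. The sketch below (Python/SciPy; synthetic readings in arbitrary units of 1e-8, all numbers invented) fits the pumping-down curve to post-abort data and extrapolates back to the abort time; the beam-induced error is then the pre-abort reading minus that extrapolated real pressure.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def pump_down(t, p_base, amplitude, tau):
        """Negative-exponential pumping-down of the real pressure after the
        beam-induced gas load stops at a sudden abort (t = 0)."""
        return p_base + amplitude * np.exp(-t / tau)

    rng = np.random.default_rng(8)
    t = np.arange(20.0, 200.0, 5.0)                  # readings from t = 20 s on
    p = pump_down(t, 2.0, 3.0, 60.0) * (1 + 0.01 * rng.standard_normal(t.size))

    popt, _ = curve_fit(pump_down, t, p, p0=(1.0, 1.0, 50.0))
    p_real_at_abort = pump_down(0.0, *popt)          # extrapolate back to t = 0
    p_meas_with_beam = 9.0                           # reading just before abort
    print(f"beam-induced error ~ {p_meas_with_beam - p_real_at_abort:.2f}")
    ```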

  10. Phase measurement error in summation of electron holography series.

    PubMed

    McLeod, Robert A; Bergen, Michael; Malac, Marek

    2014-06-01

    Off-axis electron holography is a method for the transmission electron microscope (TEM) that measures the electric and magnetic properties of a specimen. The electrostatic and magnetic potentials modulate the electron wavefront phase. The error in measurement of the phase therefore determines the smallest observable changes in electric and magnetic properties. Here we explore the summation of a hologram series to reduce the phase error and thereby improve the sensitivity of electron holography. Summation of hologram series requires independent registration and correction of image drift and phase wavefront drift, the consequences of which are discussed. Optimization of the electro-optical configuration of the TEM for the double biprism configuration is examined. An analytical model of image and phase drift, composed of a combination of linear drift and Brownian random-walk, is derived and experimentally verified. The accuracy of image registration via cross-correlation and phase registration is characterized by simulated hologram series. The model of series summation errors allows the optimization of phase error as a function of exposure time and fringe carrier frequency for a target spatial resolution. An experimental example of hologram series summation is provided on WS2 fullerenes. A metric is provided to measure the object phase error from experimental results and compared to analytical predictions. The ultimate experimental object root-mean-square phase error is 0.006 rad (2π/1050) at a spatial resolution less than 0.615 nm and a total exposure time of 900 s. The ultimate phase error in vacuum adjacent to the specimen is 0.0037 rad (2π/1700). The analytical prediction of phase error differs with the experimental metrics by +7% inside the object and -5% in the vacuum, indicating that the model can provide reliable quantitative predictions.

  11. Error Evaluation of Methyl Bromide Aerodynamic Flux Measurements

    USGS Publications Warehouse

    Majewski, M.S.

    1997-01-01

    Methyl bromide volatilization fluxes were calculated for a tarped and a nontarped field using 2 and 4 hour sampling periods. These field measurements were averaged in 8, 12, and 24 hour increments to simulate longer sampling periods. The daily flux profiles were progressively smoothed and the cumulative volatility losses increased by 20 to 30% with each longer sampling period. Error associated with the original flux measurements was determined from linear regressions of measured wind speed and air concentration as a function of height, and averaged approximately 50%. The high errors resulted from long application times, which produced a nonuniform source strength, and from variable tarp permeability, which is influenced by temperature, moisture, and thickness. The increases in cumulative volatilization losses that resulted from longer sampling periods were within the experimental error of the flux determination method.

  12. Electrochemically modulated separations for material accountability measurements

    SciTech Connect

    Hazelton, Sandra G.; Liezers, Martin; Naes, Benjamin E.; Arrigo, Leah M.; Duckworth, Douglas C.

    2012-07-08

    A method for the accurate and timely analysis of accountable materials is critical for safeguards measurements in nuclear fuel reprocessing plants. Non-destructive analysis (NDA) methods, such as gamma spectroscopy, are desirable for their ability to produce near real-time data. However, the high gamma background of the actinides and fission products in spent nuclear fuel limits the use of NDA for real-time online measurements. A simple approach for at-line separation of materials would facilitate the use of at-line detection methods. A promising at-line separation method for plutonium and uranium is electrochemically modulated separations (EMS). Using an electrochemical cell with an anodized glassy carbon electrode, Pu and U oxidation states can be altered by applying an appropriate voltage. Because the affinity of the actinides for the electrode depends on their oxidation states, selective deposition can be turned “on” and “off” with changes in the applied target electrode voltage. A high-surface-area cell was designed in-house for the separation of Pu from spent nuclear fuel. The cell is shown to capture over 1 µg of material, increasing the likelihood for gamma spectroscopic detection of Pu extracted from dissolver solutions. The large surface area of the electrode also reduces the impact of competitive interferences from some fission products. Flow rates of up to 1 mL min⁻¹ with >50% analyte deposition efficiency are possible, allowing for rapid separations to be effected. Results from the increased-surface-area EMS cell are presented, including dilute dissolver solution simulant data.

  13. Correcting a fundamental error in greenhouse gas accounting related to bioenergy

    PubMed Central

    Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc; Cocco, Pierluigi; Desaubies, Yves; Henze, Mogens; Hertel, Ole; Johnson, Richard K.; Kastrup, Ulrike; Laconte, Pierre; Lange, Eckart; Novak, Peter; Paavola, Jouni; Reenberg, Anette; van den Hove, Sybille; Vermeire, Theo; Wadhams, Peter; Searchinger, Timothy

    2012-01-01

    Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of ‘additional biomass’ – biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy – can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy. PMID:23576835

  14. Correcting a fundamental error in greenhouse gas accounting related to bioenergy.

    PubMed

    Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc; Cocco, Pierluigi; Desaubies, Yves; Henze, Mogens; Hertel, Ole; Johnson, Richard K; Kastrup, Ulrike; Laconte, Pierre; Lange, Eckart; Novak, Peter; Paavola, Jouni; Reenberg, Anette; van den Hove, Sybille; Vermeire, Theo; Wadhams, Peter; Searchinger, Timothy

    2012-06-01

    Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of 'additional biomass' - biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy - can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy. PMID:23576835

  17. A comparison between traditional and measurement-error growth models for weakfish Cynoscion regalis

    PubMed Central

    Jiao, Yan

    2016-01-01

    Inferring growth for aquatic species depends upon accurate descriptions of age-length relationships, which may be degraded by measurement error in observed ages. Ageing error arises from biased and/or imprecise age determinations as a consequence of misinterpretation by readers or inability of ageing structures to accurately reflect true age. A Bayesian errors-in-variables (EIV) approach (i.e., measurement-error modeling) can account for ageing uncertainty during nonlinear growth curve estimation by allowing observed ages to be parametrically modeled as random deviates. Information on the latent age composition then comes from the specified prior distribution, which represents the true age structure of the sampled fish population. In this study, weakfish growth was modeled by means of traditional and measurement-error von Bertalanffy growth curves using otolith- or scale-estimated ages. Age determinations were assumed to be log-normally distributed, thereby incorporating multiplicative error with respect to ageing uncertainty. The prior distribution for true age was assumed to be uniformly distributed within ±4 yr of the observed age for each individual. Measurement-error growth models described weakfish that reached larger sizes but at slower rates, with median length-at-age being overestimated by traditional growth curves for the observed age range. In addition, measurement-error models produced slightly narrower credible intervals for parameters of the von Bertalanffy growth function, which may be an artifact of the specified prior distributions. Subjectivity is always apparent in the ageing of fishes, and it is recommended that measurement-error growth models be used in conjunction with otolith-estimated ages to accurately capture the age-length relationship that is subsequently used in fisheries stock assessment and management. PMID:27688963

  18. A comparison between traditional and measurement-error growth models for weakfish Cynoscion regalis.

    PubMed

    Hatch, Joshua; Jiao, Yan

    2016-01-01

    Inferring growth for aquatic species depends upon accurate descriptions of age-length relationships, which may be degraded by measurement error in observed ages. Ageing error arises from biased and/or imprecise age determinations as a consequence of misinterpretation by readers or inability of ageing structures to accurately reflect true age. A Bayesian errors-in-variables (EIV) approach (i.e., measurement-error modeling) can account for ageing uncertainty during nonlinear growth curve estimation by allowing observed ages to be parametrically modeled as random deviates. Information on the latent age composition then comes from the specified prior distribution, which represents the true age structure of the sampled fish population. In this study, weakfish growth was modeled by means of traditional and measurement-error von Bertalanffy growth curves using otolith- or scale-estimated ages. Age determinations were assumed to be log-normally distributed, thereby incorporating multiplicative error with respect to ageing uncertainty. The prior distribution for true age was assumed to be uniformly distributed within ±4 yr of the observed age for each individual. Measurement-error growth models described weakfish that reached larger sizes but at slower rates, with median length-at-age being overestimated by traditional growth curves for the observed age range. In addition, measurement-error models produced slightly narrower credible intervals for parameters of the von Bertalanffy growth function, which may be an artifact of the specified prior distributions. Subjectivity is always apparent in the ageing of fishes, and it is recommended that measurement-error growth models be used in conjunction with otolith-estimated ages to accurately capture the age-length relationship that is subsequently used in fisheries stock assessment and management. PMID:27688963
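
    A minimal Python sketch of the bias mechanism described above (all parameter values hypothetical; it illustrates only why a traditional fit is distorted by multiplicative ageing error, not the Bayesian EIV correction itself):

      # Sketch: log-normal ageing error distorts a traditional von Bertalanffy fit.
      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(0)

      def vbgf(t, Linf, k, t0):
          # von Bertalanffy growth function: L(t) = Linf * (1 - exp(-k * (t - t0)))
          return Linf * (1.0 - np.exp(-k * (t - t0)))

      # Hypothetical "true" parameters and ages
      Linf, k, t0 = 90.0, 0.25, -0.5
      true_age = rng.uniform(1, 12, size=500)
      length = vbgf(true_age, Linf, k, t0) + rng.normal(0, 3, size=500)

      # Multiplicative (log-normal) ageing error, as assumed in the abstract
      obs_age = true_age * rng.lognormal(0.0, 0.15, size=500)

      # A traditional fit treats the observed ages as exact
      p_naive, _ = curve_fit(vbgf, obs_age, length, p0=[80, 0.3, 0])
      p_true, _ = curve_fit(vbgf, true_age, length, p0=[80, 0.3, 0])
      print("fit on observed ages:", np.round(p_naive, 3))
      print("fit on true ages:    ", np.round(p_true, 3))
      # The naive fit typically misstates Linf and k, consistent with the
      # overestimated median length-at-age reported for traditional curves.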

  19. 50 CFR 648.123 - Scup accountability measures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Scup accountability measures. 648.123... Measures for the Scup Fishery § 648.123 Scup accountability measures. (a) Commercial sector period closures...-landing accountability measures, by sector. In the event that a sector ACL has been exceeded and...

  20. 50 CFR 648.103 - Summer flounder accountability measures.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Summer flounder accountability measures... Management Measures for the Summer Flounder Fisheries § 648.103 Summer flounder accountability measures. (a... subsequent single fishing year recreational sector ACT. (d) Non-landing accountability measures, by...

  1. 50 CFR 648.103 - Summer flounder accountability measures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Summer flounder accountability measures... Management Measures for the Summer Flounder Fisheries § 648.103 Summer flounder accountability measures. (a... subsequent single fishing year recreational sector ACT. (d) Non-landing accountability measures, by...

  2. 50 CFR 648.123 - Scup accountability measures.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Scup accountability measures. 648.123... Measures for the Scup Fishery § 648.123 Scup accountability measures. (a) Commercial sector period closures...-landing accountability measures, by sector. In the event that a sector ACL has been exceeded and...

  3. 50 CFR 648.123 - Scup accountability measures.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Scup accountability measures. 648.123... Measures for the Scup Fishery § 648.123 Scup accountability measures. (a) Commercial sector period closures... accountability measure. In the event that the commercial ACL has been exceeded and the overage has not...

  4. Cumulative Measurement Errors for Dynamic Testing of Space Flight Hardware

    NASA Technical Reports Server (NTRS)

    Winnitoy, Susan

    2012-01-01

    measurements during hardware motion and contact. While performing dynamic testing of an active docking system, researchers found that the data from the motion platform, test hardware and two external measurement systems exhibited frame offsets and rotational errors. While the errors were relatively small when considering the motion scale overall, they substantially exceeded the individual accuracies for each component. After evaluating both the static and dynamic measurements, researchers found that the static measurements introduced significantly more error into the system than the dynamic measurements even though, in theory, the static measurement errors should be smaller than the dynamic. In several cases, the magnitude of the errors varied widely for the static measurements. Upon further investigation, researchers found the larger errors to be a consequence of hardware alignment issues, frame location and measurement technique whereas the smaller errors were dependent on the number of measurement points. This paper details and quantifies the individual and cumulative errors of the docking system and describes methods for reducing the overall measurement error. The overall quality of the dynamic docking tests for flight hardware verification was improved by implementing these error reductions.

  5. Optimal measurement strategies for effective suppression of drift errors

    SciTech Connect

    Yashchuk, Valeriy V.

    2009-04-16

    Drifting of experimental set-ups with change of temperature or other environmental conditions is the limiting factor of many, if not all, precision measurements. The measurement error due to a drift is, in some sense, in-between random noise and systematic error. In the general case, the error contribution of a drift cannot be averaged out using a number of measurements identically carried out over a reasonable time. In contrast to systematic errors, drifts are usually not stable enough for a precise calibration. Here a rather general method for effective suppression of the spurious effects caused by slow drifts in a large variety of instruments and experimental set-ups is described. An analytical derivation of an identity, describing the optimal measurement strategies suitable for suppressing the contribution of a slow drift described with a certain order polynomial function, is presented. A recursion rule as well as a general mathematical proof of the identity is given. The effectiveness of the discussed method is illustrated with an application of the derived optimal scanning strategies to precise surface slope measurements with a surface profiler.
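
    The simplest instance of such a strategy is the classic symmetric A-B-B-A ordering, which cancels a linear drift exactly when a difference is estimated; the paper's identity generalizes this to polynomial drift of arbitrary order. A minimal Python sketch with assumed values:

      import numpy as np  # numpy only for consistency; plain floats would do

      drift_rate = 0.01            # drift per time step (hypothetical)
      A_true, B_true = 5.0, 3.0

      def measure(value, t):
          return value + drift_rate * t   # every reading picks up slow linear drift

      # Naive A-B scheme: the drift leaks into the difference
      naive = measure(A_true, 0) - measure(B_true, 1)

      # Symmetric A-B-B-A scheme: linear drift cancels exactly
      a1, b1 = measure(A_true, 0), measure(B_true, 1)
      b2, a2 = measure(B_true, 2), measure(A_true, 3)
      symmetric = (a1 + a2) / 2 - (b1 + b2) / 2

      print("true A-B:", A_true - B_true)   # 2.0
      print("naive:   ", naive)             # off by one step of drift
      print("A-B-B-A: ", symmetric)         # exact for any linear drift rate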

  6. The effect of measurement error on surveillance metrics

    SciTech Connect

    Weaver, Brian Phillip; Hamada, Michael S.

    2012-04-24

    The purpose of this manuscript is to describe different simulation studies that CCS-6 has performed to understand the effects of measurement error on surveillance metrics. We assume that the measured items come from a larger population of items. We denote the random variable associated with an item's value of an attribute of interest as X, and assume X ~ N(μ, σ²). This distribution represents the variability in the population of interest, and we wish to make inference on the parameters μ and σ or on some function of these parameters. When an item X is selected from the larger population, a measurement is made on some attribute of it. This measurement is made with error, and the true value of X is not observed. The rest of this section presents simulation results for the different measurement cases encountered.
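
    As a rough illustration of this setup (all values assumed), the following Python sketch draws items X ~ N(mu, sigma^2), adds independent measurement error, and shows the naive sample standard deviation overstating the population sigma:

      import numpy as np

      rng = np.random.default_rng(1)
      mu, sigma, sigma_e = 10.0, 1.0, 0.5   # hypothetical population and error SDs

      X = rng.normal(mu, sigma, size=10000)        # true attribute values
      Y = X + rng.normal(0, sigma_e, size=10000)   # error-prone measurements

      print("sd of true values: ", X.std(ddof=1))   # ~ sigma
      print("sd of measurements:", Y.std(ddof=1))   # ~ sqrt(sigma^2 + sigma_e^2)
      print("expected inflation:", np.hypot(sigma, sigma_e))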

  7. Working with Error and Uncertainty to Increase Measurement Validity

    ERIC Educational Resources Information Center

    Amrein-Beardsley, Audrey; Barnett, Joshua H.

    2012-01-01

    Over the previous two decades, the era of accountability has amplified efforts to measure educational effectiveness more than Edward Thorndike, the father of educational measurement, likely would have imagined. Expressly, the measurement structure for evaluating educational effectiveness continues to rely increasingly on one sole…

  8. Phase error analysis and compensation considering ambient light for phase measuring profilometry

    NASA Astrophysics Data System (ADS)

    Zhou, Ping; Liu, Xinran; He, Yi; Zhu, Tongjing

    2014-04-01

    The accuracy of a phase measuring profilometry (PMP) system based on the phase-shifting method is inevitably susceptible to gamma non-linearity of the projector-camera pair and to uncertain ambient light. Although many gamma models and phase error compensation methods have been proposed, the effect of ambient light has remained unclear. In this paper, we perform theoretical analysis and experiments on phase error compensation that take account of both gamma non-linearity and uncertain ambient light. First, a mathematical phase error model is proposed to explain in detail how the phase error arises. We show that the phase error is related not only to the gamma non-linearity of the projector-camera pair, but also to the ratio of intensity modulation to average intensity in the fringe patterns captured by the camera, which is affected by the ambient light. Subsequently, an accurate phase error compensation algorithm is proposed based on the mathematical model, in which the relationship between phase error and ambient light is made explicit. Experimental results with a four-step phase-shifting PMP system show that the proposed algorithm alleviates the phase error effectively even in the presence of ambient light.
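
    For context, below is a minimal Python sketch of the standard four-step phase-shifting calculation such a PMP system rests on; the gamma value and fringe parameters are assumed, and this only demonstrates that gamma non-linearity alone produces a phase ripple, not the authors' compensation algorithm:

      import numpy as np

      phi_true = np.linspace(-np.pi, np.pi, 1000, endpoint=False)
      A, B, gamma = 0.5, 0.4, 2.2   # average intensity, modulation, projector gamma

      # Captured fringes for phase shifts 0, pi/2, pi, 3*pi/2, distorted by gamma
      I = [(A + B * np.cos(phi_true + d)) ** gamma
           for d in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]

      phi_meas = np.arctan2(I[3] - I[1], I[0] - I[2])       # four-step estimate
      err = np.angle(np.exp(1j * (phi_meas - phi_true)))    # wrapped phase error
      print("peak phase error (rad):", np.abs(err).max())   # nonzero due to gamma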

  9. GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS

    EPA Science Inventory

    Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...

  10. Modified McLeod pressure gage eliminates measurement errors

    NASA Technical Reports Server (NTRS)

    Kells, M. C.

    1966-01-01

    Modification of a McLeod gage eliminates errors in measuring absolute pressure of gases in the vacuum range. A valve which is internal to the gage and is magnetically actuated is positioned between the mercury reservoir and the sample gas chamber.

  11. Nonparametric Item Response Curve Estimation with Correction for Measurement Error

    ERIC Educational Resources Information Center

    Guo, Hongwen; Sinharay, Sandip

    2011-01-01

    Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally.…

  12. Effects of holding time and measurement error on culturing Legionella in environmental water samples.

    PubMed

    Flanders, W Dana; Kirkland, Kimberly H; Shelton, Brian G

    2014-10-01

    Outbreaks of Legionnaires' disease require environmental testing of water samples from potentially implicated building water systems to identify the source of exposure. A previous study reports a large impact on Legionella sample results due to shipping and delays in sample processing. Specifically, that study, without accounting for measurement error, reports that more than half of shipped samples tested had Legionella levels that arbitrarily changed up or down by one or more logs, and the authors attribute this result to shipping time. Accordingly, we conducted a study to determine the effects of sample holding/shipping time on Legionella sample results while taking into account measurement error, which has not previously been addressed. We analyzed 159 samples, each split into 16 aliquots, of which one-half (8) were processed promptly after collection. The remaining half (8) were processed the following day to assess the impact of holding/shipping time. A total of 2544 samples were analyzed, including replicates. After accounting for inherent measurement error, we found that the effect of holding time on observed Legionella counts was small and should have no practical impact on interpretation of results. Holding samples increased the root mean squared error by only about 3-8%. Notably, for only one of 159 samples did the average of the 8 replicate counts change by 1 log. Thus, our findings do not support the hypothesis of frequent, significant (≥ 1 log10 unit) Legionella colony count changes due to holding.
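
    A minimal Python sketch of the replicate logic above (data simulated, not the study's actual counts): the within-sample scatter of replicate aliquots estimates the inherent measurement error against which any holding-time shift must be judged.

      import numpy as np

      rng = np.random.default_rng(2)
      true_log = rng.normal(3.0, 1.0, size=(159, 1))   # hypothetical log10 counts
      meas_sd = 0.15                                   # replicate error, log10 scale

      prompt = true_log + rng.normal(0, meas_sd, size=(159, 8))
      held = true_log + rng.normal(0, meas_sd, size=(159, 8))  # no true shift here

      shift = held.mean(axis=1) - prompt.mean(axis=1)
      within_sd = np.concatenate([prompt, held], axis=1).std(axis=1, ddof=1).mean()
      print("mean replicate SD (log10):", round(within_sd, 3))
      print("samples shifting >= 1 log10:", int((np.abs(shift) >= 1).sum()))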

  13. 50 CFR 648.143 - Black sea bass Accountability Measures.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Black sea bass Accountability Measures... Management Measures for the Black Sea Bass Fishery § 648.143 Black sea bass Accountability Measures. (a..., from a subsequent single fishing year recreational sector ACT. (c) Non-landing accountability...

  14. 50 CFR 648.143 - Black sea bass Accountability Measures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Black sea bass Accountability Measures... Management Measures for the Black Sea Bass Fishery § 648.143 Black sea bass Accountability Measures. (a..., from a subsequent single fishing year recreational sector ACT. (c) Non-landing accountability...

  15. Correcting for Measurement Error in Time-Varying Covariates in Marginal Structural Models.

    PubMed

    Kyle, Ryan P; Moodie, Erica E M; Klein, Marina B; Abrahamowicz, Michał

    2016-08-01

    Unbiased estimation of causal parameters from marginal structural models (MSMs) requires a fundamental assumption of no unmeasured confounding. Unfortunately, the time-varying covariates used to obtain inverse probability weights are often error-prone. Although substantial measurement error in important confounders is known to undermine control of confounders in conventional unweighted regression models, this issue has received comparatively limited attention in the MSM literature. Here we propose a novel application of the simulation-extrapolation (SIMEX) procedure to address measurement error in time-varying covariates, and we compare 2 approaches. The direct approach to SIMEX-based correction targets outcome model parameters, while the indirect approach corrects the weights estimated using the exposure model. We assess the performance of the proposed methods in simulations under different clinically plausible assumptions. The simulations demonstrate that measurement errors in time-dependent covariates may induce substantial bias in MSM estimators of causal effects of time-varying exposures, and that both proposed SIMEX approaches yield practically unbiased estimates in scenarios featuring low-to-moderate degrees of error. We illustrate the proposed approach in a simple analysis of the relationship between sustained virological response and liver fibrosis progression among persons infected with hepatitis C virus, while accounting for measurement error in γ-glutamyltransferase, using data collected in the Canadian Co-infection Cohort Study from 2003 to 2014.
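
    To make the SIMEX idea concrete, here is a numpy-only sketch for a plain linear model (hypothetical data; the paper's MSM application is more involved): extra noise is added at increasing multiples lambda of the known error variance, the naive slope is tracked, and a quadratic fit is extrapolated back to lambda = -1.

      import numpy as np

      rng = np.random.default_rng(3)
      n, beta, sigma_u = 5000, 1.0, 0.8
      x = rng.normal(0, 1, n)
      y = beta * x + rng.normal(0, 1, n)
      w = x + rng.normal(0, sigma_u, n)          # error-prone covariate

      lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
      slopes = []
      for lam in lambdas:
          # average over pseudo data sets with added noise variance lam * sigma_u^2
          b = [np.polyfit(w + rng.normal(0, np.sqrt(lam) * sigma_u, n), y, 1)[0]
               for _ in range(20)]
          slopes.append(np.mean(b))

      coef = np.polyfit(lambdas, slopes, 2)      # quadratic extrapolant
      print("naive slope:", round(slopes[0], 3))               # attenuated toward 0
      print("SIMEX slope:", round(np.polyval(coef, -1.0), 3))  # approximately beta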

  16. Error Correction for Foot Clearance in Real-Time Measurement

    NASA Astrophysics Data System (ADS)

    Wahab, Y.; Bakar, N. A.; Mazalan, M.

    2014-04-01

    Mobility performance level, fall-related injuries, undetected disease and aging stage can be detected through examination of the gait pattern. The gait pattern is normally related directly to the performance condition of the lower limbs, in addition to other significant factors. For that reason, the foot is the most important part of an in-situ gait analysis measurement system, and it directly affects the gait pattern. This paper reviews the development of an ultrasonic system with error correction using an inertial measurement unit for real-life measurement of foot clearance in gait analysis. The paper begins with the related literature, where the necessity of the measurement is introduced, followed by the methodology section covering the problem and its solution. Next, the paper explains the experimental setup for the error correction using the proposed instrumentation, with results and discussion. Finally, it outlines the planned future work.

  17. Automatic diagnostic system for measuring ocular refractive errors

    NASA Astrophysics Data System (ADS)

    Ventura, Liliane; Chiaradia, Caio; de Sousa, Sidney J. F.; de Castro, Jarbas C.

    1996-05-01

    Ocular refractive errors (myopia, hyperopia and astigmatism) are automatically and objectively determined by projecting a light target onto the retina using an infra-red (850 nm) diode laser. The light vergence which emerges from the eye (light scattered from the retina) is evaluated in order to determine the corresponding ametropia. The system basically consists of projecting a target (ring) onto the retina and analyzing the scattered light with a CCD camera. The light scattered by the eye is divided into six portions (3 meridians) by using a mask and a set of six prisms. The distance between the two images provided by each of the meridians leads to the refractive error of that meridian. Hence, it is possible to determine the refractive error at three different meridians, which gives the exact solution for the eye's refractive error (spherical and cylindrical components and the axis of the astigmatism). The computational basis used for the image analysis is a heuristic search, which provides satisfactory calculation times for our purposes. The peculiar shape of the target, a ring, provides a wider range of measurement and also saves parts of the retina from unnecessary laser irradiation. Measurements were done on artificial and in vivo eyes (using cycloplegics) and the results were in good agreement with retinoscopic measurements.

  18. Error reduction in gamma-spectrometric measurements of nuclear materials enrichment

    NASA Astrophysics Data System (ADS)

    Zaplatkina, D.; Semenov, A.; Tarasova, E.; Zakusilov, V.; Kuznetsov, M.

    2016-06-01

    The paper analyzes the uncertainty in determining the enrichment of uranium samples using non-destructive methods, to ensure the functioning of the nuclear materials accounting and control system. The measurements were performed with a scintillation detector based on a sodium iodide crystal and with a semiconductor germanium detector. Samples containing different masses of uranium oxide were used for the measurements. Statistical analysis of the results showed that the maximum enrichment error in a scintillation detector measurement can reach 82%. A bias correction, calculated from the data obtained with the semiconductor detector, reduces the error in the determination of uranium enrichment by 47.2% on average. Thus, the use of a statistically calculated bias correction allows scintillation detectors to be used for nuclear materials accounting and control.

  19. Position determination and measurement error analysis for the spherical proof mass with optical shadow sensing

    NASA Astrophysics Data System (ADS)

    Hou, Zhendong; Wang, Zhaokui; Zhang, Yulin

    2016-09-01

    To meet the very demanding requirements for space gravity detection, the gravitational reference sensor (GRS), as the key payload, needs to offer the relative position of the proof mass with extraordinarily high precision and low disturbance. The position determination and error analysis for the GRS with a spherical proof mass is addressed. Firstly, the concept of measuring the freely falling proof mass with optical shadow sensors is presented. Then, based on the optical signal model, the general formula for position determination is derived. Two types of measurement system are proposed, for which the analytical solution to the three-dimensional position can be attained. Thirdly, with the assumption of Gaussian beams, the error propagation models for the variation of spot size and optical power, the effect of beam divergence, the chattering of the beam center, and the deviation of beam direction are given respectively. Finally, numerical simulations taking into account the model uncertainty of beam divergence, spherical edge and beam diffraction are carried out to validate the performance of the error propagation models. The results show that these models can be used to estimate the effect of each error source with an acceptable accuracy, better than 20%. Moreover, the simulation of the three-dimensional position determination with one of the proposed measurement systems shows that the position error is comparable to the error of the output of each sensor.

  20. Error and uncertainty in Raman thermal conductivity measurements

    SciTech Connect

    Thomas Edwin Beechem; Yates, Luke; Graham, Samuel

    2015-04-22

    We investigated error and uncertainty in Raman thermal conductivity measurements via finite element based numerical simulation of two geometries often employed -- Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter -- termed the Raman stress factor -- is derived to identify when stress effects will induce large levels of error. Together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.

  1. Error and uncertainty in Raman thermal conductivity measurements

    DOE PAGES

    Thomas Edwin Beechem; Yates, Luke; Graham, Samuel

    2015-04-22

    We investigated error and uncertainty in Raman thermal conductivity measurements via finite element based numerical simulation of two geometries often employed -- Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter -- termed the Raman stress factor -- is derived to identify when stress effects will induce large levels of error. Together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.

  2. Error reduction in retrievals of atmospheric species from symmetrically measured lidar sounding absorption spectra.

    PubMed

    Chen, Jeffrey R; Numata, Kenji; Wu, Stewart T

    2014-10-20

    We report new methods for retrieving atmospheric constituents from symmetrically-measured lidar-sounding absorption spectra. The forward model accounts for laser line-center frequency noise and broadened line-shape, and is essentially linearized by linking estimated optical-depths to the mixing ratios. Errors from the spectral distortion and laser frequency drift are substantially reduced by averaging optical-depths at each pair of symmetric wavelength channels. Retrieval errors from measurement noise and model bias are analyzed parametrically and numerically for multiple atmospheric layers, to provide deeper insight. Errors from surface height and reflectance variations are reduced to tolerable levels by "averaging before log" with pulse-by-pulse ranging knowledge incorporated.

  3. Error in total ozone measurements arising from aerosol attenuation

    NASA Technical Reports Server (NTRS)

    Thomas, R. W. L.; Basher, R. E.

    1979-01-01

    A generalized least squares method for deducing both total ozone and aerosol extinction spectrum parameters from Dobson spectrophotometer measurements was developed. An error analysis applied to this system indicates that there is little advantage to additional measurements once a sufficient number of line pairs have been employed to solve for the selected detail in the attenuation model. It is shown that when there is a predominance of small particles (less than about 0.35 microns in diameter) the total ozone from the standard AD system is too high by about one percent. When larger particles are present the derived total ozone may be an overestimate or an underestimate but serious errors occur only for narrow polydispersions.
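
    Schematically, such a joint retrieval is a generalized least squares solve; the Python sketch below uses a placeholder design matrix and noise covariance (not the actual Dobson line-pair coefficients) purely to show the estimator's form:

      import numpy as np

      rng = np.random.default_rng(4)
      # Columns: total ozone plus three attenuation-model parameters (placeholders)
      X = rng.normal(size=(6, 4))                       # 6 line pairs, 4 unknowns
      beta_true = np.array([300.0, 1.0, 0.1, 0.05])
      Sigma = np.diag(rng.uniform(0.5, 2.0, size=6))    # unequal measurement noise
      y = X @ beta_true + rng.multivariate_normal(np.zeros(6), Sigma)

      W = np.linalg.inv(Sigma)                          # weight = inverse covariance
      beta_gls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
      print("GLS estimate:", np.round(beta_gls, 2))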

  4. Material Control and Accountability Measurements for FB-Line Processes

    SciTech Connect

    Casella, V.R.

    2002-12-03

    This report provides an overview of FB-Line processes and nuclear material accountability measurements. Flow diagrams for the product, waste, and packaging and stabilization processes are given, along with the accountability measurements performed before and after each of these processes. Brief descriptions of these measurements are provided. This information provides a better understanding of the general FB-Line processes and of how MC&A (material control and accountability) measurements are used to keep track of the accountable material inventory.

  5. The effects of errors in the measurement of continuous exposure variables on the assessment of risks

    SciTech Connect

    Gilbert, E.S.

    1988-06-01

    Exposure variables in epidemiological studies are seldom measured without error. However, it is unusual for such errors to be taken into account in analyzing data, and thus distortion of results may occur. These distorting effects are evaluated for the fitting of linear and log-linear proportional hazards models based on a single continuous exposure variable, and are quantified under several sets of assumptions regarding the conditional distributions of the measured exposures given the true exposures, as well as assumptions regarding the true exposure distributions. For a wide range of assumptions, it is found that the most serious consequence of ignoring error is downward bias in the estimation of regression coefficients. In addition, the shape of the dose-response function may be distorted, and variances of estimated parameters may be underestimated. Except for the case of very large errors combined with skewed exposure distributions, tests of the null hypothesis of no effect that ignore error are found to be nearly as powerful as an optimal test, available if the error structure is known.

  6. PROCESSING AND ANALYSIS OF THE MEASURED ALIGNMENT ERRORS FOR RHIC.

    SciTech Connect

    PILAT,F.; HEMMER,M.; PTITSIN,V.; TEPIKIAN,S.; TRBOJEVIC,D.

    1999-03-29

    All elements of the Relativistic Heavy Ion Collider (RHIC) have been installed in ideal survey locations, which are defined as the optimum locations of the fiducials with respect to the positions generated by the design. The alignment process included a presurvey of all elements which could affect the beams. During this procedure, special attention was paid to the precise determination of the quadrupole centers as well as the roll angles of the quadrupoles and dipoles. After installation the machine was surveyed, and the resulting as-built measured positions of the fiducials were stored and structured in the survey database. We describe how the alignment errors, inferred by comparison of ideal and as-built data, have been processed and analyzed by including them in the RHIC modeling software. The RHIC model, which also includes individual measured errors for all magnets in the machine and is automatically generated from databases, allows the study of the impact of the measured alignment errors on the machine.

  7. Geometric error measurement of spiral bevel gears and data processing

    NASA Astrophysics Data System (ADS)

    Cao, Xue-mei; Cao, Qing-mei; Xu, Hao

    2008-12-01

    This paper calculates the theoretical tooth surface of the spiral bevel gear and inspects the actual tooth surface using a coordinate measuring machine, which provides an objective and quantitative method for inspecting the tooth surfaces of spiral bevel gears. For many reasons there are deviations between the actual tooth surface and the theoretical tooth surface. Based on differential geometry and space engagement theory, the paper deduces the analytical representation of the theoretical tooth surface through the process of gear generation. After comparing the coordinates of the actual gear tooth surface and the theoretical tooth surface, a high-precision analysis graphic of the tooth surface errors can be obtained through measurement data processing. A pair of aviation spiral bevel gears manufactured on a Phoenix 800PG grinding machine were inspected with a Mahr measuring instrument. The comparison of gear surface errors, inspected respectively by the method of this paper and by Mahr's software, shows consistent error distributions. The experiment verifies the validity and feasibility of the method presented in this paper.

  8. Putting reward in art: A tentative prediction error account of visual art.

    PubMed

    Van de Cruys, Sander; Wagemans, Johan

    2011-01-01

    The predictive coding model is increasingly and fruitfully used to explain a wide range of findings in perception. Here we discuss the potential of this model in explaining the mechanisms underlying aesthetic experiences. Traditionally art appreciation has been associated with concepts such as harmony, perceptual fluency, and the so-called good Gestalt. We observe that more often than not great artworks blatantly violate these characteristics. Using the concept of prediction error from the predictive coding approach, we attempt to resolve this contradiction. We argue that artists often destroy predictions that they have first carefully built up in their viewers, and thus highlight the importance of negative affect in aesthetic experience. However, the viewer often succeeds in recovering the predictable pattern, sometimes on a different level. The ensuing rewarding effect is derived from this transition from a state of uncertainty to a state of increased predictability. We illustrate our account with several example paintings and with a discussion of art movements and individual differences in preference. On a more fundamental level, our theorizing leads us to consider the affective implications of prediction confirmation and violation. We compare our proposal to other influential theories on aesthetics and explore its advantages and limitations.

  9. Putting reward in art: A tentative prediction error account of visual art

    PubMed Central

    Van de Cruys, Sander; Wagemans, Johan

    2011-01-01

    The predictive coding model is increasingly and fruitfully used to explain a wide range of findings in perception. Here we discuss the potential of this model in explaining the mechanisms underlying aesthetic experiences. Traditionally art appreciation has been associated with concepts such as harmony, perceptual fluency, and the so-called good Gestalt. We observe that more often than not great artworks blatantly violate these characteristics. Using the concept of prediction error from the predictive coding approach, we attempt to resolve this contradiction. We argue that artists often destroy predictions that they have first carefully built up in their viewers, and thus highlight the importance of negative affect in aesthetic experience. However, the viewer often succeeds in recovering the predictable pattern, sometimes on a different level. The ensuing rewarding effect is derived from this transition from a state of uncertainty to a state of increased predictability. We illustrate our account with several example paintings and with a discussion of art movements and individual differences in preference. On a more fundamental level, our theorizing leads us to consider the affective implications of prediction confirmation and violation. We compare our proposal to other influential theories on aesthetics and explore its advantages and limitations. PMID:23145260

  10. Structural Modeling of Measurement Error in Generalized Linear Models with Rasch Measures as Covariates

    ERIC Educational Resources Information Center

    Battauz, Michela; Bellio, Ruggero

    2011-01-01

    This paper proposes a structural analysis for generalized linear models when some explanatory variables are measured with error and the measurement error variance is a function of the true variables. The focus is on latent variables investigated on the basis of questionnaires and estimated using item response theory models. Latent variable…

  11. A statistical model for measurement error that incorporates variation over time in the target measure, with application to nutritional epidemiology.

    PubMed

    Freedman, Laurence S; Midthune, Douglas; Dodd, Kevin W; Carroll, Raymond J; Kipnis, Victor

    2015-11-30

    Most statistical methods that adjust analyses for measurement error assume that the target exposure T is a fixed quantity for each individual. However, in many applications, the value of T for an individual varies with time. We develop a model that accounts for such variation, describing the model within the framework of a meta-analysis of validation studies of dietary self-report instruments, where the reference instruments are biomarkers. We demonstrate that in this application, the estimates of the attenuation factor and correlation with true intake, key parameters quantifying the accuracy of the self-report instrument, are sometimes substantially modified under the time-varying exposure model compared with estimates obtained under a traditional fixed-exposure model. We conclude that accounting for the time element in measurement error problems is potentially important.
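
    For reference, the two accuracy parameters named above take simple forms under the traditional fixed-exposure classical error model Q = T + e, which the time-varying model generalizes: the attenuation factor is lambda = var(T) / (var(T) + var(e)) and the correlation with truth is sqrt(lambda). A Python sketch with assumed variances checks them empirically:

      import numpy as np

      rng = np.random.default_rng(5)
      var_T, var_e = 1.0, 1.5         # hypothetical true-intake and error variances
      T = rng.normal(0, np.sqrt(var_T), 100000)        # true intake
      Q = T + rng.normal(0, np.sqrt(var_e), 100000)    # self-reported intake

      lam = np.cov(T, Q)[0, 1] / np.var(Q, ddof=1)     # empirical attenuation
      print("attenuation factor:", round(lam, 3))      # ~ 1.0 / 2.5 = 0.4
      print("corr with truth:   ", round(np.corrcoef(T, Q)[0, 1], 3))  # ~ sqrt(0.4)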

  12. Advancing the science of measurement of diagnostic errors in healthcare: the Safer Dx framework

    PubMed Central

    Singh, Hardeep; Sittig, Dean F

    2015-01-01

    Diagnostic errors are major contributors to harmful patient outcomes, yet they remain a relatively understudied and unmeasured area of patient safety. Although they are estimated to affect about 12 million Americans each year in ambulatory care settings alone, both the conceptual and pragmatic scientific foundation for their measurement is under-developed. Health care organizations do not have the tools and strategies to measure diagnostic safety and most have not integrated diagnostic error into their existing patient safety programs. Further progress toward reducing diagnostic errors will hinge on our ability to overcome measurement-related challenges. In order to lay a robust groundwork for measurement and monitoring techniques to ensure diagnostic safety, we recently developed a multifaceted framework to advance the science of measuring diagnostic errors (The Safer Dx framework). In this paper, we describe how the framework serves as a conceptual foundation for system-wide safety measurement, monitoring and improvement of diagnostic error. The framework accounts for the complex adaptive sociotechnical system in which diagnosis takes place (the structure), the distributed process dimensions in which diagnoses evolve beyond the doctor's visit (the process) and the outcomes of a correct and timely “safe diagnosis” as well as patient and health care outcomes (the outcomes). We posit that the Safer Dx framework can be used by a variety of stakeholders including researchers, clinicians, health care organizations and policymakers, to stimulate both retrospective and more proactive measurement of diagnostic errors. The feedback and learning that would result will help develop subsequent interventions that lead to safer diagnosis, improved value of health care delivery and improved patient outcomes. PMID:25589094

  13. A Discriminant Function Approach to Adjust for Processing and Measurement Error When a Biomarker is Assayed in Pooled Samples

    PubMed Central

    Lyles, Robert H.; Van Domelen, Dane; Mitchell, Emily M.; Schisterman, Enrique F.

    2015-01-01

    Pooling biological specimens prior to performing expensive laboratory assays has been shown to be a cost effective approach for estimating parameters of interest. In addition to requiring specialized statistical techniques, however, the pooling of samples can introduce assay errors due to processing, possibly in addition to measurement error that may be present when the assay is applied to individual samples. Failure to account for these sources of error can result in biased parameter estimates and ultimately faulty inference. Prior research addressing biomarker mean and variance estimation advocates hybrid designs consisting of individual as well as pooled samples to account for measurement and processing (or pooling) error. We consider adapting this approach to the problem of estimating a covariate-adjusted odds ratio (OR) relating a binary outcome to a continuous exposure or biomarker level assessed in pools. In particular, we explore the applicability of a discriminant function-based analysis that assumes normal residual, processing, and measurement errors. A potential advantage of this method is that maximum likelihood estimation of the desired adjusted log OR is straightforward and computationally convenient. Moreover, in the absence of measurement and processing error, the method yields an efficient unbiased estimator for the parameter of interest assuming normal residual errors. We illustrate the approach using real data from an ancillary study of the Collaborative Perinatal Project, and we use simulations to demonstrate the ability of the proposed estimators to alleviate bias due to measurement and processing error. PMID:26593934

  14. A Discriminant Function Approach to Adjust for Processing and Measurement Error When a Biomarker is Assayed in Pooled Samples.

    PubMed

    Lyles, Robert H; Van Domelen, Dane; Mitchell, Emily M; Schisterman, Enrique F

    2015-11-01

    Pooling biological specimens prior to performing expensive laboratory assays has been shown to be a cost effective approach for estimating parameters of interest. In addition to requiring specialized statistical techniques, however, the pooling of samples can introduce assay errors due to processing, possibly in addition to measurement error that may be present when the assay is applied to individual samples. Failure to account for these sources of error can result in biased parameter estimates and ultimately faulty inference. Prior research addressing biomarker mean and variance estimation advocates hybrid designs consisting of individual as well as pooled samples to account for measurement and processing (or pooling) error. We consider adapting this approach to the problem of estimating a covariate-adjusted odds ratio (OR) relating a binary outcome to a continuous exposure or biomarker level assessed in pools. In particular, we explore the applicability of a discriminant function-based analysis that assumes normal residual, processing, and measurement errors. A potential advantage of this method is that maximum likelihood estimation of the desired adjusted log OR is straightforward and computationally convenient. Moreover, in the absence of measurement and processing error, the method yields an efficient unbiased estimator for the parameter of interest assuming normal residual errors. We illustrate the approach using real data from an ancillary study of the Collaborative Perinatal Project, and we use simulations to demonstrate the ability of the proposed estimators to alleviate bias due to measurement and processing error. PMID:26593934

  15. Error reduction techniques for measuring long synchrotron mirrors

    SciTech Connect

    Irick, S.

    1998-07-01

    Many instruments and techniques are used for measuring long mirror surfaces. A Fizeau interferometer may be used to measure mirrors much longer than the interferometer aperture size by using grazing incidence at the mirror surface and analyzing the light reflected from a flat end mirror. Advantages of this technique are data acquisition speed and use of a common instrument. Disadvantages are reduced sampling interval, uncertainty of tangential position, and sagittal/tangential aspect ratio other than unity. Also, deep aspheric surfaces cannot be measured on a Fizeau interferometer without a specially made fringe nulling holographic plate. Other scanning instruments have been developed for measuring height, slope, or curvature profiles of the surface, but lack accuracy for very long scans required for X-ray synchrotron mirrors. The Long Trace Profiler (LTP) was developed specifically for long x-ray mirror measurement, and still outperforms other instruments, especially for aspheres. Thus, this paper focuses on error reduction techniques for the LTP.

  16. Modeling Behavioral Measures of Error Detection in Choice Tasks: Response Monitoring versus Conflict Monitoring

    ERIC Educational Resources Information Center

    Steinhauser, Marco; Maier, Martin; Hubner, Ronald

    2008-01-01

    The present study investigated the mechanisms underlying error detection in the error signaling response. The authors tested between a response monitoring account and a conflict monitoring account. By implementing each account within the neural network model of N. Yeung, M. M. Botvinick, and J. D. Cohen (2004), they demonstrated that both accounts…

  17. Improving optical bench radius measurements using stage error motion data

    SciTech Connect

    Schmitz, Tony L.; Gardner, Neil; Vaughn, Matthew; Medicus, Kate; Davies, Angela

    2008-12-20

    We describe the application of a vector-based radius approach to optical bench radius measurements in the presence of imperfect stage motions. In this approach, the radius is defined using a vector equation and homogeneous transformation matrix formalism. This is in contrast to the typical technique, where the displacement between the confocal and cat's eye null positions alone is used to determine the test optic radius. An important aspect of the vector-based radius definition is the intrinsic correction for measurement biases, such as straightness errors in the stage motion and cosine misalignment between the stage and displacement gauge axis, which lead to an artificially small radius value if the traditional approach is employed. Measurement techniques and results are provided for the stage error motions, which are then combined with the setup geometry through the analysis to determine the radius of curvature for a spherical artifact. Comparisons are shown between the new vector-based radius calculation, traditional radius computation, and a low uncertainty mechanical measurement. Additionally, the measurement uncertainty for the vector-based approach is determined using Monte Carlo simulation and compared to experimental results.

  18. 50 CFR 648.233 - Spiny dogfish Accountability Measures (AMs).

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 50 Wildlife and Fisheries 10 2011-10-01 2011-10-01 false Spiny dogfish Accountability Measures (AMs). 648.233 Section 648.233 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL... Management Measures for the Spiny Dogfish Fishery § 648.233 Spiny dogfish Accountability Measures (AMs)....

  19. 50 CFR 648.293 - Tilefish accountability measures.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Tilefish accountability measures. 648.293 Section 648.293 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND... Measures for the Tilefish Fishery § 648.293 Tilefish accountability measures. (a) If the ACL is...

  20. 50 CFR 648.233 - Spiny dogfish Accountability Measures (AMs).

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Spiny dogfish Accountability Measures (AMs). 648.233 Section 648.233 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL... Management Measures for the Spiny Dogfish Fishery § 648.233 Spiny dogfish Accountability Measures (AMs)....

  1. 50 CFR 648.293 - Tilefish accountability measures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Tilefish accountability measures. 648.293 Section 648.293 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND... Measures for the Tilefish Fishery § 648.293 Tilefish accountability measures. (a) If the ACL is...

  2. 50 CFR 648.163 - Bluefish Accountability Measures (AMs).

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Bluefish Accountability Measures (AMs). 648.163 Section 648.163 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC... Management Measures for the Atlantic Bluefish Fishery § 648.163 Bluefish Accountability Measures (AMs)....

  3. 50 CFR 648.233 - Spiny dogfish Accountability Measures (AMs).

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Spiny dogfish Accountability Measures (AMs). 648.233 Section 648.233 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL... Management Measures for the Spiny Dogfish Fishery § 648.233 Spiny dogfish Accountability Measures (AMs)....

  4. 50 CFR 648.293 - Tilefish accountability measures.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Tilefish accountability measures. 648.293 Section 648.293 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND... Measures for the Tilefish Fishery § 648.293 Tilefish accountability measures. (a) If the ACL is...

  5. 50 CFR 648.163 - Bluefish Accountability Measures (AMs).

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Bluefish Accountability Measures (AMs). 648.163 Section 648.163 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC... Management Measures for the Atlantic Bluefish Fishery § 648.163 Bluefish Accountability Measures (AMs)....

  6. Mass measurement errors caused by 'local' frequency perturbations in FTICR mass spectrometry.

    PubMed

    Masselon, Christophe; Tolmachev, Aleksey V; Anderson, Gordon A; Harkewicz, Richard; Smith, Richard D

    2002-01-01

    One of the key qualities of mass spectrometric measurements for biomolecules is the mass measurement accuracy (MMA) obtained. FTICR presently provides the highest MMA over a broad m/z range. However, due to space charge effects, the achievable MMA depends crucially on the number of ions trapped in the ICR cell for a measurement. Thus, beyond some point, as the effective sensitivity and dynamic range of a measurement increase, MMA tends to decrease. While analyzing deviations from the commonly used calibration law in FTICR, we found systematic errors that are not accounted for by a "global" space charge correction approach. The analysis of these errors and their dependence on charge population and post-excite radius has led us to conclude that each ion cloud experiences a different interaction with other ion clouds. We propose a novel calibration function which is shown to provide an improvement in MMA for all the spectra studied.
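
    The commonly used calibration law referred to is typically of the Ledford form m/z = A/f + B/f^2, with the B term absorbing a global space-charge shift; a minimal Python sketch with placeholder constants:

      import numpy as np

      A, B = 1.0e8, -4.0e5   # calibration constants (arbitrary placeholders)

      def mass_from_freq(f_hz):
          # Ledford-type two-term calibration: m/z = A/f + B/f^2
          return A / f_hz + B / f_hz**2

      f = np.array([150e3, 300e3, 600e3])   # cyclotron frequencies in Hz
      print(np.round(mass_from_freq(f), 4))
      # A single global B cannot absorb cloud-to-cloud interactions, which is the
      # systematic error that a per-cloud calibration function targets.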

  7. 50 CFR 622.49 - Accountability measures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 622.49 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE CARIBBEAN, GULF, AND SOUTH ATLANTIC Management Measures... of the prior year's quota. The applicable commercial ACLs for SWG, in gutted weight, are 7.99...

  8. 50 CFR 622.49 - Accountability measures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE CARIBBEAN, GULF, AND SOUTH ATLANTIC Management Measures.... (5) Black sea bass—(i) Commercial fishery. If commercial landings, as estimated by the SRD, reach or... the recreational ACL of 409,000 lb (185,519 kg), gutted weight, and black sea bass are...

  9. Propagation of radiosonde pressure sensor errors to ozonesonde measurements

    NASA Astrophysics Data System (ADS)

    Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.

    2014-01-01

    Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.6 hPa in the free troposphere, with nearly a third > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~ 5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~ 30 km), can approach greater than ±10% (> 25% of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when the O3 profile is

  10. Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements

    NASA Technical Reports Server (NTRS)

    Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.

    2014-01-01

    Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde-ozonesonde launches from the Southern Hemisphere subtropics to Northern mid-latitudes are considered, with launches between 2005-2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI-Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.6 hPa in the free troposphere, with nearly a third > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~5 percent of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96 percent of launches lie within ±5 percent O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~30 km) can approach greater than ±10 percent (> 25 percent of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when
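
    Because mixing ratio is ozone partial pressure divided by ambient pressure, a radiosonde pressure offset maps directly into a fractional O3MR error; a back-of-envelope Python sketch with illustrative values consistent with the figures above:

      p_true_hpa = 20.0     # approximate ambient pressure near 26 km
      p_offset_hpa = 1.0    # sensor offset of the size reported at this altitude
      p_o3_mpa = 10.0       # ozone partial pressure in mPa (hypothetical)

      o3mr_true = p_o3_mpa / (p_true_hpa * 1e5)                # 1 hPa = 1e5 mPa
      o3mr_meas = p_o3_mpa / ((p_true_hpa - p_offset_hpa) * 1e5)
      frac_err = (o3mr_meas - o3mr_true) / o3mr_true
      print("fractional O3MR error: %.1f%%" % (100 * frac_err))   # ~ +5.3%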

  11. Data Reconciliation and Gross Error Detection: A Filtered Measurement Test

    SciTech Connect

    Himour, Y.

    2008-06-12

    Measured process data commonly contain inaccuracies because the measurements are obtained using imperfect instruments. As well as random errors, one can expect systematic bias caused by miscalibrated instruments or outliers caused by process peaks such as sudden power fluctuations. Data reconciliation is the adjustment of a set of process data based on a model of the process so that the derived estimates conform to natural laws. In this paper, we explore a predictor-corrector filter based on data reconciliation and then combine a modified version of the measurement test with the studied filter to detect probable outliers that can affect process measurements. The strategy presented is tested using dynamic simulation of an inverted pendulum.
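
    The core reconciliation step is compact enough to sketch. Below is a minimal weighted least-squares reconciliation with a measurement test for gross errors, in Python; the linear constraint, noise levels, and flow-meter example are hypothetical, not taken from the paper.

        import numpy as np

        def reconcile(x, V, A):
            """Weighted least-squares reconciliation for linear constraints A @ x_true = 0.

            Returns reconciled estimates and a measurement-test statistic per sensor."""
            S = A @ V @ A.T                        # covariance of the constraint residuals
            K = V @ A.T @ np.linalg.inv(S)
            a = K @ (A @ x)                        # adjustments applied to the raw data
            W = K @ A @ V                          # covariance of those adjustments
            z = np.abs(a) / np.sqrt(np.diag(W))    # measurement test: large z -> gross error
            return x - a, z

        # Hypothetical example: three flow meters around a splitter, x1 = x2 + x3.
        A = np.array([[1.0, -1.0, -1.0]])
        V = np.diag([0.5, 0.5, 0.5]) ** 2          # measurement error covariance
        x = np.array([10.0, 6.0, 5.5])             # raw readings violate the balance
        x_hat, z = reconcile(x, V, A)
        print(x_hat, z)                            # x_hat satisfies A @ x_hat = 0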

  12. On the Measurement Errors of the Joss-Waldvogel Disdrometer

    NASA Technical Reports Server (NTRS)

    Tokay, Ali; Wolff, K. R.; Bashor, Paul; Dursun, O. K.

    2003-01-01

    The Joss-Waldvogel (JW) disdrometer is considered to be a reference instrument for drop size distribution measurements. It has been widely used in many field campaigns as part of validation efforts of radar rainfall estimation. It has also been incorporated in radar rain gauge rainfall observation networks at several ground validation sites for NASA's Tropical Rainfall Measuring Mission (TRMM). It is anticipated that the Joss-Waldvogel disdrometer will be one of the key instruments for ground validation for the upcoming Global Precipitation Measurement (GPM) mission. The JW is an impact type disdrometer and has several shortcomings. One such shortcoming is that it underestimates the number of small drops in heavy rain due to the disdrometer dead time. The detection of smaller drops is also suppressed in the presence of background noise. Further, drops larger than 5.0 to 5.5 mm diameter cannot be distinguished by the disdrometer. The JW assumes that all raindrops fall at their terminal fall speed. Ignoring the influence of vertical air motion on raindrop fall speed results in errors in determining the raindrop size. Also, the bulk descriptors of rainfall that require the fall speed of the drops will be overestimated or underestimated due to errors in measured size and assumed fall velocity. Long-term observations from a two-dimensional video disdrometer are employed to simulate the JW disdrometer and assess how its shortcomings affect radar rainfall estimation. Data collected from collocated JW disdrometers were also incorporated in this study.

  13. Validation and Error Characterization for the Global Precipitation Measurement

    NASA Technical Reports Server (NTRS)

    Bidwell, Steven W.; Adams, W. J.; Everett, D. F.; Smith, E. A.; Yuter, S. E.

    2003-01-01

    The Global Precipitation Measurement (GPM) is an international effort to increase scientific knowledge on the global water cycle with specific goals of improving the understanding and the predictions of climate, weather, and hydrology. These goals will be achieved through several satellites specifically dedicated to GPM along with the integration of numerous meteorological satellite data streams from international and domestic partners. The GPM effort is led by the National Aeronautics and Space Administration (NASA) of the United States and the National Space Development Agency (NASDA) of Japan. In addition to the spaceborne assets, international and domestic partners will provide ground-based resources for validating the satellite observations and retrievals. This paper describes the validation effort of Global Precipitation Measurement to provide quantitative estimates on the errors of the GPM satellite retrievals. The GPM validation approach will build upon the research experience of the Tropical Rainfall Measuring Mission (TRMM) retrieval comparisons and its validation program. The GPM ground validation program will employ instrumentation, physical infrastructure, and research capabilities at Supersites located in important meteorological regimes of the globe. NASA will provide two Supersites, one in a tropical oceanic and the other in a mid-latitude continental regime. GPM international partners will provide Supersites for other important regimes. Those objectives or regimes not addressed by Supersites will be covered through focused field experiments. This paper describes the specific errors that GPM ground validation will address, quantify, and relate to the GPM satellite physical retrievals. GPM will attempt to identify the source of errors within retrievals including those of instrument calibration, retrieval physical assumptions, and algorithm applicability. With the identification of error sources, improvements will be made to the respective calibration

  14. Performance-Based Measurement: Action for Organizations and HPT Accountability

    ERIC Educational Resources Information Center

    Larbi-Apau, Josephine A.; Moseley, James L.

    2010-01-01

    Basic measurements and applications of six selected general but critical operational performance-based indicators--effectiveness, efficiency, productivity, profitability, return on investment, and benefit-cost ratio--are presented. With each measurement, goals and potential impact are explored. Errors, risks, limitations to measurements, and a…

  15. Patient motion tracking in the presence of measurement errors.

    PubMed

    Haidegger, Tamás; Benyó, Zoltán; Kazanzides, Peter

    2009-01-01

    The primary aim of computer-integrated surgical systems is to provide physicians with superior surgical tools for better patient outcome. Robotic technology is capable of both minimally invasive surgery and microsurgery, offering remarkable advantages for the surgeon and the patient. Current systems allow for sub-millimeter intraoperative spatial positioning; however, certain limitations still remain. Measurement noise and unintended changes in the operating room environment can result in major errors. Positioning errors are a significant danger to patients in procedures involving robots and other automated devices. We have developed a new robotic system at the Johns Hopkins University to support cranial drilling in neurosurgery procedures. The robot provides advanced visualization and safety features. The generic algorithm described in this paper allows for automated compensation of patient motion through optical tracking and Kalman filtering. When applied to the neurosurgery setup, preliminary results show that it is possible to identify patient motion within 700 ms and apply the appropriate compensation with an average of 1.24 mm positioning error after 2 s of setup time.
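
    For context, a generic one-dimensional constant-velocity Kalman filter of the kind used to smooth noisy optical-tracking positions is sketched below; all parameters are illustrative, and the paper's actual compensation algorithm is not reproduced.

        import numpy as np

        def kalman_track(zs, dt=0.02, q=1e-3, r=0.25):
            """1-D constant-velocity Kalman filter over noisy position measurements zs."""
            F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition (position, velocity)
            H = np.array([[1.0, 0.0]])                  # only position is measured
            Q = q * np.array([[dt**4 / 4, dt**3 / 2],   # process noise for constant velocity
                              [dt**3 / 2, dt**2]])
            R = np.array([[r]])                         # measurement noise variance
            x, P, out = np.zeros((2, 1)), np.eye(2), []
            for z in zs:
                x, P = F @ x, F @ P @ F.T + Q                     # predict
                K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
                x = x + K @ (np.array([[z]]) - H @ x)             # update with measurement
                P = (np.eye(2) - K @ H) @ P
                out.append(float(x[0, 0]))
            return out

        # Illustrative use: smooth a noisy sinusoidal 'patient motion' trace.
        t = np.arange(200) * 0.02
        zs = np.sin(0.5 * np.pi * t) + np.random.default_rng(3).normal(0, 0.5, t.size)
        smoothed = kalman_track(zs)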

  16. Proportional Hazards Model with Covariate Measurement Error and Instrumental Variables

    PubMed Central

    Song, Xiao; Wang, Ching-Yun

    2014-01-01

    In biomedical studies, covariates with measurement error may occur in survival data. Existing approaches mostly require certain replications on the error-contaminated covariates, which may not be available in the data. In this paper, we develop a simple nonparametric correction approach for estimation of the regression parameters in the proportional hazards model using a subset of the sample where instrumental variables are observed. The instrumental variables are related to the covariates through a general nonparametric model, and no distributional assumptions are placed on the error and the underlying true covariates. We further propose a novel generalized method of moments nonparametric correction estimator to improve the efficiency over the simple correction approach. The efficiency gain can be substantial when the calibration subsample is small compared to the whole sample. The estimators are shown to be consistent and asymptotically normal. Performance of the estimators is evaluated via simulation studies and by an application to data from an HIV clinical trial. Estimation of the baseline hazard function is not addressed. PMID:25663724

  17. Lidar Uncertainty Measurement Experiment (LUMEX) - Understanding Sampling Errors

    NASA Astrophysics Data System (ADS)

    Choukulkar, A.; Brewer, W. A.; Banta, R. M.; Hardesty, M.; Pichugina, Y.; Senff, Christoph; Sandberg, S.; Weickmann, A.; Carroll, B.; Delgado, R.; Muschinski, A.

    2016-06-01

    Coherent Doppler LIDAR (Light Detection and Ranging) has been widely used to provide measurements of several boundary layer parameters such as profiles of wind speed, wind direction, vertical velocity statistics, mixing layer heights and turbulent kinetic energy (TKE). An important aspect of providing this wide range of meteorological data is to properly characterize the uncertainty associated with these measurements. With the above intent in mind, the Lidar Uncertainty Measurement Experiment (LUMEX) was conducted at Erie, Colorado during the period June 23rd to July 13th, 2014. The major goals of this experiment were the following: (1) characterize sampling error for vertical velocity statistics; (2) analyze sensitivities of different Doppler lidar systems; (3) compare various single and dual Doppler retrieval techniques; (4) characterize the error of spatial representativeness for separation distances up to 3 km; and (5) validate turbulence analysis techniques and retrievals from Doppler lidars. This experiment brought together five Doppler lidars, both commercial and research grade, for a period of three weeks for a comprehensive intercomparison study. The Doppler lidars were deployed at the Boulder Atmospheric Observatory (BAO) site in Erie, site of a 300 m meteorological tower. This tower was instrumented with six sonic anemometers at levels from 50 m to 300 m with 50 m vertical spacing. A brief overview of the experiment outline and deployment will be presented. Results from the sampling error analysis and its implications on scanning strategy will be discussed.

  18. Propagation of radiosonde pressure sensor errors to ozonesonde measurements

    NASA Astrophysics Data System (ADS)

    Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.

    2013-08-01

    Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this, a total of 501 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2006 and 2013 from both historical and campaign-based intensive stations. Electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) and five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80 and RS92) are analyzed to determine the magnitude of the pressure offset and the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.7 hPa in the free troposphere, with nearly a quarter > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (98% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors in the 7-15 hPa layer (29-32 km), a region critical for detection of long-term O3 trends, can approach greater than ±10% (>25% of launches that reach 30 km exceed this threshold). Comparisons of total column O3 yield average differences of +1.6 DU (-1.1 to +4.9 DU, 10th to 90th percentiles) when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of +0.1 DU (-1.1 to +2.2 DU) when the O3 profile is integrated to 10 hPa with subsequent addition of the O3 climatology above 10 hPa. The RS92 radiosondes are clearly distinguishable

  19. Examples of Detecting Measurement Errors with the QCRad VAP

    SciTech Connect

    Shi, Yan; Long, Charles N.

    2005-07-30

    The QCRad VAP is being developed to assess the data quality for the ARM radiation data collected at the Extended and ARCS facilities. In this study, we processed one year of radiation data, chosen at random, for each of the twenty SGP Extended Facilities to aid in determining the user configurable limits for the SGP sites. By examining yearly summary plots of the radiation data and the various test limits, we can show that the QCRad VAP is effective in identifying and detecting many different types of measurement errors. Examples of the analysis results will be shown in this poster presentation.

  20. Examiner error in curriculum-based measurement of oral reading.

    PubMed

    Cummings, Kelli D; Biancarosa, Gina; Schaper, Andrew; Reed, Deborah K

    2014-08-01

    Although curriculum-based measures of oral reading (CBM-R) have strong technical adequacy, there is still reason to believe that student performance may be influenced by factors of the testing situation, such as errors examiners make in administering and scoring the test. This study examined the construct-irrelevant variance introduced by examiners using a cross-classified multilevel model. We sought to determine the extent of variance in student CBM-R scores attributable to examiners and, if present, the extent to which it was moderated by students' grade level and English learner (EL) status. Fit indices indicated that a cross-classified random effects model (CCREM) best fit the data, with measures nested within students, students nested within schools, and examiners crossing schools. Intraclass correlations of the CCREM revealed that roughly 16% of the variance in student CBM-R scores was associated with examiners. The remaining variance was associated with the measurement level (3.59%), with differences between students (75.23%), and with differences between schools (5.21%). Results were moderated by grade level but not by EL status. The discussion addresses the implications of this error for low-stakes and high-stakes decisions about students, teacher evaluation systems, and hypothesis testing in reading intervention research. PMID:25107409

  1. Measurements of Aperture Averaging on Bit-Error-Rate

    NASA Technical Reports Server (NTRS)

    Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert

    2005-01-01

    We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.

  2. A Bayesian Measurement Error Model for Misaligned Radiographic Data

    SciTech Connect

    Lennox, Kristin P.; Glascoe, Lee G.

    2013-09-06

    An understanding of the inherent variability in micro-computed tomography (micro-CT) data is essential to tasks such as statistical process control and the validation of radiographic simulation tools. The data present unique challenges to variability analysis due to the relatively low resolution of radiographs, and also due to minor variations from run to run which can result in misalignment or magnification changes between repeated measurements of a sample. Positioning changes artificially inflate the variability of the data in ways that mask true physical phenomena. We present a novel Bayesian nonparametric regression model that incorporates both additive and multiplicative measurement error in addition to heteroscedasticity to address this problem. We also use this model to assess the effects of sample thickness and sample position on measurement variability for an aluminum specimen. Supplementary materials for this article are available online.

  3. A Bayesian Measurement Error Model for Misaligned Radiographic Data

    DOE PAGES

    Lennox, Kristin P.; Glascoe, Lee G.

    2013-09-06

    An understanding of the inherent variability in micro-computed tomography (micro-CT) data is essential to tasks such as statistical process control and the validation of radiographic simulation tools. The data present unique challenges to variability analysis due to the relatively low resolution of radiographs, and also due to minor variations from run to run which can result in misalignment or magnification changes between repeated measurements of a sample. Positioning changes artificially inflate the variability of the data in ways that mask true physical phenomena. We present a novel Bayesian nonparametric regression model that incorporates both additive and multiplicative measurement error in addition to heteroscedasticity to address this problem. We also use this model to assess the effects of sample thickness and sample position on measurement variability for an aluminum specimen. Supplementary materials for this article are available online.

  4. Temperature-measurement errors with capsule-type resistance thermometers

    NASA Astrophysics Data System (ADS)

    Gaiser, C.; Fellmuth, B.

    2013-09-01

    Inspired by extensive discussions within the temperature-measurement community on unresolved discrepancies occurring in conjunction with the application of capsule-type resistance thermometers, PTB has performed a detailed theoretical and experimental treatment of this problem. The focus of this work lies on the investigation of errors caused by heat conduction via the measuring electrical leads, which causes a temperature difference between the sensor element and the body whose temperature has to be measured. In analogy to electrical networks, a model connecting thermal resistances and heat flows has been established to describe the thermal conditions within the thermometer. The model leads to the definition of new thermometer parameters, called thermal resistance and reduction factor, that have to be determined either by dedicated experiments or theoretical simulations.
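
    A heavily simplified illustration of the electrical-network analogy, under assumptions of our own: treat the sensor as sitting between the measured body and the ambient-temperature lead wires, so the steady-state error behaves like a resistive divider. The function and parameter names are hypothetical, and the 'reduction factor' here is only loosely analogous to the paper's parameter.

        def sensor_temperature(t_body, t_ambient, r_body, r_leads):
            """Steady-state sensor temperature with heat flowing body -> sensor -> leads -> ambient.

            As in a resistive divider, the error scales with r_body / (r_body + r_leads)."""
            reduction = r_body / (r_body + r_leads)    # loosely, a 'reduction factor'
            return t_body + reduction * (t_ambient - t_body)

        # Illustrative numbers only: cryogenic body, room-temperature lead environment.
        print(sensor_temperature(t_body=4.2, t_ambient=300.0, r_body=10.0, r_leads=10000.0))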

  5. Taking the Error Term of the Factor Model into Account: The Factor Score Predictor Interval

    ERIC Educational Resources Information Center

    Beauducel, Andre

    2013-01-01

    The problem of factor score indeterminacy implies that the factor and the error scores cannot be completely disentangled in the factor model. It is therefore proposed to compute Harman's factor score predictor that contains an additive combination of factor and error variance. This additive combination is discussed in the framework of classical…

  6. Plain film measurement error in acute displaced midshaft clavicle fractures

    PubMed Central

    Archer, Lori Anne; Hunt, Stephen; Squire, Daniel; Moores, Carl; Stone, Craig; O’Dea, Frank; Furey, Andrew

    2016-01-01

    Background Clavicle fractures are common and optimal treatment remains controversial. Recent literature suggests operative fixation of acute displaced mid-shaft clavicle fractures (DMCFs) shortened more than 2 cm improves outcomes. We aimed to identify correlation between plain film and computed tomography (CT) measurement of displacement and the inter- and intraobserver reliability of repeated radiographic measurements. Methods We obtained radiographs and CT scans of patients with acute DMCFs. Three orthopedic staff and 3 residents measured radiographic displacement at time zero and 2 weeks later. The CT measurements identified absolute shortening in 3 dimensions (by subtracting the length of the fractured from the intact clavicle). We then compared shortening measured on radiographs and shortening measured in 3 dimensions on CT. Interobserver and intraobserver reliability were calculated. Results We reviewed the fractures of 22 patients. Bland–Altman repeatability coefficient calculations indicated that radiograph and CT measurements of shortening could not be correlated owing to an unacceptable amount of measurement error (6 cm). Interobserver reliability for plain radiograph measurements was excellent (Cronbach α = 0.90). Likewise, intraobserver reliabilities for plain radiograph measurements as calculated with paired t tests indicated excellent correlation (p > 0.05 in all but 1 observer [p = 0.04]). Conclusion To establish shortening as an indication for DMCF fixation, reliable measurement tools are required. The low correlation between plain film and CT measurements we observed suggests further research is necessary to establish what imaging modality reliably predicts shortening. Our results indicate weak correlation between radiograph and CT measurement of acute DMCF shortening. PMID:27438054

  7. 50 CFR 660.509 - Accountability measures (season closures).

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 50 Wildlife and Fisheries 13 2012-10-01 2012-10-01 false Accountability measures (season closures... Coastal Pelagics Fisheries § 660.509 Accountability measures (season closures). (a) General rule. When the... until the beginning of the next fishing period or season. Regional Administrator shall announce in...

  8. 50 CFR 660.509 - Accountability measures (season closures).

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 13 2013-10-01 2013-10-01 false Accountability measures (season closures... Coastal Pelagics Fisheries § 660.509 Accountability measures (season closures). (a) General rule. When the... until the beginning of the next fishing period or season. Regional Administrator shall announce in...

  9. Effects of measurement error on estimating biological half-life

    SciTech Connect

    Caudill, S.P.; Pirkle, J.L.; Michalek, J.E. )

    1992-10-01

    Direct computation of the observed biological half-life of a toxic compound in a person can lead to an undefined estimate when subsequent concentration measurements are greater than or equal to previous measurements. The likelihood of such an occurrence depends upon the length of time between measurements and the variance (intra-subject biological and inter-sample analytical) associated with the measurements. If the compound is lipophilic the subject's percentage of body fat at the times of measurement can also affect this likelihood. We present formulas for computing a model-predicted half-life estimate and its variance; and we derive expressions for the effect of sample size, measurement error, time between measurements, and any relevant covariates on the variability in model-predicted half-life estimates. We also use statistical modeling to estimate the probability of obtaining an undefined half-life estimate and to compute the expected number of undefined half-life estimates for a sample from a study population. Finally, we illustrate our methods using data from a study of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) exposure among 36 members of Operation Ranch Hand, the Air Force unit responsible for the aerial spraying of Agent Orange in Vietnam.
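
    The direct two-point estimate the abstract refers to follows from exponential decay, c(t) = c(0) e^{-kt}, giving t_half = (t2 - t1) ln 2 / ln(c1 / c2). A minimal sketch (ours, not the authors' model-based estimator) shows how measurement error renders the estimate undefined whenever the later concentration is not lower:

        import math

        def observed_half_life(t1, c1, t2, c2):
            """Two-point half-life; undefined when the level fails to decline."""
            if c2 >= c1:
                return None                   # undefined: later measurement not lower
            return (t2 - t1) * math.log(2) / math.log(c1 / c2)

        print(observed_half_life(0.0, 10.0, 5.0, 6.3))    # ~7.5 time units
        print(observed_half_life(0.0, 10.0, 5.0, 10.4))   # None: noise pushed c2 above c1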

  10. Accounting for systematic errors in bioluminescence imaging to improve quantitative accuracy

    NASA Astrophysics Data System (ADS)

    Taylor, Shelley L.; Perry, Tracey A.; Styles, Iain B.; Cobbold, Mark; Dehghani, Hamid

    2015-07-01

    Bioluminescence imaging (BLI) is a widely used pre-clinical imaging technique, but there are a number of limitations to its quantitative accuracy. This work uses an animal model to demonstrate some significant limitations of BLI and presents processing methods and algorithms which overcome these limitations, increasing the quantitative accuracy of the technique. The position of the imaging subject and source depth are both shown to affect the measured luminescence intensity. Free Space Modelling is used to eliminate the systematic error due to the camera/subject geometry, removing the dependence of luminescence intensity on animal position. Bioluminescence tomography (BLT) is then used to provide additional information about the depth and intensity of the source. A substantial limitation in the number of sources identified using BLI is also presented. It is shown that when a given source is at a significant depth, it can appear as multiple sources when imaged using BLI, while the use of BLT recovers the true number of sources present.

  11. Accounting for spatial correlations of the observation errors with Ensemble Kalman filters

    NASA Astrophysics Data System (ADS)

    Cosme, Emmanuel; Jean-Michel, Brankart; Clément, Ubelmann; Jacques, Verron; Pierre, Brasseur

    2013-04-01

    The standard Kalman filter observational update requires the inversion of the innovation error covariance matrix, which is often impractical. Most implementations of the Ensemble Kalman filter circumvent this difficulty by assuming the diagonality of the observation error covariance matrix, which makes the analysis calculation numerically tractable. However, when observation errors are actually correlated spatially, this hypothesis leads to an inappropriate use of observations. Experiments show that the analysis state error variances yielded by the Ensemble Kalman filter can be severely underestimated. In this presentation, we describe a parameterization of the observation error covariance matrix which preserves its diagonal shape but represents a simple first-order autoregressive correlation structure of the observation errors. This parameterization is based upon an augmentation of the observation vector with gradients of observations. Numerical applications to ocean altimetry show the detrimental effects of specifying a diagonal matrix when observation errors are correlated, and how the new parameterization not only removes the detrimental effects of correlations but also makes use of these correlations to improve the data assimilation products.
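
    Not the authors' exact parameterization, but a sketch of the idea behind it: observation errors with a first-order autoregressive correlation structure are whitened by a gradient-like transform of the observation vector, after which a diagonal observation error covariance is again appropriate. The correlation, variance, and sample sizes below are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        rho, sigma, n, reps = 0.8, 1.0, 200, 2000

        # AR(1)-correlated observation errors: e_i = rho * e_{i-1} + w_i.
        e = np.zeros((reps, n))
        e[:, 0] = rng.normal(0, sigma, reps)
        for i in range(1, n):
            e[:, i] = rho * e[:, i - 1] + rng.normal(0, sigma * np.sqrt(1 - rho**2), reps)

        # Gradient-like transform d_i = e_i - rho * e_{i-1} whitens the errors:
        d = e[:, 1:] - rho * e[:, :-1]
        lag1 = np.mean(d[:, 1:] * d[:, :-1]) / np.var(d)
        print(round(float(lag1), 3))   # ~0, so a diagonal error covariance is valid again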

  12. Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements

    NASA Technical Reports Server (NTRS)

    Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.

    2014-01-01

    This paper discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 4x2 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from cubesats, unmanned air vehicles (UAV), and commercial aircraft.
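
    Error vector magnitude itself is straightforward to compute from received and ideal constellation symbols; a generic sketch (not tied to the paper's test set-up, and with illustrative noise):

        import numpy as np

        def evm_percent(received, ideal):
            """RMS error vector magnitude as a percentage of the RMS reference amplitude."""
            received, ideal = np.asarray(received), np.asarray(ideal)
            err = received - ideal
            return 100.0 * np.sqrt(np.mean(np.abs(err) ** 2) / np.mean(np.abs(ideal) ** 2))

        # Illustrative QPSK constellation with a little additive complex noise:
        rng = np.random.default_rng(1)
        ideal = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
        rx = ideal + 0.05 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
        print(evm_percent(rx, ideal))   # a few percent for this noise level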

  13. Optical refractive synchronization: bit error rate analysis and measurement

    NASA Astrophysics Data System (ADS)

    Palmer, James R.

    1999-11-01

    The direction of this paper is to describe the analytical tools and measurement techniques used at SilkRoad to evaluate the optical and electrical signals used in Optical Refractive Synchronization for transporting SONET signals across the transmission fiber. Fundamentally, this paper outlines how SilkRoad, Inc., transports a multiplicity of SONET signals across a distance of fiber > 100 km without amplification or regeneration of the optical signal, i.e., one laser over one fiber. Test and measurement data are presented to reflect how the SilkRoad technique of Optical Refractive Synchronization is employed to provide a zero bit error rate for transmission of multiple OC-12 and OC-48 SONET signals that are sent over a fiber optical cable which is > 100 km. The recovery and transformation modules are described for the modification and transportation of these SONET signals.

  14. Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements

    NASA Technical Reports Server (NTRS)

    Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.

    2014-01-01

    This presentation discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 2x4 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from CubeSats, unmanned air vehicles (UAV), and commercial aircraft.

  15. Influence of video compression on the measurement error of the television system

    NASA Astrophysics Data System (ADS)

    Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.

    2015-05-01

    Video data require a very large memory capacity. Finding the optimal ratio of quality to data volume in video encoding is one of the most pressing problems, due to the urgent need to transfer large amounts of video over various networks. The technology of digital TV signal compression reduces the amount of data used for video stream representation. Video compression effectively reduces the stream required for transmission and storage. It is important to take into account the uncertainties caused by compression of the video signal when television measuring systems are used. There are many digital compression methods. The aim of the proposed work is research into the influence of video compression on the measurement error in television systems. Measurement error of the object parameter is the main characteristic of television measuring systems. Accuracy characterizes the difference between the measured value and the actual parameter value. Errors caused by the optical system are one source of error in television system measurements; the method of processing the received video signal is another. In the case of compression with a constant data stream rate, the presence of errors leads to large distortions; in the case of constant quality, the presence of errors increases the amount of data required to transmit or record an image frame. The purpose of intra-coding is to reduce the spatial redundancy within a frame (or field) of the television image. This redundancy is caused by the strong correlation between the elements of the image. It is possible to convert an array of image samples into a matrix of coefficients that are not correlated with each other if one can find a corresponding orthogonal transformation. It is possible to apply entropy coding to these uncorrelated coefficients and achieve a reduction in the digital stream. One can select a transformation such that most of the matrix coefficients will be almost zero for typical images. Excluding these zero coefficients also
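
    The decorrelating transform alluded to is, in practice, typically the discrete cosine transform; a small sketch of the idea (scipy's DCT standing in for whatever transform a given codec uses) shows that a smooth block is represented by a handful of coefficients:

        import numpy as np
        from scipy.fft import dctn, idctn

        block = np.add.outer(np.arange(8.0), np.arange(8.0))   # smooth 8x8 image block
        coef = dctn(block, norm='ortho')                       # decorrelating transform
        thresh = np.sort(np.abs(coef), axis=None)[-6]          # keep the ~6 largest coefficients
        approx = idctn(np.where(np.abs(coef) >= thresh, coef, 0.0), norm='ortho')
        print(float(np.abs(approx - block).max()))             # small error from few coefficients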

  16. Nonparametric Signal Extraction and Measurement Error in the Analysis of Electroencephalographic Activity During Sleep.

    PubMed

    Crainiceanu, Ciprian M; Caffo, Brian S; Di, Chong-Zhi; Punjabi, Naresh M

    2009-06-01

    We introduce methods for signal and associated variability estimation based on hierarchical nonparametric smoothing with application to the Sleep Heart Health Study (SHHS). SHHS is the largest electroencephalographic (EEG) collection of sleep-related data, which contains, at each visit, two quasi-continuous EEG signals for each subject. The signal features extracted from EEG data are then used in second level analyses to investigate the relation between health, behavioral, or biometric outcomes and sleep. Using subject specific signals estimated with known variability in a second level regression becomes a nonstandard measurement error problem. We propose and implement methods that take into account cross-sectional and longitudinal measurement error. The research presented here forms the basis for EEG signal processing for the SHHS.

  17. Measure against Measure: Responsibility versus Accountability in Education

    ERIC Educational Resources Information Center

    Senechal, Diana

    2013-01-01

    In education policy, practice, and discussion, we find ourselves caught between responsibility--fidelity to one's experience, conscience, and discernment--and a narrow kind of accountability. In order to preserve integrity, we (educators and leaders) must maintain independence of thought while skillfully articulating our work to the outside world.…

  18. Horizon sensor errors calculated by computer models compared with errors measured in orbit

    NASA Technical Reports Server (NTRS)

    Ward, K. A.; Hogan, R.; Andary, J.

    1982-01-01

    Using a computer program to model the earth's horizon and to duplicate the signal processing procedure employed by the ESA (Earth Sensor Assembly), errors due to radiance variation have been computed for a particular time of the year. Errors actually occurring in flight at the same time of year are inferred from integrated rate gyro data for a satellite of the TIROS series of NASA weather satellites (NOAA-A). The predicted performance is compared with actual flight history.

  19. Technical Note: Simulation of 4DCT tumor motion measurement errors

    PubMed Central

    Dou, Tai H.; Thomas, David H.; O’Connell, Dylan; Bradley, Jeffrey D.; Lamb, James M.; Low, Daniel A.

    2015-01-01

    Purpose: To determine if and by how much the commercial 4DCT protocols under- and overestimate tumor breathing motion. Methods: 1D simulations were conducted that modeled a 16-slice CT scanner and tumors moving proportionally to breathing amplitude. External breathing surrogate traces of at least 5-min duration for 50 patients were used. Breathing trace amplitudes were converted to motion by relating the nominal tumor motion to the 90th percentile breathing amplitude, reflecting motion defined by the more recent 5DCT approach. Based on clinical low-pitch helical CT acquisition, the CT detector moved according to its velocity while the tumor moved according to the breathing trace. When the CT scanner overlapped the tumor, the overlapping slices were identified as having imaged the tumor. This process was repeated starting at each successive 0.1 s time bin in the breathing trace until there was insufficient breathing trace to complete the simulation. The tumor size was subtracted from the distance between the most superior and inferior tumor positions to determine the measured tumor motion for that specific simulation. The effect of scanning parameter variation was evaluated using two commercial 4DCT protocols with different pitch values. Because clinical 4DCT scan sessions would yield a single tumor motion displacement measurement for each patient, errors in the tumor motion measurement were considered systematic. The mean of the largest 5% and smallest 5% of the measured motions was selected to identify over- and underdetermined motion amplitudes, respectively. The process was repeated for tumor motions of 1–4 cm in 1 cm increments and for tumor sizes of 1–4 cm in 1 cm increments. Results: In the examined patient cohort, simulation using a pitch of 0.06 showed that 30% of the patients exhibited a 5% chance of mean breathing amplitude overestimations of 47%, while 30% showed a 5% chance of mean breathing amplitude underestimations of 36%; with a separate simulation
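
    A toy one-dimensional version of the described simulation illustrates the mechanics (all numbers, and the purely sinusoidal 'breathing trace', are ours; a pure sinusoid can only underestimate, unlike the variable clinical traces): the scan aperture sweeps past while the tumor oscillates, and the measured motion is the imaged superior-inferior extent minus the tumor size.

        import numpy as np

        def simulated_motion(t_start=0.0, amplitude=2.0, size=3.0, period=4.0,
                             scan_speed=4.0, beam_width=1.0):
            """Toy 1-D helical acquisition: motion (cm) a scan starting at t_start reports."""
            t = t_start + np.arange(0.0, 40.0, 0.1)          # 0.1 s time bins
            tumor_center = amplitude * np.sin(2 * np.pi * t / period)
            beam = scan_speed * (t - t[0]) - 20.0            # aperture sweeps past the tumor
            hit = np.abs(beam - tumor_center) <= (size + beam_width) / 2  # imaged time bins
            if not hit.any():
                return 0.0
            extent = (tumor_center[hit].max() + size / 2) - (tumor_center[hit].min() - size / 2)
            return extent - size                             # imaged extent minus tumor size

        # Restarting at successive 0.1 s bins mimics repeating the simulation per phase:
        est = [simulated_motion(t_start=s) for s in np.arange(0.0, 4.0, 0.1)]
        print(min(est), max(est))   # spread of systematic error; true peak-to-peak is 4 cm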

  20. Measurement error causes scale-dependent threshold erosion of biological signals in animal movement data.

    PubMed

    Bradshaw, Corey J A; Sims, David W; Hays, Graeme C

    2007-03-01

    Recent advances in telemetry technology have created a wealth of tracking data available for many animal species moving over spatial scales from tens of meters to tens of thousands of kilometers. Increasingly, such data sets are being used for quantitative movement analyses aimed at extracting fundamental biological signals such as optimal searching behavior and scale-dependent foraging decisions. We show here that the location error inherent in various tracking technologies reduces the ability to detect patterns of behavior within movements. Our analyses endeavored to set out a series of initial ground rules for ecologists to help ensure that sampling noise is not misinterpreted as a real biological signal. We simulated animal movement tracks using specialized random walks known as Lévy flights at three spatial scales of investigation: 100-km, 10-km, and 1-km maximum daily step lengths. The locations generated in the simulations were then blurred using known error distributions associated with commonly applied tracking methods: the Global Positioning System (GPS), Argos polar-orbiting satellites, and light-level geolocation. Deviations from the idealized Lévy flight pattern were assessed for each track after incrementing levels of location error were applied at each spatial scale, with additional assessments of the effect of error on scale-dependent movement patterns measured using fractal mean dimension and first-passage time (FPT) analyses. The accuracy of parameter estimation (Lévy mu, fractal mean D, and variance in FPT) declined precipitously at threshold errors relative to each spatial scale. At 100-km maximum daily step lengths, error standard deviations of ≥ 10 km seriously eroded the biological patterns evident in the simulated tracks, with analogous thresholds at the 10-km and 1-km scales (error SD ≥ 1.3 km and 0.07 km, respectively). Temporal subsampling of the simulated tracks maintained some elements of the biological signals depending on
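
    A condensed version of the experiment's first step (our own parameters, two dimensions, and a maximum-likelihood exponent fit in place of the paper's full analysis suite):

        import numpy as np

        rng = np.random.default_rng(42)

        def levy_track(n=5000, mu=2.0, lmin=1.0):
            """2-D random walk with power-law step lengths P(l) ~ l^-mu (a Levy flight)."""
            steps = lmin * rng.random(n) ** (-1.0 / (mu - 1.0))
            theta = rng.uniform(0, 2 * np.pi, n)
            return np.cumsum(steps * np.cos(theta)), np.cumsum(steps * np.sin(theta))

        def fit_mu(x, y, lmin=1.0):
            """Maximum-likelihood power-law exponent from step lengths above lmin."""
            l = np.hypot(np.diff(x), np.diff(y))
            l = l[l >= lmin]
            return 1.0 + l.size / np.sum(np.log(l / lmin))

        x, y = levy_track()
        print(fit_mu(x, y))                      # close to the true mu = 2.0
        sd = 5.0                                 # location-error SD, comparable to most steps
        xe = x + rng.normal(0, sd, x.size)
        ye = y + rng.normal(0, sd, y.size)
        print(fit_mu(xe, ye))                    # biased: error has eroded the Levy signal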

  1. Measurement error causes scale-dependent threshold erosion of biological signals in animal movement data.

    PubMed

    Bradshaw, Corey J A; Sims, David W; Hays, Graeme C

    2007-03-01

    Recent advances in telemetry technology have created a wealth of tracking data available for many animal species moving over spatial scales from tens of meters to tens of thousands of kilometers. Increasingly, such data sets are being used for quantitative movement analyses aimed at extracting fundamental biological signals such as optimal searching behavior and scale-dependent foraging decisions. We show here that the location error inherent in various tracking technologies reduces the ability to detect patterns of behavior within movements. Our analyses endeavored to set out a series of initial ground rules for ecologists to help ensure that sampling noise is not misinterpreted as a real biological signal. We simulated animal movement tracks using specialized random walks known as Lévy flights at three spatial scales of investigation: 100-km, 10-km, and 1-km maximum daily step lengths. The locations generated in the simulations were then blurred using known error distributions associated with commonly applied tracking methods: the Global Positioning System (GPS), Argos polar-orbiting satellites, and light-level geolocation. Deviations from the idealized Lévy flight pattern were assessed for each track after incrementing levels of location error were applied at each spatial scale, with additional assessments of the effect of error on scale-dependent movement patterns measured using fractal mean dimension and first-passage time (FPT) analyses. The accuracy of parameter estimation (Lévy mu, fractal mean D, and variance in FPT) declined precipitously at threshold errors relative to each spatial scale. At 100-km maximum daily step lengths, error standard deviations of ≥ 10 km seriously eroded the biological patterns evident in the simulated tracks, with analogous thresholds at the 10-km and 1-km scales (error SD ≥ 1.3 km and 0.07 km, respectively). Temporal subsampling of the simulated tracks maintained some elements of the biological signals depending on

  2. Comparing Different Accounts of Inversion Errors in Children's Non-Subject Wh-Questions: "What Experimental Data Can Tell Us?"

    ERIC Educational Resources Information Center

    Ambridge, Ben; Rowland, Caroline F.; Theakston, Anna L.; Tomasello, Michael

    2006-01-01

    This study investigated different accounts of children's acquisition of non-subject wh-questions. Questions using each of 4 wh-words ("what," "who," "how" and "why"), and 3 auxiliaries (BE, DO and CAN) in 3sg and 3pl form were elicited from 28 children aged 3;6-4;6. Rates of non-inversion error ("Who she is hitting?") were found not to differ by…

  3. Exploring Measurement Error with Cookies: A Real and Virtual Approach via Interactive Excel

    ERIC Educational Resources Information Center

    Sinex, Scott A; Gage, Barbara A.; Beck, Peggy J.

    2007-01-01

    A simple, guided-inquiry investigation using stacked sandwich cookies is employed to develop a simple linear mathematical model and to explore measurement error by incorporating errors as part of the investigation. Both random and systematic errors are presented. The model and errors are then investigated further by engaging with an interactive…

  4. Error analysis and corrections to pupil diameter measurements with Langley Research Center's oculometer

    NASA Technical Reports Server (NTRS)

    Fulton, C. L.; Harris, R. L., Jr.

    1980-01-01

    Factors that can affect oculometer measurements of pupil diameter are: horizontal (azimuth) and vertical (elevation) viewing angle of the pilot; refraction of the eye and cornea; changes in distance of eye to camera; illumination intensity of light on the eye; counting sensitivity of scan lines used to measure diameter; and output voltage. To estimate the accuracy of the measurements, an artificial eye was designed and a series of runs performed with the oculometer system. When refraction effects are included, results show that pupil diameter is a parabolic function of the azimuth angle, similar to the cosine function predicted by theory; this error can be accounted for by using a correction equation, reducing the error from 6% to 1.5% of the actual diameter. Elevation angle and illumination effects were found to be negligible. The effects of counting sensitivity and output voltage can be calculated directly from system documentation. The overall accuracy of the unmodified system is about 6%. After correcting for the azimuth angle errors, the overall accuracy is approximately 2%.

  5. #2 - An Empirical Assessment of Exposure Measurement Error and Effect Attenuation in Bi-Pollutant Epidemiologic Models

    EPA Science Inventory

    Background: differing degrees of exposure error across pollutants; previous focus on quantifying and accounting for exposure error in single-pollutant models; this work examines exposure errors for multiple pollutants and provides insights on the potential for bias and attenuation...

  6. Three-way partitioning of sea surface temperature measurement error

    NASA Technical Reports Server (NTRS)

    Chelton, D.

    1983-01-01

    Given any set of three 2-degree binned anomaly sea surface temperature (SST) data sets from three different sensors, estimates of the mean square error of each sensor estimate are made. The above formalism is performed on every possible triplet of sensors. A separate table of error estimates is then constructed for each sensor.
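
    The partitioning described matches the classic 'three-cornered hat' construction: with three collocated, independent estimates of the same field, the pairwise difference variances determine the three error variances. A minimal sketch under that independence assumption, with synthetic data in place of the SST archives:

        import numpy as np

        rng = np.random.default_rng(7)
        truth = rng.normal(0, 1.0, 10_000)                 # common SST anomaly signal
        obs = [truth + rng.normal(0, s, truth.size) for s in (0.2, 0.4, 0.6)]

        # Var(x_i - x_j) = e_i + e_j when sensor errors are mutually independent.
        v12 = np.var(obs[0] - obs[1])
        v13 = np.var(obs[0] - obs[2])
        v23 = np.var(obs[1] - obs[2])
        e1 = (v12 + v13 - v23) / 2
        e2 = (v12 + v23 - v13) / 2
        e3 = (v13 + v23 - v12) / 2
        print(np.sqrt([e1, e2, e3]))                       # ~ [0.2, 0.4, 0.6]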

  7. 50 CFR 648.24 - Fishery closures and accountability measures.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Management Measures for the Atlantic Mackerel, Squid, and Butterfish Fisheries § 648.24 Fishery closures and accountability measures. (a) Fishery closure procedures—(1) Longfin squid. NMFS shall close the directed fishery in the EEZ for longfin squid when the Regional Administrator projects that 90 percent of the...

  8. Accounting for People: Can Business Measure Human Value?

    ERIC Educational Resources Information Center

    Workforce Economics, 1997

    1997-01-01

    Traditional business practice undervalues human capital, and most conventional accounting models reflect this inclination. The argument for more explicit measurements of human resources is simple: Improved measurement of human resources will lead to more rational and productive choices about managing human resources. The business community is…

  9. 50 CFR 648.24 - Fishery closures and accountability measures.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Fishery closures and accountability measures. 648.24 Section 648.24 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE NORTHEASTERN UNITED STATES Management Measures for the Atlantic...

  10. 50 CFR 648.24 - Fishery closures and accountability measures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Fishery closures and accountability measures. 648.24 Section 648.24 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE NORTHEASTERN UNITED STATES Management Measures for the Atlantic...

  11. Measurement error affects risk estimates for recruitment to the Hudson River stock of striped bass.

    PubMed

    Dunning, Dennis J; Ross, Quentin E; Munch, Stephan B; Ginzburg, Lev R

    2002-06-01

    We examined the consequences of ignoring the distinction between measurement error and natural variability in an assessment of risk to the Hudson River stock of striped bass posed by entrainment at the Bowline Point, Indian Point, and Roseton power plants. Risk was defined as the probability that recruitment of age-1+ striped bass would decline by 80% or more, relative to the equilibrium value, at least once during the time periods examined (1, 5, 10, and 15 years). Measurement error, estimated using two abundance indices from independent beach seine surveys conducted on the Hudson River, accounted for 50% of the variability in one index and 56% of the variability in the other. If a measurement error of 50% was ignored and all of the variability in abundance was attributed to natural causes, the risk that recruitment of age-1+ striped bass would decline by 80% or more after 15 years was 0.308 at the current level of entrainment mortality (11%). However, the risk decreased almost tenfold (0.032) if a measurement error of 50% was considered. The change in risk attributable to decreasing the entrainment mortality rate from 11 to 0% was very small (0.009) and similar in magnitude to the change in risk associated with an action proposed in Amendment #5 to the Interstate Fishery Management Plan for Atlantic striped bass (0.006)--an increase in the instantaneous fishing mortality rate from 0.33 to 0.4. The proposed increase in fishing mortality was not considered an adverse environmental impact, which suggests that potentially costly efforts to reduce entrainment mortality on the Hudson River stock of striped bass are not warranted.

  12. Moving Beyond "Good/Bad" Student Accountability Measures: Multiple Perspectives of Accountability.

    ERIC Educational Resources Information Center

    Capper, Colleen A.; Hafner, Madeline M.; Keyes, Maureen W.

    2001-01-01

    Examines three student accountability measures (standardized tests, performance-based assessment, and structural assessment) through two different theoretical perspectives: structural functionalism and feminist poststructuralism. Educators can use various kinds of assessments in ways that maintain the status quo or support equity and justice for…

  13. On the reliability and standard errors of measurement of contrast measures from the D-KEFS.

    PubMed

    Crawford, John R; Sutherland, David; Garthwaite, Paul H

    2008-11-01

    A formula for the reliability of difference scores was used to estimate the reliability of Delis-Kaplan Executive Function System (D-KEFS; Delis et al., 2001) contrast measures from the reliabilities and correlations of their components. In turn these reliabilities were used to calculate standard errors of measurement. The majority of contrast measures had low reliabilities: of the 51 reliability coefficients calculated in the present study, none exceeded 0.7 and hence all failed to meet any of the criteria for acceptable reliability proposed by various experts in psychological measurement. The mean reliability of the contrast scores was 0.27, the median reliability was 0.30. The standard errors of measurement were large and, in many cases, equaled or were only marginally smaller than the contrast scores' standard deviations. The results suggest that, at present, D-KEFS contrast measures should not be used in neuropsychological decision making.
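
    The reliability of a difference score, which is what a contrast measure is, follows from the component reliabilities and their intercorrelation; a sketch using the standard classical-test-theory expressions (illustrative inputs, equal component variances assumed):

        import math

        def diff_reliability(rxx, ryy, rxy):
            """Reliability of X - Y from component reliabilities and their correlation
            (classical formula, equal component variances assumed)."""
            return ((rxx + ryy) / 2 - rxy) / (1 - rxy)

        def sem(sd, reliability):
            """Standard error of measurement for a score with the given SD and reliability."""
            return sd * math.sqrt(1 - reliability)

        r = diff_reliability(rxx=0.80, ryy=0.75, rxy=0.60)   # illustrative values
        print(round(r, 2), round(sem(3.0, r), 2))            # 0.44, SEM 2.25 -- near the SD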

  14. Large-scale spatial angle measurement and the pointing error analysis

    NASA Astrophysics Data System (ADS)

    Xiao, Wen-jian; Chen, Zhi-bin; Ma, Dong-xi; Zhang, Yong; Liu, Xian-hong; Qin, Meng-ze

    2016-05-01

    A large-scale spatial angle measurement method is proposed based on an inertial reference. A common measurement reference is established in inertial space, and the spatial vector coordinates of each measured axis in inertial space are measured by using autocollimation tracking and inertial measurement technology. According to the spatial coordinates of each test vector axis, the measurement of large-scale spatial angles is easily realized. The pointing error of the tracking device based on the two mirrors in the measurement system is studied, and the influence of different installation errors on the pointing error is analyzed. This research can lay a foundation for error allocation, calibration, and compensation for the measurement system.

  15. The simplified version of Boyle's Law leads to errors in the measurement of thoracic gas volume.

    PubMed

    Coates, A L; Desmond, K J; Demizio, D L

    1995-09-01

    When using Boyle's Law for thoracic gas volume (Vtg) measurement, it is generally assumed that the alveolar pressure (Palv) does not differ from barometric pressure (Pbar) at the start of rarefaction and compression and that the product of the change in volume and pressure (ΔP × ΔV) is negligibly small. In a gentle panting maneuver in which the difference between Palv and Pbar is small, errors introduced by these assumptions are likely to be small; however, this is not the case when Vtg is measured using a single vigorous inspiratory effort. Discrepancies in the Vtg between the "complex" version of Boyle's Law, which does not ignore ΔP × ΔV and accounts for large swings in Palv, and the "simplified" version, during both a panting maneuver and a single inspiratory effort, were calculated for normal control subjects and patients with cystic fibrosis or asthma. Defining the Vtg from the complete version as "correct," the errors introduced by the simplified version ranged from -3 to +3% for the panting maneuver whereas they ranged from 2 to 9% for the inspiratory maneuver. Using the simplified equation, the Vtg for the inspiratory maneuver was 0.135 ± 0.237 L greater (p < 0.02) than for the panting maneuver. This discrepancy disappeared when the complete equation was used. While the errors introduced by the use of the simplified version of Boyle's Law are small, they are systematic and unnecessary. PMID:7663807
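
    Written out explicitly (a reconstruction of the standard algebra, not the paper's notation), applying Boyle's Law exactly to the maneuver gives

        P_{alv} V_{tg} = (P_{alv} + \Delta P)(V_{tg} + \Delta V)
        \quad\Rightarrow\quad
        V_{tg} = -\,(P_{alv} + \Delta P)\,\frac{\Delta V}{\Delta P},

    whereas the simplified version sets P_{alv} = P_{bar} - P_{H_2O} and drops the \Delta P \, \Delta V term, giving V_{tg} \approx -(P_{bar} - P_{H_2O})\,\Delta V / \Delta P. The neglected terms grow with the swing in alveolar pressure, which is why a single vigorous inspiratory effort produces larger, systematic errors than gentle panting.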

  16. Electrochemically-Modulated Separations for Material Accountability Measurements

    SciTech Connect

    Arrigo, Leah M.; Liezers, Martin; Douglas, Matthew; Green, Michael A.; Farmer, Orville T.; Schwantes, Jon M.; Peper, Shane M.; Duckworth, Douglas C.

    2010-05-07

    The Safeguards community recognizes that an accurate and timely measurement of accountable material mass at the head-end of the facility is critical to a modern materials control and accountability program at fuel reprocessing plants. For material accountancy, it is critical to detect both acute and chronic diversions of nuclear materials. Therefore, both on-line nondestructive assay (NDA) and destructive analysis (DA) approaches are desirable. Current methods for DA involve grab sampling and laboratory-based column extractions that are costly, hazardous, and time consuming. Direct on-line gamma measurements of Pu, while desirable, are not possible due to contributions from other actinide and fission products. A technology for simple, online separation of targeted materials would benefit both DA and NDA measurements.

  17. NDA accountability measurement needs in the DOE plutonium community

    SciTech Connect

    Ostenak, C.A.

    1988-08-31

    The purpose of this first ATEX report is to identify the twenty most vital nondestructive assay (NDA) accountability measurement needs in the DOE plutonium community to DOE and to contractor safeguards R&D managers in order to promote resolution of these needs. During 1987, ATEX identified sixty NDA accountability measurement problems, many of which were common to each of the DOE sites considered. These sixty problems were combined into twenty NDA accountability measurement needs that exist within five major areas: NDA "standards" representing various nuclear materials and matrix compositions; impure nuclear materials compounds, residues, and wastes; product-grade nuclear materials; nuclear materials process holdup and in-process inventory; and nuclear materials item control and verification. 2 figs.

  18. Error analysis of Raman differential absorption lidar ozone measurements in ice clouds.

    PubMed

    Reichardt, J

    2000-11-20

    A formalism for the error treatment of lidar ozone measurements with the Raman differential absorption lidar technique is presented. In the presence of clouds, wavelength-dependent multiple scattering and cloud-particle extinction are the main sources of systematic errors in ozone measurements and necessitate a correction of the measured ozone profiles. Model calculations are performed to describe the influence of cirrus and polar stratospheric clouds on the ozone. It is found that it is sufficient to account for cloud-particle scattering and Rayleigh scattering in and above the cloud; boundary-layer aerosols and the atmospheric column below the cloud can be neglected for the ozone correction. Furthermore, if the extinction coefficient of the cloud is ≤ 0.1 km⁻¹, the effect in the cloud is proportional to the effective particle extinction and to a particle correction function determined in the limit of negligible molecular scattering. The particle correction function depends on the scattering behavior of the cloud particles, the cloud geometric structure, and the lidar system parameters. Because of the differential extinction of light that has undergone one or more small-angle scattering processes within the cloud, the cloud effect on ozone extends to altitudes above the cloud. The various influencing parameters imply that the particle-related ozone correction has to be calculated for each individual measurement. Examples of ozone measurements in cirrus clouds are discussed.
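
    For orientation, the quantity retrieved by a differential absorption ozone lidar can be written in its generic textbook form (a reconstruction for context, not the paper's Raman-channel formalism):

        n_{O_3}(z) = \frac{1}{2\,\Delta\sigma_{O_3}} \, \frac{d}{dz} \ln\!\frac{P_{off}(z)}{P_{on}(z)} + \text{correction terms},

    where \Delta\sigma_{O_3} is the differential absorption cross section between the "on" and "off" wavelengths and P denotes the range-corrected signals. The cloud-particle extinction and multiple-scattering effects analyzed in the abstract enter through the correction terms.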

  19. On error sources during airborne measurements of the ambient electric field

    NASA Technical Reports Server (NTRS)

    Evteev, B. F.

    1991-01-01

    The principal sources of errors during airborne measurements of the ambient electric field and charge are addressed. Results of their analysis are presented for critical survey. It is demonstrated that the volume electric charge has to be accounted for during such measurements, that charge being generated at the airframe and wing surface by droplets of clouds and precipitation colliding with the aircraft. The local effect of that space charge depends on the flight regime (air speed, altitude, particle size, and cloud elevation). Such a dependence is displayed in the relation between the collector conductivity of the aircraft discharging circuit, on one hand, and the sum of all the residual conductivities contributing to aircraft discharge, on the other. Arguments are given in favor of variability in the aircraft electric capacitance. Techniques are suggested for measuring form factors to describe the aircraft charge.

  20. Swath altimetry measurements of the mainstem Amazon River: measurement errors and hydraulic implications

    NASA Astrophysics Data System (ADS)

    Wilson, M. D.; Durand, M.; Jung, H. C.; Alsdorf, D.

    2014-08-01

    The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water surface elevations. In this paper, we aimed to (i) characterize and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a "virtual mission" for a 300 km reach of the central Amazon (Solimões) River at its confluence with the Purus River, using a hydraulic model to provide water surface elevations according to SWOT spatio-temporal sampling to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water surface elevation measurements for the Amazon mainstem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths of greater than 4 km for the Solimões and 7.5 km for the Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-section averaging and 20 km reach lengths, results show Nash-Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6% and 19.1% average overall error in discharge, respectively.
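
    The discharge skill scores quoted above are Nash-Sutcliffe model efficiencies. As a quick reference, a minimal sketch of the metric (standard definition; the synthetic numbers below are illustrative, not the study's data):

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect match; 0 means the
    simulation is no better than predicting the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((simulated - observed) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Example: SWOT-derived discharge vs. hydraulic-model "truth" (synthetic, m^3/s)
truth = np.array([95000.0, 98000.0, 101000.0, 99000.0, 97000.0])
swot = truth * (1 + np.random.default_rng(0).normal(0, 0.03, truth.size))
print(f"NSE = {nash_sutcliffe(truth, swot):.3f}")
```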

  1. Error Ellipsoid Analysis for the Diameter Measurement of Cylindroid Components Using a Laser Radar Measurement System.

    PubMed

    Du, Zhengchun; Wu, Zhaoyong; Yang, Jianguo

    2016-05-19

    The use of three-dimensional (3D) data in the industrial measurement field is becoming increasingly popular because of the rapid development of laser scanning techniques based on the time-of-flight principle. However, the accuracy and uncertainty of these types of measurement methods are seldom investigated. In this study, a mathematical uncertainty evaluation model for the diameter measurement of standard cylindroid components has been proposed and applied to a 3D laser radar measurement system (LRMS). First, a single-point error ellipsoid analysis for the LRMS was established. An error ellipsoid model and algorithm for diameter measurement of cylindroid components was then proposed based on the single-point error ellipsoid. Finally, four experiments were conducted using the LRMS to measure the diameter of a standard cylinder in the laboratory. The experimental results of the uncertainty evaluation consistently matched well with the predictions. The proposed uncertainty evaluation model for cylindrical diameters can provide a reliable method for actual measurements and support further accuracy improvement of the LRMS.
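
    The record does not reproduce the LRMS error model itself; one standard way to obtain a single-point error ellipsoid is an eigendecomposition of the 3D measurement covariance. A minimal sketch under that assumption (the covariance values are hypothetical):

```python
import numpy as np

def error_ellipsoid(cov, k=1.0):
    """Semi-axes and orientation of the error ellipsoid for a 3x3
    measurement covariance matrix; axes are k * sqrt(eigenvalue)."""
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh: cov is symmetric
    return k * np.sqrt(eigvals), eigvecs    # columns of eigvecs = axis directions

# Hypothetical covariance for one laser-radar point in x, y, z (mm^2)
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
axes, dirs = error_ellipsoid(cov)
print("1-sigma semi-axes (mm):", axes)
```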

  2. Error Ellipsoid Analysis for the Diameter Measurement of Cylindroid Components Using a Laser Radar Measurement System

    PubMed Central

    Du, Zhengchun; Wu, Zhaoyong; Yang, Jianguo

    2016-01-01

    The use of three-dimensional (3D) data in the industrial measurement field is becoming increasingly popular because of the rapid development of laser scanning techniques based on the time-of-flight principle. However, the accuracy and uncertainty of these types of measurement methods are seldom investigated. In this study, a mathematical uncertainty evaluation model for the diameter measurement of standard cylindroid components has been proposed and applied to a 3D laser radar measurement system (LRMS). First, a single-point error ellipsoid analysis for the LRMS was established. An error ellipsoid model and algorithm for diameter measurement of cylindroid components was then proposed based on the single-point error ellipsoid. Finally, four experiments were conducted using the LRMS to measure the diameter of a standard cylinder in the laboratory. The experimental results of the uncertainty evaluation consistently matched well with the predictions. The proposed uncertainty evaluation model for cylindrical diameters can provide a reliable method for actual measurements and support further accuracy improvement of the LRMS. PMID:27213385

  3. Error Ellipsoid Analysis for the Diameter Measurement of Cylindroid Components Using a Laser Radar Measurement System.

    PubMed

    Du, Zhengchun; Wu, Zhaoyong; Yang, Jianguo

    2016-01-01

    The use of three-dimensional (3D) data in the industrial measurement field is becoming increasingly popular because of the rapid development of laser scanning techniques based on the time-of-flight principle. However, the accuracy and uncertainty of these types of measurement methods are seldom investigated. In this study, a mathematical uncertainty evaluation model for the diameter measurement of standard cylindroid components has been proposed and applied to a 3D laser radar measurement system (LRMS). First, a single-point error ellipsoid analysis for the LRMS was established. An error ellipsoid model and algorithm for diameter measurement of cylindroid components was then proposed based on the single-point error ellipsoid. Finally, four experiments were conducted using the LRMS to measure the diameter of a standard cylinder in the laboratory. The experimental results of the uncertainty evaluation consistently matched well with the predictions. The proposed uncertainty evaluation model for cylindrical diameters can provide a reliable method for actual measurements and support further accuracy improvement of the LRMS. PMID:27213385

  4. Materials accounting in a fast-breeder-reactor fuels-reprocessing facility: optimal allocation of measurement uncertainties

    SciTech Connect

    Dayem, H.A.; Ostenak, C.A.; Gutmacher, R.G.; Kern, E.A.; Markin, J.T.; Martinez, D.P.; Thomas, C.C. Jr.

    1982-07-01

    This report describes the conceptual design of a materials accounting system for the feed preparation and chemical separations processes of a fast breeder reactor spent-fuel reprocessing facility. For the proposed accounting system, optimization techniques are used to calculate instrument measurement uncertainties that meet four different accounting performance goals while minimizing the total development cost of instrument systems. We identify instruments that require development to meet performance goals and measurement uncertainty components that dominate the materials balance variance. Materials accounting in the feed preparation process is complicated by large in-process inventories and spent-fuel assembly inputs that are difficult to measure. To meet 8 kg of plutonium abrupt and 40 kg of plutonium protracted loss-detection goals, materials accounting in the chemical separations process requires: process tank volume and concentration measurements having a precision less than or equal to 1%; accountability and plutonium sample tank volume measurements having a precision less than or equal to 0.3%, a short-term correlated error less than or equal to 0.04%, and a long-term correlated error less than or equal to 0.04%; and accountability and plutonium sample tank concentration measurements having a precision less than or equal to 0.4%, a short-term correlated error less than or equal to 0.1%, and a long-term correlated error less than or equal to 0.05%. The effects of process design on materials accounting are identified. Major areas of concern include the voloxidizer, the continuous dissolver, and the accountability tank.
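
    The report's optimization itself is not reproduced here, but the way its three error components enter a materials balance can be sketched. Assuming the usual convention that precision (random) errors add in quadrature across transfers while correlated errors act like a common bias, and using the quoted tank tolerances (the transfer mass and count are hypothetical):

```python
import numpy as np

# Relative error components (precision, short-term corr., long-term corr.)
# taken from the report's accountability-tank performance goals:
vol_err = (0.003, 0.0004, 0.0004)    # volume
conc_err = (0.004, 0.001, 0.0005)    # concentration

def transfer_sigma(mass_per_transfer, n, rel_err):
    """Std. dev. of total transferred mass over n transfers.
    Random (precision) errors add in quadrature across transfers;
    correlated errors act like a common bias and add linearly."""
    prec, short, long_ = rel_err
    random_var = n * (mass_per_transfer * prec) ** 2
    corr_var = (n * mass_per_transfer) ** 2 * (short ** 2 + long_ ** 2)
    return np.sqrt(random_var + corr_var)

m, n = 2.0, 25  # kg Pu per transfer, transfers per balance period (hypothetical)
# Volume and concentration errors combine in quadrature (mass = V * c, first order)
sigma = np.hypot(transfer_sigma(m, n, vol_err), transfer_sigma(m, n, conc_err))
print(f"one balance period: sigma = {1000 * sigma:.0f} g Pu")
```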

  5. Tilt error in cryospheric surface radiation measurements at high latitudes: a model study

    NASA Astrophysics Data System (ADS)

    Bogren, W. S.; Burkhart, J. F.; Kylling, A.

    2015-08-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in-situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response foreoptic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can respectively introduce up to 2.6, 7.7, and 12.8% error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can also persist in integrated daily irradiance and albedo.
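
    The direct-beam geometry behind those percentages fits in a few lines: a cosine-response sensor tilted by β toward the sun measures cos(θz − β) instead of cos(θz). This worst-case, direct-only sketch slightly overstates the abstract's values, which include the diffuse component:

```python
import numpy as np

def direct_beam_tilt_error(zenith_deg, tilt_deg):
    """Fractional error in measured direct irradiance for a cosine-response
    sensor tilted toward the sun (worst-case azimuth alignment)."""
    z, t = np.radians(zenith_deg), np.radians(tilt_deg)
    return np.cos(z - t) / np.cos(z) - 1.0

for tilt in (1, 3, 5):
    err = direct_beam_tilt_error(60.0, tilt)
    print(f"tilt {tilt} deg at SZA 60 deg: {100 * err:.1f}% (direct beam only)")
# prints ~3.0%, 8.9%, 14.7%; the abstract's 2.6/7.7/12.8% include the diffuse part
```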

  6. Comparing and combining data across multiple sources via integration of paired-sample data to correct for measurement error.

    PubMed

    Huang, Yunda; Huang, Ying; Moodie, Zoe; Li, Sue; Self, Steve

    2012-12-10

    In biomedical research such as the development of vaccines for infectious diseases or cancer, study outcomes measured by an assay or device are often collected from multiple sources or laboratories. Measurement error that may vary between laboratories needs to be adjusted for when combining samples across data sources. We incorporate such adjustment in the main study by comparing and combining independent samples from different laboratories via integration of external data, collected on paired samples from the same two laboratories. We propose the following: (i) normalization of individual-level data from two laboratories to the same scale via the expectation of true measurements conditioning on the observed; (ii) comparison of mean assay values between two independent samples in the main study accounting for inter-source measurement error; and (iii) sample size calculations of the paired-sample study so that hypothesis testing error rates are appropriately controlled in the main study comparison. Because the goal is not to estimate the true underlying measurements but to combine data on the same scale, our proposed methods do not require that the true values for the error-prone measurements are known in the external data. Simulation results under a variety of scenarios demonstrate satisfactory finite sample performance of our proposed methods when measurement errors vary. We illustrate our methods using real enzyme-linked immunosorbent spot assay data generated by two HIV vaccine laboratories.
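
    Step (i) of the proposal, normalization via a conditional expectation calibrated on the external paired samples, can be sketched with a simple linear measurement-error model (all names and parameters below are illustrative, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)

# External calibration data: the same specimens assayed by both labs
truth = rng.normal(5.0, 1.0, 200)                       # latent true values
lab_a = truth + rng.normal(0.0, 0.3, 200)               # lab A: unbiased
lab_b = 0.8 * truth + 1.2 + rng.normal(0.0, 0.45, 200)  # lab B: biased, rescaled

# Calibrate E[lab_a | lab_b] on the paired data by least squares
slope, intercept = np.polyfit(lab_b, lab_a, 1)

# Main study: an independent sample measured only by lab B
main_truth = rng.normal(5.0, 1.0, 500)
main_b = 0.8 * main_truth + 1.2 + rng.normal(0.0, 0.45, 500)
main_on_a_scale = intercept + slope * main_b            # now comparable to lab-A data
print(f"lab-B mean {main_b.mean():.2f} -> lab-A scale {main_on_a_scale.mean():.2f}")
```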

  7. Adapting Accountability Systems to the Limitations of Educational Measurement

    ERIC Educational Resources Information Center

    Kane, Michael

    2015-01-01

    Michael Kane writes in this article that he is in more or less complete agreement with Professor Koretz's characterization of the problem outlined in the paper published in this issue of "Measurement." Kane agrees that current testing practices are not adequate for test-based accountability (TBA) systems, but he writes that he is far…

  8. 50 CFR 660.509 - Accountability measures (season closures).

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 50 Wildlife and Fisheries 13 2014-10-01 2014-10-01 false Accountability measures (season closures). 660.509 Section 660.509 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE (CONTINUED) FISHERIES OFF WEST COAST...

  9. Determination of the resonant harmonics of the error field from dynamic magnetic measurements in a tokamak

    SciTech Connect

    Pustovitov, V. D.

    2008-01-15

    The possibility is discussed of determining the amplitude and phase of a static resonant error field in a tokamak by means of dynamic magnetic measurements. The method proposed assumes measuring the plasma response to a varying external helical magnetic field with a small (a few gauss) amplitude. The case is considered in which the plasma is probed by square pulses with a duration much longer than the time of the transition process. The plasma response is assumed to be linear, with a proportionality coefficient being dependent on the plasma state. The analysis is carried out in a standard cylindrical approximation. The model is based on Maxwell's equations and Ohm's law and is thus capable of accounting for the interaction of large-scale modes with the conducting wall of the vacuum chamber. The method can be applied to existing tokamaks.

  10. Space charge enhanced, plasma gradient induced error in satellite electric field measurements

    NASA Technical Reports Server (NTRS)

    Diebold, D. A.; Hershkowitz, N.; Dekock, J. R.; Intrator, T. P.; Lee, S-G.; Hsieh, M-K.

    1994-01-01

    In magnetospheric plasmas it is possible for plasma gradients to cause error in electric field measurements made by satellite double probes. The space-charge-enhanced, plasma-gradient-induced error is discussed in general terms, the results of a laboratory experiment designed to illustrate this error are presented, and a simple expression that quantifies this error in a form that is readily applicable to satellite data is derived. The simple expression indicates that for a given probe bias current there is less error for cylindrical probes than for spherical probes. The expression also suggests that for Viking data the error is negligible.

  11. Total error vs. measurement uncertainty: revolution or evolution?

    PubMed

    Oosterhuis, Wytze P; Theodorsson, Elvar

    2016-02-01

    The first strategic EFLM conference "Defining analytical performance goals, 15 years after the Stockholm Conference" was held in the autumn of 2014 in Milan. It maintained the Stockholm 1999 hierarchy of performance goals but rearranged them and established five task and finish groups to work on topics related to analytical performance goals including one on the "total error" theory. Jim Westgard recently wrote a comprehensive overview of performance goals and of the total error theory critical of the results and intentions of the Milan 2014 conference. The "total error" theory originated by Jim Westgard and co-workers has a dominating influence on the theory and practice of clinical chemistry but is not accepted in other fields of metrology. The generally accepted uncertainty theory, however, suffers from complex mathematics and conceived impracticability in clinical chemistry. The pros and cons of the total error theory need to be debated, making way for methods that can incorporate all relevant causes of uncertainty when making medical diagnoses and monitoring treatment effects. This development should preferably proceed not as a revolution but as an evolution.
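
    For readers outside laboratory medicine, the two competing frameworks reduce to different arithmetic: Westgard-style total error adds bias and a coverage multiple of imprecision, while GUM-style uncertainty combines components in quadrature. A minimal sketch of both (illustrative numbers):

```python
import math

def total_error(bias_pct, cv_pct, z=1.65):
    """Westgard-style total error: |bias| + z * CV (z = 1.65 ~ 95% one-sided)."""
    return abs(bias_pct) + z * cv_pct

def combined_uncertainty(components_pct):
    """GUM-style combined standard uncertainty: root sum of squares."""
    return math.sqrt(sum(u * u for u in components_pct))

print(f"TE  = {total_error(1.0, 2.0):.1f}%")             # 4.3%
print(f"u_c = {combined_uncertainty([1.0, 2.0]):.1f}%")  # 2.2%
```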

  12. False Positives in Multiple Regression: Unanticipated Consequences of Measurement Error in the Predictor Variables

    ERIC Educational Resources Information Center

    Shear, Benjamin R.; Zumbo, Bruno D.

    2013-01-01

    Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
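
    The mechanism behind the inflation is easy to reproduce in a small simulation: y depends only on x1, x1 is observed with error, and a correlated predictor x2 is tested. The nominally 5% test on x2 rejects far more often (a sketch of the phenomenon, not the article's own study design):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, reps, alpha, rejections = 200, 2000, 0.05, 0

for _ in range(reps):
    x1 = rng.normal(size=n)
    x2 = 0.7 * x1 + rng.normal(scale=0.5, size=n)  # predictor correlated with x1
    y = x1 + rng.normal(size=n)                    # y truly depends on x1 only
    w1 = x1 + rng.normal(size=n)                   # x1 observed with error
    X = np.column_stack([np.ones(n), w1, x2])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    se = np.sqrt(resid @ resid / (n - 3) * np.linalg.inv(X.T @ X)[2, 2])
    p = 2 * stats.t.sf(abs(beta[2] / se), n - 3)   # two-sided t-test on x2
    rejections += p < alpha

print(f"empirical Type I rate for x2: {rejections / reps:.2f} (nominal {alpha})")
```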

  13. Evidence, exaggeration, and error in historical accounts of chaparral wildfires in California.

    PubMed

    Goforth, Brett R; Minnich, Richard A

    2007-04-01

    For more than half a century, ecologists and historians have been integrating the contemporary study of ecosystems with data gathered from historical sources to evaluate change over broad temporal and spatial scales. This approach is especially useful where ecosystems were altered before formal study as a result of natural resources management, land development, environmental pollution, and climate change. Yet, in many places, historical documents do not provide precise information, and pre-historical evidence is unavailable or has ambiguous interpretation. There are similar challenges in evaluating how the fire regime of chaparral in California has changed as a result of fire suppression management initiated at the beginning of the 20th century. Although the firestorm of October 2003 was the largest officially recorded in California (approximately 300,000 ha), historical accounts of pre-suppression wildfires have been cited as evidence that such a scale of burning was not unprecedented, suggesting the fire regime and patch mosaic in chaparral have not substantially changed. We find that the data do not support pre-suppression megafires, and that the impression of large historical wildfires is a result of imprecision and inaccuracy in the original reports, as well as a parlance that is beset with hyperbole. We underscore themes of importance for critically analyzing historical documents to evaluate ecological change. A putative 100 mile long by 10 mile wide (160 x 16 km) wildfire reported in 1889 was reconstructed to an area of chaparral approximately 40 times smaller by linking local accounts to property tax records, voter registration rolls, claimed insurance, and place names mapped with a geographical information system (GIS) which includes data from historical vegetation surveys. We also show that historical sources cited as evidence of other large chaparral wildfires are either demonstrably inaccurate or provide anecdotal information that is immaterial in the

  14. Swath-altimetry measurements of the main stem Amazon River: measurement errors and hydraulic implications

    NASA Astrophysics Data System (ADS)

    Wilson, M. D.; Durand, M.; Jung, H. C.; Alsdorf, D.

    2015-04-01

    The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface-water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water-surface elevations. In this paper, we aimed to (i) characterise and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water-surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a virtual mission for a ~260 km reach of the central Amazon (Solimões) River, using a hydraulic model to provide water-surface elevations according to SWOT spatio-temporal sampling to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water-surface elevation measurements for the Amazon main stem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths greater than 4 km for the Solimões and 7.5 km for the Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-sectional averaging and 20 km reach lengths, results show Nash-Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6 and 19.1 % average overall error in discharge, respectively. We extend the results to other rivers worldwide and infer that SWOT-derived discharge estimates may be more accurate for rivers with larger channel widths (permitting a greater level of cross

  15. Measurement accuracy of articulated arm CMMs with circular grating eccentricity errors

    NASA Astrophysics Data System (ADS)

    Zheng, Dateng; Yin, Sanfeng; Luo, Zhiyang; Zhang, Jing; Zhou, Taiping

    2016-11-01

    A model of the six circular grating eccentricity errors is proposed to improve the measurement accuracy of an articulated arm coordinate measuring machine (AACMM) without increasing the corresponding hardware cost. We analyzed the AACMM's circular grating eccentricity and obtained the six joints' eccentricity error model parameters by conducting circular grating eccentricity experiments. We completed the calibration of the measurement models using home-made standard bar components. Our results show that the measurement errors from the AACMM's measurement model without and with the circular grating eccentricity errors are 0.0834 mm and 0.0462 mm, respectively; that is, measurement accuracy increased by about 44.6% when the circular grating eccentricity errors were corrected. This study is significant because it promotes wider application of AACMMs in both theory and practice.

  16. Physics of locked modes in ITER: Error field limits, rotation for obviation, and measurement of field errors

    SciTech Connect

    La Haye, R.J.

    1997-02-01

    The existing theoretical and experimental basis for predicting the levels of resonant static error field at different components m,n that stop plasma rotation and produce a locked mode is reviewed. For ITER ohmic discharges, the slow rotation of the very large plasma is predicted to incur a locked mode (and subsequent disastrous large magnetic islands) at a simultaneous weighted error field (Σ_{m=1}^{3} w_{m1} B²_{rm1})^{1/2} / B_T ≥ 1.9 × 10⁻⁵. Here the weights w_{m1} are empirically determined from measurements on DIII-D to be w_{11} = 0.2, w_{21} = 1.0, and w_{31} = 0.8 and point out the relative importance of different error field components. This could be greatly obviated by application of counter-injected neutral beams (which adds fluid flow to the natural ohmic electron drift). The addition of 5 MW of 1 MeV beams at 45° injection would increase the error field limit by a factor of 5; 13 MW would produce a factor of 10 improvement. Co-injection beams would also be effective but not as much as counter-injection, as the co direction opposes the intrinsic rotation while the counter direction adds to it. A means for measuring individual PF and TF coil total axisymmetric field error to less than 1 in 10,000 is described. This would allow alignment of coils to mm accuracy and, with correction coils, make possible the very low levels of error field needed.
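
    With the formula restored, the locked-mode criterion is a one-line computation. A sketch using the abstract's DIII-D weights; the field amplitudes are hypothetical:

```python
import math

weights = {1: 0.2, 2: 1.0, 3: 0.8}       # empirical DIII-D weights for m = 1, 2, 3 (n = 1)
B_T = 5.3                                # toroidal field, tesla (ITER-like, hypothetical)
B_rm1 = {1: 4e-5, 2: 6e-5, 3: 5e-5}      # resonant radial error fields, tesla (hypothetical)

weighted = math.sqrt(sum(weights[m] * B_rm1[m] ** 2 for m in weights)) / B_T
print(f"weighted error field / B_T = {weighted:.2e} (locked-mode threshold 1.9e-05)")
```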

  17. Machining Error Compensation Based on 3D Surface Model Modified by Measured Accuracy

    NASA Astrophysics Data System (ADS)

    Abe, Go; Aritoshi, Masatoshi; Tomita, Tomoki; Shirase, Keiichi

    Recently, a demand for precision machining of dies and molds with complex shapes has been increasing. Although CNC machine tools are widely used for machining, machining error compensation is still required to meet the increasing demand for machining accuracy. However, machining error compensation is an operation which takes a huge amount of skill, time and cost. This paper deals with a new method of machining error compensation. The 3D surface data of the machined part are modified according to the machining error measured by a CMM (Coordinate Measuring Machine). A compensated NC program is generated from the modified 3D surface data for the machining error compensation.
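
    The core of the approach, mirroring the CMM-measured error into the 3D surface model before regenerating the NC program, can be sketched in a toy example (the NC-generation step itself is not reproduced; all numbers are hypothetical):

```python
import numpy as np

# Mirror compensation: offset each nominal surface point opposite to the
# machining error measured at that point, so the next cut lands on nominal.
nominal = np.array([[0.0, 0.0, 10.000],
                    [1.0, 0.0, 10.000],
                    [2.0, 0.0, 10.000]])       # CAD surface points (x, y, z), mm
measured = nominal + np.array([[0, 0, 0.012],
                               [0, 0, 0.018],
                               [0, 0, 0.015]])  # CMM-measured part (hypothetical)
error = measured - nominal
compensated = nominal - error                   # modified surface for NC regeneration
print(compensated[:, 2])                        # new target z: 9.988, 9.982, 9.985
```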

  18. Branch-Based Model for the Diameters of the Pulmonary Airways: Accounting for Departures From Self-Consistency and Registration Errors

    SciTech Connect

    Neradilek, Moni B.; Polissar, Nayak L.; Einstein, Daniel R.; Glenny, Robb W.; Minard, Kevin R.; Carson, James P.; Jiao, Xiangmin; Jacob, Richard E.; Cox, Timothy C.; Postlethwait, Edward M.; Corley, Richard A.

    2012-04-24

    We examine a previously published branch-based approach to modeling airway diameters that is predicated on the assumption of self-consistency across all levels of the tree. We mathematically formulate this assumption, propose a method to test it and develop a more general model to be used when the assumption is violated. We discuss the effect of measurement error on the estimated models and propose methods that account for it. The methods are illustrated on data from MRI and CT images of silicone casts of two rats, two normal monkeys and one ozone-exposed monkey. Our results showed substantial departures from self-consistency in all five subjects. When departures from self-consistency exist, we do not recommend using the self-consistency model, even as an approximation, as we have shown that it may likely lead to an incorrect representation of the diameter geometry. Measurement error has an important impact on the estimated morphometry models and needs to be accounted for in the analysis.

  19. Exposure measurement error in time-series studies of air pollution: concepts and consequences.

    PubMed Central

    Zeger, S L; Thomas, D; Dominici, F; Samet, J M; Schwartz, J; Dockery, D; Cohen, A

    2000-01-01

    Misclassification of exposure is a well-recognized inherent limitation of epidemiologic studies of disease and the environment. For many agents of interest, exposures take place over time and in multiple locations; accurately estimating the relevant exposures for an individual participant in epidemiologic studies is often daunting, particularly within the limits set by feasibility, participant burden, and cost. Researchers have taken steps to deal with the consequences of measurement error by limiting the degree of error through a study's design, estimating the degree of error using a nested validation study, and by adjusting for measurement error in statistical analyses. In this paper, we address measurement error in observational studies of air pollution and health. Because measurement error may have substantial implications for interpreting epidemiologic studies on air pollution, particularly the time-series analyses, we developed a systematic conceptual formulation of the problem of measurement error in epidemiologic studies of air pollution and then considered the consequences within this formulation. When possible, we used available relevant data to make simple estimates of measurement error effects. This paper provides an overview of measurement errors in linear regression, distinguishing two extremes of a continuum (Berkson from classical type errors) and the univariate from the multivariate predictor case. We then propose one conceptual framework for the evaluation of measurement errors in the log-linear regression used for time-series studies of particulate air pollution and mortality and identify three main components of error. We present new simple analyses of data on exposures of particulate matter < 10 microm in aerodynamic diameter from the Particle Total Exposure Assessment Methodology Study. Finally, we summarize open questions regarding measurement error and suggest the kind of additional data necessary to address them.
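
    The Berkson/classical distinction the authors build on shows up in a few lines of simulation: classical error in a predictor attenuates the regression slope, while Berkson error leaves it unbiased at the cost of precision (a textbook sketch, not the paper's analysis):

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta = 100_000, 2.0

# Classical error: observe w = x + u, regress y on w -> slope attenuated
x = rng.normal(size=n)
y = beta * x + rng.normal(size=n)
w = x + rng.normal(size=n)                   # measurement error on the predictor
print("classical:", np.polyfit(w, y, 1)[0])  # ~ beta * var(x)/(var(x)+var(u)) = 1.0

# Berkson error: true exposure x = z + u scatters around the assigned value z
z = rng.normal(size=n)                       # e.g., the ambient monitor value
xb = z + rng.normal(size=n)                  # individual exposure
yb = beta * xb + rng.normal(size=n)
print("berkson:  ", np.polyfit(z, yb, 1)[0]) # ~ beta = 2.0, unbiased
```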

  20. Solving Inverse Radiation Transport Problems with Multi-Sensor Data in the Presence of Correlated Measurement and Modeling Errors

    SciTech Connect

    Thomas, Edward V.; Stork, Christopher L.; Mattingly, John K.

    2015-07-01

    Inverse radiation transport focuses on identifying the configuration of an unknown radiation source given its observed radiation signatures. The inverse problem is traditionally solved by finding the set of transport model parameter values that minimizes a weighted sum of the squared differences by channel between the observed signature and the signature predicted by the hypothesized model parameters. The weights are inversely proportional to the sum of the variances of the measurement and model errors at a given channel. The traditional implicit (often inaccurate) assumption is that the errors (differences between the modeled and observed radiation signatures) are independent across channels. Here, an alternative method that accounts for correlated errors between channels is described and illustrated using an inverse problem based on the combination of gamma and neutron multiplicity counting measurements.

  1. Accounting based risk measures for not-for-profit hospitals.

    PubMed

    Smith, D G; Wheeler, J R

    1989-11-01

    This paper discusses the issues involved with determining an appropriate discount rate for not-for-profit hospitals and develops a method for computing measures of systematic risk based on a hospital's own accounting data. Data on four hospital management companies are used to demonstrate the method. Results indicate the need for sensitivity analysis in the selection of estimation methods and in the final determination of a discount rate.

  2. A Model of Discontinuous Measurement Error and Its Effects on the Probability Distribution of Flood Discharge Measurements

    NASA Astrophysics Data System (ADS)

    Potter, Kenneth W.; Walker, John F.

    1981-10-01

    Above a given threshold an indirect method is usually used to estimate flood discharges. This results in a significant increase in the standard deviation of the measurement error, a phenomenon which the authors have termed discontinuous measurement error. An error model reveals that the coefficients of variation, skewness, and kurtosis of the distribution of the measured flood discharges are significantly higher than the corresponding coefficients of the parent flood distribution. This bias has important implications with regard to flood frequency analysis.

  3. Error analysis in the measurement of average power with application to switching controllers

    NASA Technical Reports Server (NTRS)

    Maisel, J. E.

    1980-01-01

    Power measurement errors due to the bandwidth of a power meter and the sampling of the input voltage and current of a power meter were investigated assuming sinusoidal excitation and periodic signals generated by a model of a simple chopper system. Errors incurred in measuring power using a microcomputer with limited data storage were also considered. The behavior of the power measurement error due to the frequency responses of first order transfer functions between the input sinusoidal voltage, input sinusoidal current, and the signal multiplier was studied. Results indicate that this power measurement error can be minimized if the frequency responses of the first order transfer functions are identical. The power error analysis was extended to include the power measurement error for a model of a simple chopper system with a power source and an ideal shunt motor acting as an electrical load for the chopper. The behavior of the power measurement error was determined as a function of the chopper's duty cycle and back EMF of the shunt motor. Results indicate that the error is large when the duty cycle or back EMF is small. Theoretical and experimental results indicate that the power measurement error due to sampling of sinusoidal voltages and currents becomes excessively large when the number of observation periods approaches one-half the size of the microcomputer data memory allocated to the storage of either the input sinusoidal voltage or current.

  4. Efron-type measures of prediction error for survival analysis.

    PubMed

    Gerds, Thomas A; Schumacher, Martin

    2007-12-01

    Estimates of the prediction error play an important role in the development of statistical methods and models, and in their applications. We adapt the resampling tools of Efron and Tibshirani (1997, Journal of the American Statistical Association92, 548-560) to survival analysis with right-censored event times. We find that flexible rules, like artificial neural nets, classification and regression trees, or regression splines can be assessed, and compared to less flexible rules in the same data where they are developed. The methods are illustrated with data from a breast cancer trial.

  5. Comparing and Combining Data across Multiple Sources via Integration of Paired-sample Data to Correct for Measurement Error

    PubMed Central

    Huang, Yunda; Huang, Ying; Moodie, Zoe; Li, Sue; Self, Steve

    2014-01-01

    In biomedical research such as the development of vaccines for infectious diseases or cancer, measures from the same assay are often collected from multiple sources or laboratories. Measurement error that may vary between laboratories needs to be adjusted for when combining samples across laboratories. We incorporate such adjustment in comparing and combining independent samples from different labs via integration of external data, collected on paired samples from the same two laboratories. We propose: 1) normalization of individual-level data from two laboratories to the same scale via the expectation of true measurements conditioning on the observed; 2) comparison of mean assay values between two independent samples in the main study accounting for inter-source measurement error; and 3) sample size calculations of the paired-sample study so that hypothesis testing error rates are appropriately controlled in the main study comparison. Because the goal is not to estimate the true underlying measurements but to combine data on the same scale, our proposed methods do not require that the true values for the error-prone measurements are known in the external data. Simulation results under a variety of scenarios demonstrate satisfactory finite sample performance of our proposed methods when measurement errors vary. We illustrate our methods using real ELISpot assay data generated by two HIV vaccine laboratories. PMID:22764070

  6. [A positioning error measurement method in radiotherapy based on 3D visualization].

    PubMed

    An, Ji-Ye; Li, Yue-Xi; Lu, Xu-Dong; Duan, Hui-Long

    2007-09-01

    The positioning error in radiotherapy is one of the most important factors influencing the localization precision of the tumor. Based on CT-on-rails technology, this paper describes research on measuring the positioning error in radiotherapy by comparing the planning CT images with the treatment CT images using three-dimensional (3D) methods. This can help doctors measure positioning errors more accurately than 2D methods. It also supports powerful 3D interaction, such as dragging, rotating, and picking up objects, so that doctors can visualize and measure the positioning errors intuitively.

  7. Accountability.

    ERIC Educational Resources Information Center

    Mullen, David J., Ed.

    This monograph, prepared to assist Georgia elementary principals to better understand accountability and its implications for educational improvement, sets forth many of the theoretical and philosophical bases from which accountability is being considered. Leon M. Lessinger begins this 5-paper presentation by describing the need for accountability…

  8. Accountability.

    ERIC Educational Resources Information Center

    Lashway, Larry

    1999-01-01

    This issue reviews publications that provide a starting point for principals looking for a way through the accountability maze. Each publication views accountability differently, but collectively these readings argue that even in an era of state-mandated assessment, principals can pursue proactive strategies that serve students' needs. James A.…

  9. Accountability.

    ERIC Educational Resources Information Center

    The Newsletter of the Comprehensive Center-Region VI, 1999

    1999-01-01

    Controversy surrounding the accountability movement is related to how the movement began in response to dissatisfaction with public schools. Opponents see it as one-sided, somewhat mean-spirited, and a threat to the professional status of teachers. Supporters argue that all other spheres of the workplace have accountability systems and that the…

  10. On the optical measurement of corneal thickness. II. The measuring conditions and sources of error.

    PubMed

    Olsen, T; Nielsen, C B; Ehlers, N

    1980-12-01

    The optical measurement of corneal thickness based on oblique viewing of the optical section of the cornea is complicated by the finite width of the incident slit beam. In this report the theoretical and practical aspects of the effect of the slit width on the thickness reading are analysed. In practice, it was not possible to make slit-width-independent thickness readings that were reproducible from one observer to another. In addition, the observed slit-width error was found to vary from one patient to another. The lack of a reproducible estimate of the corneal thickness is attributed to difficulties associated with an exact definition of the edges of the visible bands of the optical section, which are determined by biological properties of the cornea as well as perceptive properties of the observer. Although inter-observer errors up to 0.02 mm were found, the intra-observer error amounted to only 0.005-0.006 mm (SD) between consecutive readings. Presumably this high intra-observer reproducibility is the result of the auxiliary pin-lights used. Changes in corneal thickness, measured by the same observer, can therefore be determined with great accuracy.

  11. Systematic errors in cosmic microwave background polarization measurements

    NASA Astrophysics Data System (ADS)

    O'Dea, Daniel; Challinor, Anthony; Johnson, Bradley R.

    2007-04-01

    We investigate the impact of instrumental systematic errors on the potential of cosmic microwave background polarization experiments targeting primordial B-modes. To do so, we introduce spin-weighted Müller matrix-valued fields describing the linear response of the imperfect optical system and receiver, and give a careful discussion of the behaviour of the induced systematic effects under rotation of the instrument. We give the correspondence between the matrix components and known optical and receiver imperfections, and compare the likely performance of pseudo-correlation receivers and those that modulate the polarization with a half-wave plate. The latter is shown to have the significant advantage of not coupling the total intensity into polarization for perfect optics, but potential effects like optical distortions that may be introduced by the quasi-optical wave plate warrant further investigation. A fast method for tolerancing time-invariant systematic effects is presented, which propagates errors through to power spectra and cosmological parameters. The method extends previous studies to an arbitrary scan strategy, and eliminates the need for time-consuming Monte Carlo simulations in the early phases of instrument and survey design. We illustrate the method with both simple parametrized forms for the systematics and with beams based on physical-optics simulations. Example results are given in the context of next-generation experiments targeting tensor-to-scalar ratios r ~ 0.01.

  12. Analysis of measured data of human body based on error correcting frequency

    NASA Astrophysics Data System (ADS)

    Jin, Aiyan; Peipei, Gao; Shang, Xiaomei

    2014-04-01

    Anthropometry is the measurement of all parts of the human body surface; the measured data are the basis for analysis and study of the human body, for the establishment and modification of garment sizes, and for the design and operation of online clothing stores. In this paper, several groups of measured data are obtained, and the data errors are analyzed by examining the error frequency and applying the analysis-of-variance method of mathematical statistics. The accuracy of the measured data and the difficulty of measuring particular parts of the human body are determined, the causes of data errors are studied further, and the key points for minimizing errors are summarized. This paper analyses the measured data based on error frequency and, in this way, provides reference information to promote the development of the garment industry.

  13. Do Survey Data Estimate Earnings Inequality Correctly? Measurement Errors among Black and White Male Workers

    ERIC Educational Resources Information Center

    Kim, ChangHwan; Tamborini, Christopher R.

    2012-01-01

    Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…

  14. Error analysis in the measurement of average power with application to switching controllers

    NASA Technical Reports Server (NTRS)

    Maisel, J. E.

    1979-01-01

    The behavior of the power measurement error due to the frequency responses of first order transfer functions between the input sinusoidal voltage, input sinusoidal current and the signal multiplier was studied. It was concluded that this measurement error can be minimized if the frequency responses of the first order transfer functions are identical.

  15. Comparing Graphical and Verbal Representations of Measurement Error in Test Score Reports

    ERIC Educational Resources Information Center

    Zwick, Rebecca; Zapata-Rivera, Diego; Hegarty, Mary

    2014-01-01

    Research has shown that many educators do not understand the terminology or displays used in test score reports and that measurement error is a particularly challenging concept. We investigated graphical and verbal methods of representing measurement error associated with individual student scores. We created four alternative score reports, each…

  16. Exploring the Effectiveness of a Measurement Error Tutorial in Helping Teachers Understand Score Report Results

    ERIC Educational Resources Information Center

    Zapata-Rivera, Diego; Zwick, Rebecca; Vezzu, Margaret

    2016-01-01

    The goal of this study was to explore the effectiveness of a short web-based tutorial in helping teachers to better understand the portrayal of measurement error in test score reports. The short video tutorial included both verbal and graphical representations of measurement error. Results showed a significant difference in comprehension scores…

  17. Detecting bit-flip errors in a logical qubit using stabilizer measurements.

    PubMed

    Ristè, D; Poletto, S; Huang, M-Z; Bruno, A; Vesterinen, V; Saira, O-P; DiCarlo, L

    2015-04-29

    Quantum data are susceptible to decoherence induced by the environment and to errors in the hardware processing it. A future fault-tolerant quantum computer will use quantum error correction to actively protect against both. In the smallest error correction codes, the information in one logical qubit is encoded in a two-dimensional subspace of a larger Hilbert space of multiple physical qubits. For each code, a set of non-demolition multi-qubit measurements, termed stabilizers, can discretize and signal physical qubit errors without collapsing the encoded information. Here using a five-qubit superconducting processor, we realize the two parity measurements comprising the stabilizers of the three-qubit repetition code protecting one logical qubit from physical bit-flip errors. While increased physical qubit coherence times and shorter quantum error correction blocks are required to actively safeguard the quantum information, this demonstration is a critical step towards larger codes based on multiple parity measurements.
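
    The logic of the two parity measurements can be illustrated classically: the two ZZ stabilizer outcomes of the three-qubit repetition code locate any single bit-flip without revealing the encoded bit. A toy sketch (classical bits standing in for the superconducting qubits):

```python
# Classical sketch of the three-qubit bit-flip code: two pairwise parity
# "stabilizer" outcomes locate a single flipped bit without reading out
# the encoded logical value itself.
def syndrome(q):                          # q = [q0, q1, q2], each 0 or 1
    return (q[0] ^ q[1], q[1] ^ q[2])     # parities of pairs (0,1) and (1,2)

DECODE = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}  # syndrome -> flipped bit

for logical in (0, 1):
    for flipped in (None, 0, 1, 2):
        q = [logical] * 3
        if flipped is not None:
            q[flipped] ^= 1               # inject one bit-flip error
        suspect = DECODE[syndrome(q)]
        if suspect is not None:
            q[suspect] ^= 1               # correct it
        assert q == [logical] * 3
print("all single bit-flips detected and corrected")
```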

  18. Detecting bit-flip errors in a logical qubit using stabilizer measurements.

    PubMed

    Ristè, D; Poletto, S; Huang, M-Z; Bruno, A; Vesterinen, V; Saira, O-P; DiCarlo, L

    2015-01-01

    Quantum data are susceptible to decoherence induced by the environment and to errors in the hardware processing it. A future fault-tolerant quantum computer will use quantum error correction to actively protect against both. In the smallest error correction codes, the information in one logical qubit is encoded in a two-dimensional subspace of a larger Hilbert space of multiple physical qubits. For each code, a set of non-demolition multi-qubit measurements, termed stabilizers, can discretize and signal physical qubit errors without collapsing the encoded information. Here using a five-qubit superconducting processor, we realize the two parity measurements comprising the stabilizers of the three-qubit repetition code protecting one logical qubit from physical bit-flip errors. While increased physical qubit coherence times and shorter quantum error correction blocks are required to actively safeguard the quantum information, this demonstration is a critical step towards larger codes based on multiple parity measurements. PMID:25923318

  19. A measurement methodology for dynamic angle of sight errors in hardware-in-the-loop simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Wen-pan; Wu, Jun-hui; Gan, Lin; Zhao, Hong-peng; Liang, Wei-wei

    2015-10-01

    In order to precisely measure dynamic angle of sight for hardware-in-the-loop simulation, a dynamic measurement methodology was established and a measurement system was built. The errors and drifts, such as synchronization delay, CCD measurement error and drift, laser spot error on the diffuse reflection plane, and optics axis drift of the laser, were measured and analyzed. First, by analyzing and measuring the synchronization time between the laser and the timing of the control data, an error control method was devised that lowered the synchronization delay to 21 μs. Then, the relationship between the CCD device and the laser spot position was calibrated precisely and fitted by two-dimensional surface fitting; CCD measurement error and drift were controlled below 0.26 mrad. Next, the angular resolution was calculated, and the laser spot error on the diffuse reflection plane was estimated to be 0.065 mrad. Finally, the optics axis drift of the laser was analyzed and measured, and did not exceed 0.06 mrad. The measurement results indicate that the maximum of the errors and drifts of the measurement methodology is less than 0.275 mrad. The methodology can satisfy the measurement of dynamic angle of sight with higher precision and larger scale.
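
    The quoted 0.275 mrad maximum is consistent with a root-sum-of-squares combination of the three angular components (our reading of the record, not stated explicitly there; the 21 μs synchronization delay would enter as an angle only through the target's angular rate and appears negligible here):

```python
import math

components_mrad = [0.26, 0.065, 0.06]  # CCD, laser-spot, optics-axis drift
total = math.hypot(*components_mrad)   # RSS; hypot takes any number of args (Py >= 3.8)
print(f"{total:.3f} mrad")             # 0.275 mrad, matching the quoted maximum
```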

  20. The Future of Sociological Research: Measurement Errors and Their Implications.

    ERIC Educational Resources Information Center

    Blalock, H. M.

    The report deals with the relationship between measurement and data analysis procedures in sociological research. The author finds that too many measured variables exist in both theory and measurement assumptions. Since these procedures are interrelated, improvements in either or both areas are necessary. Presented are three sections: (1) specific…

  1. Quantifying error of lidar and sodar Doppler beam swinging measurements of wind turbine wakes using computational fluid dynamics

    DOE PAGES

    Lundquist, J. K.; Churchfield, M. J.; Lee, S.; Clifton, A.

    2015-02-23

    vertical velocity. By three rotor diameters downwind, DBS-based assessments of wake wind speed deficits based on the stream-wise velocity can be relied on even within the near wake to within 1.0 m s⁻¹ (or 15% of the hub-height inflow wind speed), and the cross-stream velocity error is reduced to 8% while vertical velocity estimates are compromised. Furthermore, measurements of inhomogeneous flow such as wind turbine wakes are susceptible to these errors, and interpretations of field observations should account for this uncertainty.

  2. Quantifying error of lidar and sodar Doppler beam swinging measurements of wind turbine wakes using computational fluid dynamics

    SciTech Connect

    Lundquist, J. K.; Churchfield, M. J.; Lee, S.; Clifton, A.

    2015-02-23

    exceed the actual vertical velocity. By three rotor diameters downwind, DBS-based assessments of wake wind speed deficits based on the stream-wise velocity can be relied on even within the near wake to within 1.0 m s⁻¹ (or 15% of the hub-height inflow wind speed), and the cross-stream velocity error is reduced to 8% while vertical velocity estimates are compromised. Furthermore, measurements of inhomogeneous flow such as wind turbine wakes are susceptible to these errors, and interpretations of field observations should account for this uncertainty.

  3. Compensation method for the alignment angle error in pitch deviation measurement

    NASA Astrophysics Data System (ADS)

    Liu, Yongsheng; Fang, Suping; Wang, Huiyi; Taguchi, Tetsuya; Takeda, Ryohei

    2016-05-01

    When measuring the tooth flank of an involute helical gear with a gear measuring center (GMC), the alignment angle error of the gear axis, which is caused by the assembly and manufacturing errors of the GMC, will affect the measurement accuracy of the pitch deviation of the gear tooth flank. Based on the model of the involute helical gear and tooth flank measurement theory, a method is proposed to compensate the alignment angle error that is included in the measurement results of pitch deviation, without changing the initial measurement method of the GMC. Simulation experiments were done to verify the compensation method, and the results show that after compensation the alignment angle error of the gear axis included in the pitch deviation measurement results declines significantly: more than 90% of the alignment angle errors are compensated, and the residual alignment angle errors in the pitch deviation measurement results are less than 0.1 μm. This shows that the proposed method can improve the measurement accuracy of the GMC when measuring the pitch deviation of involute helical gears.

  4. Water Accounting Plus (WA+) - a water accounting procedure for complex river basins based on satellite measurements

    NASA Astrophysics Data System (ADS)

    Karimi, P.; Bastiaanssen, W. G. M.; Molden, D.

    2012-11-01

    Coping with the issue of water scarcity and growing competition for water among different sectors requires proper water management strategies and decision processes. A pre-requisite is a clear understanding of the basin hydrological processes, manageable and unmanageable water flows, the interaction with land use and opportunities to mitigate the negative effects and increase the benefits of water depletion on society. Currently, water professionals do not have a common framework that links hydrological flows to user groups of water and their benefits. The absence of a standard hydrological and water management summary is causing confusion and wrong decisions. The non-availability of water flow data is one of the underpinning reasons for not having operational water accounting systems for river basins in place. In this paper we introduce Water Accounting Plus (WA+), which is a new framework designed to provide explicit spatial information on water depletion and net withdrawal processes in complex river basins. The influence of land use on the water cycle is described explicitly by defining land use groups with common characteristics. Analogous to financial accounting, WA+ presents four sheets including (i) a resource base sheet, (ii) a consumption sheet, (iii) a productivity sheet, and (iv) a withdrawal sheet. Every sheet encompasses a set of indicators that summarize the overall water resources situation. The impact of external (e.g. climate change) and internal influences (e.g. infrastructure building) can be estimated by studying the changes in these WA+ indicators. Satellite measurements can be used for 3 out of the 4 sheets, but are not a precondition for implementing the WA+ framework. Data from hydrological models and water allocation models can also be used as inputs to WA+.

  5. Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar

    DOEpatents

    Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas

    2008-06-24

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.

  6. State-independent error-disturbance trade-off for measurement operators

    NASA Astrophysics Data System (ADS)

    Zhou, S. S.; Wu, Shengjun; Chau, H. F.

    2016-05-01

    In general, classical measurement statistics of a quantum measurement is disturbed by performing an additional incompatible quantum measurement beforehand. Using this observation, we introduce a state-independent definition of disturbance by relating it to the distinguishability problem between two classical statistical distributions - one resulting from a single quantum measurement and the other from a succession of two quantum measurements. Interestingly, we find an error-disturbance trade-off relation for any measurements in two-dimensional Hilbert space and for measurements with mutually unbiased bases in any finite-dimensional Hilbert space. This relation shows that error should be reduced to zero in order to minimize the sum of error and disturbance. We conjecture that a similar trade-off relation with a slightly relaxed definition of error can be generalized to any measurements in an arbitrary finite-dimensional Hilbert space.

  7. Quantifying Error in Survey Measures of School and Classroom Environments

    ERIC Educational Resources Information Center

    Schweig, Jonathan David

    2014-01-01

    Developing indicators that reflect important aspects of school and classroom environments has become central in a nationwide effort to develop comprehensive programs that measure teacher quality and effectiveness. Formulating teacher evaluation policy necessitates accurate and reliable methods for measuring these environmental variables. This…

  8. Error analysis and compensation of binocular-stereo-vision measurement system

    NASA Astrophysics Data System (ADS)

    Zhang, Tao; Guo, Junjie

    2008-09-01

    Measurement errors in binocular stereo vision are analyzed. It is shown that multi-stage calibration can efficiently reduce systematic errors due to depth of field. However, because multi-stage calibration is difficult to carry out, error compensation methods are presented in this paper instead. First, system calibration is completed using a standard plane template. Then, the cameras are moved to different depths, multiple views are taken, and the 3D coordinates of special points on the template are calculated. Finally, an error compensation model in depth is established by least-squares fitting. An experiment based on a CMM indicates that the relative measurement error is reduced by 5.1% with the proposed method. This is of practical value in expanding the measurement range in depth and improving measurement accuracy.

  9. Measurement error analysis of Brillouin lidar system using F-P etalon and ICCD

    NASA Astrophysics Data System (ADS)

    Yao, Yuan; Niu, Qunjie; Liang, Kun

    2016-09-01

    A Brillouin lidar system using a Fabry-Pérot (F-P) etalon and an Intensified Charge Coupled Device (ICCD) is capable of real-time remote measurement of properties such as the temperature of seawater. The measurement accuracy is determined by two key parameters, the Brillouin frequency shift and the Brillouin linewidth. Three major errors, namely laser frequency instability, the calibration error of the F-P etalon, and random shot noise, are discussed. Theoretical analysis combined with simulation results showed that the laser and the F-P etalon will cause about 4 MHz error in both the Brillouin shift and the linewidth, and random noise brings more error to the linewidth than to the frequency shift. A comprehensive and comparative analysis of the overall errors under various conditions proved that colder ocean (10 °C) is more accurately measured with the Brillouin linewidth, and warmer ocean (30 °C) is better measured with the Brillouin shift.

  10. Analysis of the possible measurement errors for the PM10 concentration measurement at Gosan, Korea

    NASA Astrophysics Data System (ADS)

    Shin, S.; Kim, Y.; Jung, C.

    2010-12-01

    The reliability of the measurement of ambient trace species is an important issue, especially in a background area such as Gosan on Jeju Island, Korea. In a previous episodic study at Gosan (NIER, 2006), it was found that the PM10 concentration measured by the β-ray absorption method (BAM) was higher than that from the gravimetric method (GMM), and the correlation between them was low. Based on previous studies (Chang et al., 2001; Katsuyuki et al., 2008), two probable reasons for the discrepancy are identified: (1) negative measurement error from the evaporation of volatile ambient species, such as nitrate, chloride, and ammonium, at the filter in GMM; and (2) positive error from the absorption of water vapor during measurement in BAM. There was no heater at the inlet of the BAM at Gosan during the sampling period. In this study, we have analyzed the negative and positive errors quantitatively by using the gas/particle equilibrium model SCAPE (Simulating Composition of Atmospheric Particles at Equilibrium) for data between May 2001 and June 2008, together with the aerosol and gaseous composition data. We have estimated the degree of evaporation at the filter in GMM by comparing the volatile ionic species concentrations calculated by SCAPE at thermodynamic equilibrium under the meteorological conditions of the sampling period with the mass concentrations measured by ion chromatography. Also, based on the aerosol water content calculated by SCAPE, we have estimated quantitatively the effect of ambient humidity during measurement in BAM. Subsequently, this study examines whether the discrepancy can be explained by other factors through multiple regression analyses. References: Chang, C. T., Tsai, C. J., Lee, C. T., Chang, S. Y., Cheng, M. T., Chein, H. M., 2001, Differences in PM10 concentrations measured by β-gauge monitor and hi-vol sampler, Atmospheric Environment, 35, 5741-5748. Katsuyuki, T. K., Hiroaki, M. R., and Kazuhiko, S. K., 2008, Examination of discrepancies between beta

  11. Period, epoch, and prediction errors of ephemerides from continuous sets of timing measurements

    NASA Astrophysics Data System (ADS)

    Deeg, H. J.

    2015-06-01

    Space missions such as Kepler and CoRoT have led to large numbers of eclipse or transit measurements in nearly continuous time series. This paper shows how to obtain the period error in such measurements from a basic linear least-squares fit, and how to correctly derive the timing error in the prediction of future transit or eclipse events. Assuming strict periodicity, a formula for the period error of these time series is derived: σP = σT (12/(N³ - N))^(1/2), where σP is the period error, σT the timing error of a single measurement, and N the number of measurements. Compared to the iterative method for period error estimation by Mighell & Plavchan (2013), this much simpler formula leads to smaller period errors, whose correctness has been verified through simulations. For the prediction of times of future periodic events, the usual linear ephemerides, where epoch errors are quoted for the first time measurement, are prone to overestimating the error of that prediction. This may be avoided by a correction for the duration of the time series. An alternative is the derivation of ephemerides whose reference epoch and epoch error are given for the centre of the time series. For long continuous or near-continuous time series whose acquisition is completed, such central epochs should be the preferred way of quoting linear ephemerides. While this work was motivated by the analysis of eclipse timing measurements in space-based light curves, it should be applicable to any other problem involving an uninterrupted sequence of discrete timings for which a zero point, a constant period, and the associated errors must be determined.
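
    The paper's central formula lends itself to a direct numerical check; the sketch below evaluates it for arbitrary example numbers (1000 timings, 30 s timing error), which are not taken from the paper.

      import math

      def period_error(sigma_t: float, n: int) -> float:
          # sigma_P = sigma_T * sqrt(12 / (N**3 - N)): the least-squares
          # period error for N continuous timings of individual error sigma_T.
          return sigma_t * math.sqrt(12.0 / (n**3 - n))

      # Example: 1000 consecutive transit timings, each measured to 30 s.
      print(period_error(30.0, 1000))  # ~0.0033 s period error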

  12. The effect of proficiency level on measurement error of range of motion

    PubMed Central

    Akizuki, Kazunori; Yamaguchi, Kazuto; Morita, Yoshiyuki; Ohashi, Yukari

    2016-01-01

    [Purpose] The aims of this study were to evaluate the type and extent of error in the measurement of range of motion and to evaluate the effect of evaluators' proficiency level on measurement error. [Subjects and Methods] The participants were 45 university students, in different years of their physical therapy education, and 21 physical therapists with up to three years of clinical experience in a general hospital. Range of motion of right knee flexion was measured using a universal goniometer. An electrogoniometer attached to the right knee and hidden from the view of the participants was used as the criterion to evaluate error in the universal goniometer measurements. The type and magnitude of error were evaluated using the Bland-Altman method. [Results] Measurements with the universal goniometer were not influenced by systematic bias. The extent of random error decreased as the level of proficiency and clinical experience increased. [Conclusion] Measurements of range of motion obtained using a universal goniometer are influenced by random error, whose extent depends on proficiency. Therefore, increasing the amount of practice would be an effective strategy for improving the accuracy of range of motion measurements. PMID:27799712
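
    The Bland-Altman quantities used in the study, systematic bias and the limits of agreement that bound random error, reduce to a short computation. The paired angles below are made-up values, not the study data.

      import numpy as np

      def bland_altman(a, b):
          # Mean difference (systematic bias) and 95% limits of agreement.
          diff = np.asarray(a, float) - np.asarray(b, float)
          bias = diff.mean()
          sd = diff.std(ddof=1)
          return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

      # Hypothetical paired knee-flexion angles (degrees): universal goniometer
      # versus the hidden electrogoniometer criterion.
      gonio = [118.0, 125.0, 131.0, 122.0, 128.0]
      electro = [120.0, 124.0, 133.0, 121.0, 130.0]
      print(bland_altman(gonio, electro))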

  13. Tilt error in cryospheric surface radiation measurements at high latitudes: a model study

    NASA Astrophysics Data System (ADS)

    Bogren, Wiley Steven; Faulkner Burkhart, John; Kylling, Arve

    2016-03-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can introduce up to 2.7, 8.1, and 13.5% error, respectively, into the measured irradiance, and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo. Simulations including a cloud layer demonstrate decreasing tilt error with increasing cloud optical depth.
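
    For the direct component, which dominates the total, the worst-case effect of a tilt in the solar plane follows from the cosine response alone. The bound below ignores the diffuse contribution included in the paper's simulations, so it slightly overstates the quoted totals.

      import math

      def direct_tilt_error(zenith_deg: float, tilt_deg: float) -> float:
          # Fractional error in the direct irradiance for a cosine-response
          # sensor tilted toward the sun by tilt_deg at the given solar zenith.
          true = math.cos(math.radians(zenith_deg))
          tilted = math.cos(math.radians(zenith_deg - tilt_deg))
          return tilted / true - 1.0

      for tilt in (1.0, 3.0, 5.0):
          # ~3.0%, ~8.9%, ~14.7% at a 60 degree solar zenith angle
          print(tilt, 100.0 * direct_tilt_error(60.0, tilt))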

  14. Microprocessor instruments for measuring nonlinear distortions; algorithms for digital processing of the measurement signal and an estimate of the errors

    SciTech Connect

    Mints, M.Ya.; Chinkov, V.N.

    1995-09-01

    Rational algorithms for measuring the harmonic coefficient in microprocessor instruments for nonlinear distortion measurement, based on digital processing of the codes of the instantaneous values of the signal under investigation, are described, and the errors of such instruments are derived.

  15. Ambient Temperature Changes and the Impact to Time Measurement Error

    NASA Astrophysics Data System (ADS)

    Ogrizovic, V.; Gucevic, J.; Delcev, S.

    2012-12-01

    Measurements in geodetic astronomy are mainly performed outdoors at night, when the temperature often decreases very quickly. Time-keeping during a measuring session is provided by collecting UTC time ticks from a GPS receiver and transferring them to a laptop computer. An interrupt handler routine processes the received UTC impulses in real time and calculates the clock parameters. The characteristics of the computer's quartz clock are influenced by temperature changes in the environment. We exposed the laptop to different environmental temperature conditions and calculated the clock parameters for each environmental model. The results show that a laptop used for time-keeping in outdoor measurements should be kept in a stable temperature environment, at temperatures near 20 °C.

  16. Water Accounting Plus (WA+) - a water accounting procedure for complex river basins based on satellite measurements

    NASA Astrophysics Data System (ADS)

    Karimi, P.; Bastiaanssen, W. G. M.; Molden, D.

    2013-07-01

    Coping with water scarcity and growing competition for water among different sectors requires proper water management strategies and decision processes. A prerequisite is a clear understanding of the basin's hydrological processes, of manageable and unmanageable water flows, of the interaction with land use, and of opportunities to mitigate the negative effects and increase the benefits of water depletion on society. Currently, water professionals do not have a common framework that links depletion to user groups of water and their benefits. The absence of a standard hydrological and water management summary causes confusion and wrong decisions, and the non-availability of water flow data is one of the underlying reasons why operational water accounting systems for river basins are not in place. In this paper, we introduce Water Accounting Plus (WA+), a new framework designed to provide explicit spatial information on water depletion and net withdrawal processes in complex river basins. The influence of land use and landscape evapotranspiration on the water cycle is described explicitly by defining land use groups with common characteristics. WA+ presents four sheets: (i) a resource base sheet, (ii) an evapotranspiration sheet, (iii) a productivity sheet, and (iv) a withdrawal sheet. Every sheet encompasses a set of indicators that summarise the overall water resources situation. The impact of external influences (e.g., climate change) and internal influences (e.g., infrastructure building) can be estimated by studying the changes in these WA+ indicators. Satellite measurements can be used to acquire much of the required data, but they are not a precondition for implementing the WA+ framework; data from hydrological models and water allocation models can also be used as inputs.

  17. On measurements and their quality: Paper 2: Random measurement error and the power of statistical tests.

    PubMed

    Beckstead, Jason W

    2013-10-01

    This is the second in a short series of papers on measurement theory and practice with particular relevance to intervention research in nursing, midwifery, and healthcare. This paper begins with an illustration of how random measurement error decreases the power of statistical tests and a review of the roles of sample size and effect size in hypothesis testing. A simple formula is presented and discussed for calculating sample size during the planning stages of intervention studies. Finally, an approach for incorporating reliability estimates into a priori power analyses is introduced and illustrated with a practical example. The approach permits researchers to compare alternative study designs, in terms of their statistical power. An SPSS program is provided to facilitate this approach and to assist researchers in making optimal decisions when choosing among alternative study designs.
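
    One standard way to fold reliability into an a priori power analysis, consistent with the classical-test-theory attenuation this paper builds on, is to shrink the true effect size by the square root of the reliability before applying the usual sample-size formula. The normal-approximation version below is a textbook sketch, not necessarily the formula or SPSS program presented in the paper.

      from statistics import NormalDist

      def n_per_group(d_true, reliability, alpha=0.05, power=0.80):
          # Random measurement error attenuates the standardized effect by
          # sqrt(reliability); then apply the two-sample normal approximation.
          d_obs = d_true * reliability ** 0.5
          z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
          return 2 * (z / d_obs) ** 2

      print(n_per_group(0.5, 1.0))   # ~63 per group with perfect reliability
      print(n_per_group(0.5, 0.7))   # ~90 per group when reliability is 0.70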

  18. Improving surface energy balance closure by reducing errors in soil heat flux measurement

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The flux plate method is the most commonly employed method for measuring soil heat flux (G) in surface energy balance studies. Although relatively simple to use, the flux plate method is susceptible to significant errors. Two of the most common errors are heat flow divergence around the plate and fa...

  19. Variability in Reliability Coefficients and the Standard Error of Measurement from School District to District.

    ERIC Educational Resources Information Center

    Feldt, Leonard S.; Qualls, Audrey L.

    1999-01-01

    Examined the stability of the standard error of measurement and the relationship between the reliability coefficient and the variance of both true scores and error scores for 170 school districts in a state. As expected, reliability coefficients varied as a function of group variability, but the variation in split-half coefficients from school to…

  20. Measuring Articulatory Error Consistency in Children with Developmental Apraxia of Speech

    ERIC Educational Resources Information Center

    Betz, Stacy K.; Stoel-Gammon, Carol

    2005-01-01

    Error inconsistency is often cited as a characteristic of children with speech disorders, particularly developmental apraxia of speech (DAS); however, few researchers operationally define error inconsistency and the definitions that do exist are not standardized across studies. This study proposes three formulas for measuring various aspects of…

  1. Stray light errors in spectral colour measurement and two rejection methods

    NASA Astrophysics Data System (ADS)

    Shen, Haiping; Pan, Jiangen; Feng, Huajun; Liu, Muqing

    2009-02-01

    The measurement errors caused by the stray light of array spectrometers in the spectral colour measurement of light-emitting diodes (LEDs) are studied. A stray light correction method and a filter-wheel stray light blocking technique are compared, both by simulation and by experiment. The results show that stray light may cause unacceptable measurement errors. Both the correction method and the filter-wheel technique are very effective in correcting the stray light errors for all the LEDs, although the correction method needs infrared filters for white LEDs. An optimized design of the filter wheel is given.

  2. Measuring the impact of character recognition errors on downstream text analysis

    NASA Astrophysics Data System (ADS)

    Lopresti, Daniel

    2008-01-01

    Noise presents a serious challenge in optical character recognition, as well as in the downstream applications that make use of its outputs as inputs. In this paper, we describe a paradigm for measuring the impact of recognition errors on the stages of a standard text analysis pipeline: sentence boundary detection, tokenization, and part-of-speech tagging. Employing a hierarchical methodology based on approximate string matching for classifying errors, their cascading effects as they travel through the pipeline are isolated and analyzed. We present experimental results based on injecting single errors into a large corpus of test documents to study their varying impacts depending on the nature of the error and the character(s) involved. While most such errors are found to be localized, in the worst case some can have an amplifying effect that extends well beyond the site of the original error, thereby degrading the performance of the end-to-end system.

  3. Hedonic price models with omitted variables and measurement errors: a constrained autoregression-structural equation modeling approach with application to urban Indonesia

    NASA Astrophysics Data System (ADS)

    Suparman, Yusep; Folmer, Henk; Oud, Johan H. L.

    2013-04-01

    Omitted variables and measurement errors in explanatory variables frequently occur in hedonic price models. Ignoring these problems leads to biased estimators. In this paper, we develop a constrained autoregression-structural equation model (ASEM) to handle both types of problems. Standard panel data models to handle omitted variables bias are based on the assumption that the omitted variables are time-invariant. ASEM allows handling of both time-varying and time-invariant omitted variables by constrained autoregression. In the case of measurement error, standard approaches require additional external information which is usually difficult to obtain. ASEM exploits the fact that panel data are repeatedly measured which allows decomposing the variance of a variable into the true variance and the variance due to measurement error. We apply ASEM to estimate a hedonic housing model for urban Indonesia. To get insight into the consequences of measurement error and omitted variables, we compare the ASEM estimates with the outcomes of (1) a standard SEM, which does not account for omitted variables, (2) a constrained autoregression model, which does not account for measurement error, and (3) a fixed effects hedonic model, which ignores measurement error and time-varying omitted variables. The differences between the ASEM estimates and the outcomes of the three alternative approaches are substantial.

  5. "Deep" language disorders in nonfluent progressive Aphasia: an evaluation of the "summation" account of semantic errors across language production tasks.

    PubMed

    Tree, Jeremy J; Kay, Janice; Perfect, Timothy J

    2005-09-01

    This study focuses on the pattern of impairments seen in a new case KT, diagnosed with nonfluent progressive aphasia (NFPA), a degenerative disorder of language production. A systematic examination of KT's performance on a wide range of language production tasks (i.e., repetition, reading, spelling, spoken and written naming) determined that both written naming and repetition were better preserved than reading, spelling-to-dictation, and spoken naming. Closer examination of error performance in both reading aloud and written production revealed evidence of "deep dyslexia" and "deep dysgraphia" that has not been documented in previous cases of NFPA, and as such the present case represents the first detailed case study of this pattern of impairment in the context of progressive aphasia. An evaluation and discussion of such deep language impairment disorders in the context of other cases of NFPA has been undertaken with reference to the summation hypothesis proposed by Hillis and Caramazza (1991, 1995). It is suggested that as a principle that holds across all language production tasks, this account can encompass patterns of deep disorders thus far reported in NFPA, although other theoretical hypotheses cannot be excluded. PMID:21038271

  6. Specification test for Markov models with measurement errors*

    PubMed Central

    Kim, Seonjin; Zhao, Zhibiao

    2014-01-01

    Most existing works on specification testing assume that we have direct observations from the model of interest. We study specification testing for Markov models based on contaminated observations. The evolving model dynamics of the unobservable Markov chain is implicitly coded into the conditional distribution of the observed process. To test whether the underlying Markov chain follows a parametric model, we propose measuring the deviation between nonparametric and parametric estimates of conditional regression functions of the observed process. Specifically, we construct a nonparametric simultaneous confidence band for conditional regression functions and check whether the parametric estimate is contained within the band. PMID:25346552

  7. Tilt Error in Cryospheric Surface Radiation Measurements at High Latitudes: A Model Study

    NASA Astrophysics Data System (ADS)

    Bogren, W.; Kylling, A.; Burkhart, J. F.

    2015-12-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 nm to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can introduce up to 2.6, 7.7, and 12.8% error, respectively, into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo.

  8. Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error

    NASA Astrophysics Data System (ADS)

    Miller, Austin

    In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.

  9. [Value of Pressure Measurements: Methods and Sources of Errors].

    PubMed

    Rüfer, F

    2016-07-01

    Tonometry is still an essential component of diagnostic testing in glaucoma. Functional and morphological investigations can provide very detailed information about the extent of glaucomatous damage. They are useful in the early detection of glaucoma damage; when damage is manifest, they are useful in estimating the rate of progression in follow-up studies. In contrast, tonometric procedures are much less precise and sensitive and provide no information at all about the extent of glaucoma damage. However, they often provide the first evidence that glaucoma may be present, and they are the decisive parameter in guiding surgical or medical pressure-lowering treatment, as the reduction of intraocular pressure (IOP) is still the most common approach to treating glaucoma, in spite of our awareness of numerous other risk factors. There is no reason to doubt that reducing IOP is an effective therapy in many forms of glaucoma, as this has been demonstrated in numerous large epidemiological studies. Tonometric procedures have become more precise in recent years. Goldmann applanation tonometry (GAT) and pneumatonometry are widely used, and there are some settings in which the rarer forms of tonometry can be recommended. Procedures for quasi-continuous pressure measurement are emerging and may, in the future, replace the current approach of measuring IOP at discrete time points. There are a variety of pitfalls in clinical practice that may lead to misinterpretation and wrong therapeutic decisions, and these must be repeatedly emphasised. PMID:27130978

  10. Generic nonsinusoidal phase error correction for three-dimensional shape measurement using a digital video projector

    SciTech Connect

    Zhang Song; Yau, S.-T

    2007-01-01

    A structured light system using a digital video projector is widely used for 3D shape measurement. However, the nonlinear γ of the projector causes the projected fringe patterns to be nonsinusoidal, which results in phase error and therefore measurement error. It has been shown that, by using a small look-up table (LUT), this type of phase error can be reduced significantly for a three-step phase-shifting algorithm. We prove that this approach is generic for any phase-shifting algorithm. Moreover, we propose a new LUT generation method that analyzes the captured fringe image of a flat board directly. Experiments show that this error compensation algorithm can reduce the phase error by a factor of at least 13.
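
    The correction amounts to indexing a small LUT by the wrapped phase and subtracting the stored gamma-induced error. A minimal sketch follows, using the standard three-step phase formula; building the LUT from a flat-board fringe image, as the paper proposes, is omitted and the LUT is assumed given.

      import numpy as np

      def three_step_phase(i1, i2, i3):
          # Standard three-step phase-shifting formula for fringe images
          # shifted by 2*pi/3 between frames.
          return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

      def correct_phase(phase, lut):
          # Index the LUT by wrapped phase in [-pi, pi] and subtract the
          # stored nonsinusoidal (gamma-induced) phase error.
          idx = ((phase + np.pi) / (2.0 * np.pi) * (lut.size - 1)).astype(int)
          return phase - lut[idx]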

  11. Measurement error associated with surveys of fish abundance in Lake Michigan

    USGS Publications Warehouse

    Krause, Ann E.; Hayes, Daniel B.; Bence, James R.; Madenjian, Charles P.; Stedman, Ralph M.

    2002-01-01

    In fisheries, imprecise measurements in catch data from surveys add uncertainty to the results of fishery stock assessments. The USGS Great Lakes Science Center (GLSC) began surveying the fall fish community of Lake Michigan with bottom trawls in 1962. Measurement error was evaluated at the level of individual tows for nine fish species collected in this survey by applying a measurement-error regression model to replicated trawl data. Estimates of measurement-error variance ranged from 0.37 (deepwater sculpin, Myoxocephalus thompsoni) to 1.23 (alewife, Alosa pseudoharengus) on a logarithmic scale, corresponding to coefficients of variation of 66% to 156%. The estimates appeared to increase with the range of temperature occupied by the fish species. This association may be a result of variability in the fall thermal structure of the lake. The estimates may also be influenced by other factors, such as pelagic behavior and schooling. Measurement error might be reduced by surveying the fish community during other seasons and/or by using additional technologies, such as acoustics. Measurement-error estimates should be considered when interpreting results of assessments that use abundance information from USGS-GLSC surveys of Lake Michigan and could be used if the survey design were altered. This study is the first to report estimates of measurement-error variance associated with this survey.
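
    The correspondence between a log-scale error variance and the quoted coefficients of variation is the standard lognormal relation, which can be verified directly:

      import math

      def lognormal_cv(sigma2: float) -> float:
          # CV implied by an error variance sigma2 on the natural-log scale.
          return math.sqrt(math.exp(sigma2) - 1.0)

      print(lognormal_cv(0.37))  # ~0.67, the ~66% quoted for deepwater sculpin
      print(lognormal_cv(1.23))  # ~1.56, the ~156% quoted for alewife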

  12. Variability and Prediction of Measurement Error in Kolb's Learning Style Inventory Scores: A Reliability Generalization Study.

    ERIC Educational Resources Information Center

    Henson, Robin K.; Hwang, Dae-Yeop

    2002-01-01

    Conducted a reliability generalization study of Kolb's Learning Style Inventory (LSI; D. Kolb, 1976). Results for 34 studies indicate that internal consistency and test-retest reliabilities for LSI scores fluctuate considerably and contribute to deleterious cumulative measurement error. (SLD)

  13. Measurement and Prediction Errors in Body Composition Assessment and the Search for the Perfect Prediction Equation.

    ERIC Educational Resources Information Center

    Katch, Frank I.; Katch, Victor L.

    1980-01-01

    Sources of error in body composition assessment by laboratory and field methods can be found in hydrostatic weighing, residual air volume, skinfolds, and circumferences. Statistical analysis can and should be used in the measurement of body composition. (CJ)

  14. Potential errors in FTIR measurement of oxidation in ultrahigh molecular weight polyethylene implants.

    PubMed

    Shen, F W; Yu, Y J; McKellop, H

    1999-01-01

    Potential sources of error in the use of FTIR to measure the level of oxidation in ultrahigh molecular weight polyethylene acetabular cups were evaluated using cups from a hip simulator wear study with and without artificial aging, as well as cups retrieved from clinically failed hip prostheses. Oxidation was measured as a function of depth below the bearing surface using transmission FTIR on microtomed sections of the cups. To account for the variation of the thickness of the microtomed sections, oxidation was plotted as the ratio of the absorbance of the carbonyl groups to the absorbance of a reference band at 2022 cm-1. Overnight soaking in hexane reduced the apparent levels of oxidation, presumably due to the extraction of absorbed contaminants. In cups with low to moderate levels of oxidation, the reference absorption was relatively independent of the level of oxidation and was linearly proportional to the thickness of the specimens, providing reproducible oxidation ratios. However, the scatter in the reference absorption and in the apparent oxidation ratio increased with increasing levels of oxidation and was greatest for the thickest (400 microm) microtomed sections. The profiles of the oxidation ratios for a given specimen that were plotted by the present study method could be numerically adjusted to coincide with the ratios plotted using the methods of two previous investigators, providing conversion factors that are useful for comparing results among the studies.

  15. Iowa radon leukaemia study: a hierarchical population risk model for spatially correlated exposure measured with error.

    PubMed

    Smith, Brian J; Zhang, Lixun; Field, R William

    2007-11-10

    This paper presents a Bayesian model that allows for the joint prediction of county-average radon levels and estimation of the associated leukaemia risk. The methods are motivated by radon data from an epidemiologic study of residential radon in Iowa that include 2726 outdoor and indoor measurements. Prediction of county-average radon is based on a geostatistical model for the radon data which assumes an underlying continuous spatial process. In the radon model, we account for uncertainties due to incomplete spatial coverage, spatial variability, characteristic differences between homes, and detector measurement error. The predicted radon averages are, in turn, included as a covariate in Poisson models for incident cases of acute lymphocytic (ALL), acute myelogenous (AML), chronic lymphocytic (CLL), and chronic myelogenous (CML) leukaemias reported to the Iowa cancer registry from 1973 to 2002. Since radon and leukaemia risk are modelled simultaneously in our approach, the resulting risk estimates accurately reflect uncertainties in the predicted radon exposure covariate. Posterior mean (95 per cent Bayesian credible interval) estimates of the relative risk associated with a 1 pCi/L increase in radon for ALL, AML, CLL, and CML are 0.91 (0.78-1.03), 1.01 (0.92-1.12), 1.06 (0.96-1.16), and 1.12 (0.98-1.27), respectively. PMID:17373673

  16. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation.

    PubMed

    Li, Tao; Yuan, Gannan; Li, Wang

    2016-03-15

    The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition.
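
    For readers comparing the two estimators, a generic bootstrap particle filter for a one-dimensional toy model is sketched below. This is only a schematic stand-in: the MGWD nonlinear error model and the zero-velocity/zero-position measurement construction are not reproduced.

      import numpy as np

      rng = np.random.default_rng(0)

      def bootstrap_pf(y, n=500, q=0.1, r=0.5):
          # Generic bootstrap particle filter: random-walk state with process
          # noise q, observed with additive Gaussian noise r.
          x = rng.normal(0.0, 1.0, n)                 # initial particle cloud
          est = []
          for yk in y:
              x = x + rng.normal(0.0, q, n)           # propagate (prediction)
              w = np.exp(-0.5 * ((yk - x) / r) ** 2)  # observation likelihood
              w += 1e-300                             # guard against all-zero
              w /= w.sum()
              est.append(w @ x)                       # posterior-mean estimate
              x = x[rng.choice(n, n, p=w)]            # multinomial resampling
          return np.array(est)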

  19. On the errors in measuring the particle density by the light absorption method

    SciTech Connect

    Ochkin, V. N.

    2015-04-15

    The accuracy of absorption measurements of the density of particles in a given quantum state as a function of the light absorption coefficient is analyzed. Errors caused by the finite accuracy in measuring the intensity of the light passing through a medium in the presence of different types of noise in the recorded signal are considered. Optimal values of the absorption coefficient and the factors capable of multiplying errors when deviating from these values are determined.
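
    For the common special case of additive noise of fixed standard deviation on the transmitted intensity, the relative error in a Beer-Lambert density estimate N ∝ ln(I0/I) is proportional to exp(A)/A, with A the natural-log absorbance, and is minimized at A = 1 (transmittance 1/e). The paper considers several noise types, of which this sketch covers only one.

      import numpy as np

      # Error multiplier sigma_N / N (up to a constant) as a function of
      # absorbance A = ln(I0/I), for fixed additive noise on I.
      A = np.linspace(0.1, 4.0, 400)
      multiplier = np.exp(A) / A
      print(A[np.argmin(multiplier)])  # minimum near A = 1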

  20. Interferometric measurement of surface shape by wavelength tuning suppressing random intensity error.

    PubMed

    Kim, Yangjin; Hibino, Kenichi; Sugita, Naohiko; Mitsuishi, Mamoru

    2016-08-10

    In this research, the susceptibility of the phase-shifting algorithms to the random intensity error is formulated and estimated. The susceptibility of the random intensity error of conventional windowed phase-shifting algorithms is discussed, and the 7N-6 phase-shifting algorithm is developed to minimize the random intensity error using the characteristic polynomial theory. Finally, the surface shape of the transparent wedge plate is measured using a wavelength-tuning Fizeau interferometer and the 7N-6 algorithm. The experimental results indicate that the surface shape measurement accuracy for the transparent plate is 2.5 nm.

  2. Comparison of Transmission Error Predictions with Noise Measurements for Several Spur and Helical Gears

    NASA Technical Reports Server (NTRS)

    Houser, Donald R.; Oswald, Fred B.; Valco, Mark J.; Drago, Raymond J.; Lenski, Joseph W., Jr.

    1994-01-01

    Measured sound power data from eight different spur, single-helical, and double-helical gear designs are compared with predictions of transmission error by the Load Distribution Program. The sound power data were taken from the recent Army-funded Advanced Rotorcraft Transmission project; tests were conducted in the NASA gear noise rig. Both the test data and the transmission error predictions are compared for each harmonic of mesh frequency at several operating conditions. In general, the transmission error predictions compare favorably with the measured noise levels.

  3. Error correction of the DEA (Digital Electronic Automation) Coordinate Measuring Machines at LLNL

    SciTech Connect

    Carter, D.L.

    1989-11-14

    LLNL uses Coordinate Measuring Machines (CMM) manufactured by Digital Electronic Automation, Inc. (DEA) to provide in-process and final measurements of various components as they are assembled and aligned for later experimentation. The machines achieve their accuracy by using real-time passive error compensation to correct for all 21 parametric error components. LLNL does its own parametric testing and downloads the error correction data into the CMM's computer. This paper describes the theory, the parametric tests, the error correction data (the "map"), and the final checkout of the machines. 4 refs., 20 figs., 3 tabs.

  4. ²³⁵U accountability measurements on small samples

    SciTech Connect

    Sigg, R.A.

    1991-01-01

    Savannah River Site (SRS) is improving uranium accountability at its fuel fabrication facility through measurements of ²³⁵U in samples taken from uranium/aluminum alloy melts. Since area personnel desired a method that would minimize mixed waste, low-volume samples are prepared from dissolutions of production melt grab samples. The solution assay monitor (SAM) analyzes for ²³⁵U gamma-rays by using a high-efficiency germanium well detector. The detector's high counting efficiency permits analysis of small samples (7 mL) from these dissolutions, and the counting geometry minimizes sample geometry uncertainties. Counting each sample for thirty minutes delivers excellent precision across the calibration range of 3 to 12 g uranium per liter. As shown by interlaboratory calibration, the gamma-ray spectrometer provides overall (counting, calibration, geometric, ...) uncertainties of less than 0.7% (one sigma). Gamma-rays from a reference source, used to provide live-time corrections, are collimated to avoid absorption by the sample in the detector well. Since sample masses are small, minor self-attenuation corrections are calculated from chemical composition data rather than determined in separate transmission measurements. This avoids employing short-lived transmission sources for self-attenuation corrections.

  6. [Potential errors in measuring tree transpiration based on thermal dissipation method].

    PubMed

    Liu, Qing-Xin; Meng, Ping; Zhang, Jin-Song; Gao, Jun; Huang, Hui; Sun, Shou-Jia; Lu, Sen

    2011-12-01

    Transpiration is a major component of vegetation evapotranspiration and a core topic in the study of plant water physiological ecology. Its measurement methods have attracted extensive attention, among which thermal dissipation is considered an optimal method for measuring tree transpiration. Numerous studies have shown that the thermal dissipation method is relatively accurate in measuring individual tree transpiration and stand-scale water consumption. However, potential errors exist between the true values and the measurements obtained during the measurement process. In this paper, the potential errors of the thermal dissipation method in measuring sap flux density and in determining the temperature difference, from the single tree to the stand scale, are reviewed, and research prospects on the potential errors of the thermal dissipation method in China are discussed. Corresponding solutions are also proposed.

  7. Error analysis of cine phase contrast MRI velocity measurements used for strain calculation.

    PubMed

    Jensen, Elisabeth R; Morrow, Duane A; Felmlee, Joel P; Odegard, Gregory M; Kaufman, Kenton R

    2015-01-01

    Cine Phase Contrast (CPC) MRI offers unique insight into localized skeletal muscle behavior by providing the ability to quantify muscle strain distribution during cyclic motion. Muscle strain is obtained by temporally integrating and spatially differentiating CPC-encoded velocity. The aim of this study was to quantify CPC measurement accuracy and precision and to describe error propagation into displacement and strain. Using an MRI-compatible jig to move a B-gel phantom within a 1.5 T MRI bore, CPC-encoded velocities were collected. The three orthogonal encoding gradients (through plane, frequency, and phase) were evaluated independently in post-processing. Two systematic error types were corrected: eddy current-induced bias and calibration-type error. Measurement accuracy and precision were quantified before and after removal of systematic error. Accuracy of the through plane- and frequency-encoded data was within 0.4 mm/s after removal of systematic error, a 70% improvement over the raw data. Corrected phase-encoded data accuracy was within 1.3 mm/s. Measured random error was between 1 and 1.4 mm/s, which followed the theoretical prediction. Propagation of random measurement error into displacement and strain was found to depend on the number of tracked time segments, time segment duration, mesh size, and dimensional order. To verify this, theoretical predictions were compared to experimentally calculated displacement and strain error. For the parameters tested, experimental and theoretical results aligned well. Random strain error approximately halved with a two-fold mesh size increase, as predicted. Displacement and strain accuracy were within 2.6 mm and 3.3%, respectively. These results can be used to predict the accuracy and precision of displacement and strain in user-specific applications.

  8. Statistical and systematic errors in redshift-space distortion measurements from large surveys

    NASA Astrophysics Data System (ADS)

    Bianchi, D.; Guzzo, L.; Branchini, E.; Majerotto, E.; de la Torre, S.; Marulli, F.; Moscardini, L.; Angulo, R. E.

    2012-12-01

    We investigate the impact of statistical and systematic errors on measurements of linear redshift-space distortions (RSD) in future cosmological surveys by analysing large catalogues of dark matter haloes from the baryonic acoustic oscillation simulations at the Institute for Computational Cosmology. These allow us to estimate the dependence of errors on typical survey properties, such as volume, galaxy density and mass (i.e. bias factor) of the adopted tracer. We find that measures of the specific growth rate β = f/b using the Hamilton/Kaiser harmonic expansion of the redshift-space correlation function ξ(r_p, π) on scales larger than 3 h⁻¹ Mpc are typically underestimated by up to 10 per cent for galaxy-sized haloes. This is significantly larger than the corresponding statistical errors, which amount to a few per cent, indicating the importance of non-linear improvements to the Kaiser model for obtaining accurate measurements of the growth rate. The systematic error shows a diminishing trend with increasing bias value (i.e. mass) of the haloes considered. We compare the amplitude and trends of statistical errors as a function of survey parameters to predictions obtained with the Fisher information matrix technique, which is what is usually adopted to produce RSD forecasts, based on the Feldman-Kaiser-Peacock prescription for the errors on the power spectrum. We show that this produces parameter errors fairly similar to the standard deviations from the halo catalogues, provided it is applied to strictly linear scales in Fourier space (k < 0.2 h Mpc⁻¹). Finally, we combine our measurements to define and calibrate an accurate scaling formula for the relative error on β as a function of the same parameters, which closely matches the simulation results in all explored regimes. This provides a handy and plausibly more realistic alternative to the Fisher matrix approach for quickly and accurately predicting the statistical errors on RSD expected from future surveys.

  10. A scale for measuring the severity of diagnostic errors in accident and emergency departments.

    PubMed Central

    Guly, H R

    1997-01-01

    OBJECTIVE: To design and test a simple scale for measuring the severity of diagnostic errors occurring in accident and emergency (A&E) departments. METHODS: Empirical design of a scale which indicates the severity of errors on a scale of 1 to 7. It is obtained by adding two scores which indicate the additional treatment the patient would have received and the follow up which would have been organised if the correct diagnosis had been made initially. RESULTS: The misdiagnosis severity score (MSS) revealed 166 diagnostic errors in injuries treated in an A&E department over one year. The scoring system allowed the more significant errors to be separated from the less significant ones. CONCLUSIONS: The MSS proved useful in describing the errors made in an A&E department. PMID:9315928

  11. Displacement sensor with controlled measuring force and its error analysis and precision verification

    NASA Astrophysics Data System (ADS)

    Yang, Liangen; Wang, Xuanze; Lv, Wei

    2011-05-01

    A displacement sensor with controlled measuring force, together with its error analysis and precision verification, is discussed in this paper. The displacement sensor consists of a high-resolution electric induction transducer and a voice coil motor (VCM). The measuring principles, the structure, the method of enlarging the measuring range, and the signal processing of the sensor are discussed. The main error sources, such as parallelism error and inclination of the framework caused by unequal lengths of the leaf springs, rigidity of the measuring rods, shape error of the stylus, friction between the iron core and other parts, damping of the leaf springs, variation of voltage, and the linearity, resolution, and stability of the induction transducer, are analyzed. A measuring system for surface topography with a large measuring range is constructed based on the displacement sensor and a 2D moving platform, and the measuring precision and stability of the system are verified. The measuring force of the sensor during surface topography measurement can be controlled at the μN level and hardly changes. The system has been used in measurements of bearing balls, bullet marks, etc. It has a measuring range of up to 2 mm and nm-level precision.

  13. The estimation error covariance matrix for the ideal state reconstructor with measurement noise

    NASA Technical Reports Server (NTRS)

    Polites, Michael E.

    1988-01-01

    A general expression is derived for the state estimation error covariance matrix for the Ideal State Reconstructor when the input measurements are corrupted by measurement noise. An example is presented which shows that the more measurements used in estimating the state at a given time, the better the estimator.

  14. Adjustment of measurements with multiplicative errors: error analysis, estimates of the variance of unit weight, and effect on volume estimation from LiDAR-type digital elevation models.

    PubMed

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2014-01-10

    Modern observation technology has verified that measurement errors can be proportional to the true values of the measured quantities, as with GPS and VLBI baselines and with LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, of the adjusted measurements, and of the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative measurement errors on DEM construction and on the estimate of landslide mass volume from the constructed DEM.
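
    A schematic version of one least-squares adjustment for this model class: because each observation's variance scales with the square of its true value, the weights can be refined iteratively from the current fit. This illustrates the multiplicative error structure, not the authors' exact estimators.

      import numpy as np

      def multiplicative_wls(A, y, iters=5):
          # Model y_i = (a_i . x) * (1 + e_i): start from ordinary LS, then
          # reweight by 1 / (fitted value)**2 and re-solve.
          x = np.linalg.lstsq(A, y, rcond=None)[0]
          for _ in range(iters):
              f = np.maximum(A @ x, 1e-12)      # current fitted values
              w = np.sqrt(1.0 / f**2)           # square roots of the weights
              x = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)[0]
          return x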

  17. Alternative methods of accounting for underreporting and overreporting when measuring dietary intake-obesity relations.

    PubMed

    Mendez, Michelle A; Popkin, Barry M; Buckland, Genevieve; Schroder, Helmut; Amiano, Pilar; Barricarte, Aurelio; Huerta, José-María; Quirós, José R; Sánchez, María-José; González, Carlos A

    2011-02-15

    Misreporting characterized by the reporting of implausible energy intakes may undermine the valid estimation of diet-disease relations, but the methods to best identify and account for misreporting are unknown. The present study compared how alternate approaches affected associations between selected dietary factors and body mass index (BMI) by using data from the European Prospective Investigation Into Cancer and Nutrition-Spain. A total of 24,332 women and 15,061 men 29-65 years of age recruited from 1992 to 1996 for whom measured height and weight and validated diet history data were available were included. Misreporters were identified on the basis of disparities between reported energy intakes and estimated requirements calculated using the original Goldberg method and 2 alternatives: one that substituted basal metabolic rate equations that are more valid at higher BMIs and another that used doubly labeled water-predicted total energy expenditure equations. Compared with results obtained using the original method, underreporting was considerably lower and overreporting higher with alternative methods, which were highly concordant. Accounting for misreporters with all methods yielded diet-BMI relations that were more consistent with expectations; alternative methods often strengthened associations. For example, among women, multivariable-adjusted differences in BMI for the highest versus lowest vegetable intake tertile (β = 0.37 (standard error, 0.07)) were neutral after adjusting with the original method (β = 0.01 (standard error, 0.07)) and negative using the predicted total energy expenditure method with stringent cutoffs (β = -0.15 (standard error, 0.07)). Alternative methods may yield more valid associations between diet and obesity-related outcomes. PMID:21242302
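
    The screening step can be sketched in a few lines; the EI:BMR plausibility test below is a simplified Goldberg-type stand-in with an assumed physical activity level and cutoff spread, not the exact equations or cutoffs of the study:

        import numpy as np

        def misreporting_flags(energy_intake_kcal, bmr_kcal, pal=1.55, spread=1.4):
            """Flag implausible reports from the ratio of reported energy intake
            to basal metabolic rate; spread stands in for the Goldberg
            confidence limits."""
            ratio = np.asarray(energy_intake_kcal) / np.asarray(bmr_kcal)
            return np.where(ratio < pal / spread, "under",
                   np.where(ratio > pal * spread, "over", "plausible"))

        print(misreporting_flags([1200, 2500, 4800], [1500, 1450, 1550]))
        # -> ['under' 'plausible' 'over']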

  18. Continuous glucose monitoring in newborn infants: how do errors in calibration measurements affect detected hypoglycemia?

    PubMed

    Thomas, Felicity; Signal, Mathew; Harris, Deborah L; Weston, Philip J; Harding, Jane E; Shaw, Geoffrey M; Chase, J Geoffrey

    2014-05-01

    Neonatal hypoglycemia is common and can cause serious brain injury. Continuous glucose monitoring (CGM) could improve hypoglycemia detection, while reducing blood glucose (BG) measurements. Calibration algorithms use BG measurements to convert sensor signals into CGM data. Thus, inaccuracies in calibration BG measurements directly affect CGM values and any metrics calculated from them. The aim was to quantify the effect of timing delays and calibration BG measurement errors on hypoglycemia metrics in newborn infants. Data from 155 babies were used. Two timing and 3 BG meter error models (Abbott Optium Xceed, Roche Accu-Chek Inform II, Nova Statstrip) were created using empirical data. Monte-Carlo methods were employed, and each simulation was run 1000 times. Each set of patient data in each simulation had randomly selected timing and/or measurement error added to BG measurements before CGM data were calibrated. The number of hypoglycemic events, duration of hypoglycemia, and hypoglycemic index were then calculated using the CGM data and compared to baseline values. Timing error alone had little effect on hypoglycemia metrics, but measurement error caused substantial variation. Abbott results underreported the number of hypoglycemic events by up to 8 and Roche overreported by up to 4 where the original number reported was 2. Nova results were closest to baseline. Similar trends were observed in the other hypoglycemia metrics. Errors in blood glucose concentration measurements used for calibration of CGM devices can have a clinically important impact on detection of hypoglycemia. If CGM devices are going to be used for assessing hypoglycemia, it is important to understand the impact of these errors on CGM data.
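
    The perturbation step of such a Monte Carlo analysis can be sketched as follows; the glucose trace, calibration times, two-point linear calibration and Gaussian meter error are invented stand-ins for the empirical error models used in the paper:

        import numpy as np

        rng = np.random.default_rng(0)

        def count_hypo_events(glucose, threshold=2.6, min_len=3):
            """Count runs of at least min_len consecutive samples below threshold."""
            events, run = 0, 0
            for below in glucose < threshold:
                run = run + 1 if below else 0
                if run == min_len:
                    events += 1
            return events

        def hypo_counts_with_meter_error(signal, calib_idx, calib_bg, sd_meter, n_sim=1000):
            """Recalibrate with noisy calibration BG values and recount events."""
            counts = np.empty(n_sim, dtype=int)
            for i in range(n_sim):
                noisy_bg = calib_bg + sd_meter * rng.standard_normal(calib_bg.size)
                slope, intercept = np.polyfit(signal[calib_idx], noisy_bg, 1)
                counts[i] = count_hypo_events(slope * signal + intercept)
            return counts

        # Toy 24 h trace (5-min samples) with one dip below the threshold
        t = np.linspace(0.0, 24.0, 288)
        signal = 4.0 - 1.8 * np.exp(-((t - 12.0) / 1.5) ** 2)
        calib_idx = np.array([30, 120, 210])
        counts = hypo_counts_with_meter_error(signal, calib_idx, signal[calib_idx], 0.2)
        print(np.bincount(counts))    # spread of detected event counts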

  19. Task committee on experimental uncertainty and measurement errors in hydraulic engineering: An update

    USGS Publications Warehouse

    Wahlin, B.; Wahl, T.; Gonzalez-Castro, J. A.; Fulford, J.; Robeson, M.

    2005-01-01

    As part of their long range goals for disseminating information on measurement techniques, instrumentation, and experimentation in the field of hydraulics, the Technical Committee on Hydraulic Measurements and Experimentation formed the Task Committee on Experimental Uncertainty and Measurement Errors in Hydraulic Engineering in January 2003. The overall mission of this Task Committee is to provide information and guidance on the current practices used for describing and quantifying measurement errors and experimental uncertainty in hydraulic engineering and experimental hydraulics. The final goal of the Task Committee on Experimental Uncertainty and Measurement Errors in Hydraulic Engineering is to produce a report on the subject that will cover: (1) sources of error in hydraulic measurements, (2) types of experimental uncertainty, (3) procedures for quantifying error and uncertainty, and (4) special practical applications that range from uncertainty analysis for planning an experiment to estimating uncertainty in flow monitoring at gaging sites and hydraulic structures. Currently, the Task Committee has adopted the first order variance estimation method outlined by Coleman and Steele as the basic methodology to follow when assessing the uncertainty in hydraulic measurements. In addition, the Task Committee has begun to develop its report on uncertainty in hydraulic engineering. This paper is intended as an update on the Task Committee's overall progress. Copyright ASCE 2005.
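
    The first-order variance estimation method of Coleman and Steele mentioned above is Taylor-series propagation, u_r^2 = Σ (∂r/∂x_i)^2 u_i^2. A minimal sketch with numerically estimated sensitivities (the weir-discharge example and all numbers are assumptions for illustration):

        import numpy as np

        def combined_uncertainty(f, x, u):
            """First-order uncertainty propagation with central-difference partials."""
            x, u = np.asarray(x, float), np.asarray(u, float)
            grads = np.empty_like(x)
            for i in range(x.size):
                h = 1e-6 * max(abs(x[i]), 1.0)
                xp, xm = x.copy(), x.copy()
                xp[i] += h
                xm[i] -= h
                grads[i] = (f(xp) - f(xm)) / (2.0 * h)
            return np.sqrt(np.sum((grads * u) ** 2))

        # Rectangular-weir-style discharge Q = Cd * b * h**1.5 (illustrative form)
        q = lambda p: p[0] * p[1] * p[2] ** 1.5
        print(combined_uncertainty(q, x=[1.8, 2.0, 0.30], u=[0.05, 0.01, 0.005]))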

  20. Measurement of Turbulence with Acoustic Doppler Current Profilers - Sources of Error and Laboratory Results

    USGS Publications Warehouse

    Nystrom, E.A.; Oberg, K.A.; Rehmann, C.R.; ,

    2002-01-01

    Acoustic Doppler current profilers (ADCPs) provide a promising method for measuring surface-water turbulence because they can provide data from a large spatial range in a relatively short time with relative ease. Some potential sources of errors in turbulence measurements made with ADCPs include inaccuracy of Doppler-shift measurements, poor temporal and spatial measurement resolution, and inaccuracy of multi-dimensional velocities resolved from one-dimensional velocities measured at separate locations. Results from laboratory measurements of mean velocity and turbulence statistics made with two pulse-coherent ADCPs in 0.87 meters of water are used to illustrate several of the inherent sources of error in ADCP turbulence measurements. Results show that processing algorithms and beam configurations have important effects on turbulence measurements. ADCPs can provide reasonable estimates of many turbulence parameters; however, the accuracy of turbulence measurements made with commercially available ADCPs is often poor in comparison to standard measurement techniques.

  1. Theoretical and Experimental Errors for In Situ Measurements of Plant Water Potential 1

    PubMed Central

    Shackel, Kenneth A.

    1984-01-01

    Errors in psychrometrically determined values of leaf water potential caused by tissue resistance to water vapor exchange and by lack of thermal equilibrium were evaluated using commercial in situ psychrometers (Wescor Inc., Logan, UT) on leaves of Tradescantia virginiana (L.). Theoretical errors in the dewpoint method of operation for these sensors were demonstrated. After correction for these errors, in situ measurements of leaf water potential indicated substantial errors caused by tissue resistance to water vapor exchange (4 to 6% reduction in apparent water potential per second of cooling time used) resulting from humidity depletions in the psychrometer chamber during the Peltier condensation process. These errors were avoided by use of a modified procedure for dewpoint measurement. Large changes in apparent water potential were caused by leaf and psychrometer exposure to moderate levels of irradiance. These changes were correlated with relatively small shifts in psychrometer zero offsets (−0.6 to −1.0 megapascals per microvolt), indicating substantial errors caused by nonisothermal conditions between the leaf and the psychrometer. Explicit correction for these errors is not possible with the current psychrometer design. PMID:16663701

  2. Error Reduction Methods for Integrated-path Differential-absorption Lidar Measurements

    NASA Technical Reports Server (NTRS)

    Chen, Jeffrey R.; Numata, Kenji; Wu, Stewart T.

    2012-01-01

    We report new modeling and error reduction methods for differential-absorption optical-depth (DAOD) measurements of atmospheric constituents using direct-detection integrated-path differential-absorption lidars. Errors from laser frequency noise are quantified in terms of the line center fluctuation and spectral line shape of the laser pulses, revealing relationships verified experimentally. A significant DAOD bias is removed by introducing a correction factor. Errors from surface height and reflectance variations can be reduced to tolerable levels by incorporating altimetry knowledge and "log after averaging", or by pointing the laser and receiver to a fixed surface spot during each wavelength cycle to shorten the time of "averaging before log".

  3. Error reduction by combining strapdown inertial measurement units in a baseball stitch

    NASA Astrophysics Data System (ADS)

    Tracy, Leah

    A poor musical performance is rarely due to an inferior instrument. When a device is underperforming, the temptation is to find a better device or a new technology to achieve performance objectives; however, another solution may be improving how existing technology is used through a better understanding of device characteristics, i.e., learning to play the instrument better. This thesis explores improving position and attitude estimates of inertial navigation systems (INS) through an understanding of inertial sensor errors, manipulating inertial measurement units (IMUs) to reduce that error and multisensor fusion of multiple IMUs to reduce error in a GPS denied environment.

  4. Error reduction methods for integrated-path differential-absorption lidar measurements.

    PubMed

    Chen, Jeffrey R; Numata, Kenji; Wu, Stewart T

    2012-07-01

    We report new modeling and error reduction methods for differential-absorption optical-depth (DAOD) measurements of atmospheric constituents using direct-detection integrated-path differential-absorption lidars. Errors from laser frequency noise are quantified in terms of the line center fluctuation and spectral line shape of the laser pulses, revealing relationships verified experimentally. A significant DAOD bias is removed by introducing a correction factor. Errors from surface height and reflectance variations can be reduced to tolerable levels by incorporating altimetry knowledge and "log after averaging", or by pointing the laser and receiver to a fixed surface spot during each wavelength cycle to shorten the time of "averaging before log".
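
    The value of "log after averaging" follows from Jensen's inequality: the mean of the log of a noisy energy ratio is biased, while the log of the averaged energies is nearly unbiased. A small simulation sketch (the pulse energies, noise level and simplified optical-depth definition are invented for illustration):

        import numpy as np

        rng = np.random.default_rng(1)

        e_on, e_off, sd = 0.6, 1.0, 0.15     # assumed mean pulse energies and noise
        tau_true = -np.log(e_on / e_off)     # simplified optical depth

        avg_of_logs, log_of_avgs = [], []
        for _ in range(2000):
            on = e_on + sd * rng.standard_normal(1000)
            off = e_off + sd * rng.standard_normal(1000)
            ok = (on > 0) & (off > 0)        # guard the log against negatives
            avg_of_logs.append(np.mean(-np.log(on[ok] / off[ok])))  # log before averaging
            log_of_avgs.append(-np.log(on.mean() / off.mean()))     # log after averaging

        print("bias, log before averaging:", np.mean(avg_of_logs) - tau_true)
        print("bias, log after averaging :", np.mean(log_of_avgs) - tau_true)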

  5. Active and passive compensation of APPLE II-introduced multipole errors through beam-based measurement

    NASA Astrophysics Data System (ADS)

    Chung, Ting-Yi; Huang, Szu-Jung; Fu, Huang-Wen; Chang, Ho-Ping; Chang, Cheng-Hsiang; Hwang, Ching-Shiang

    2016-08-01

    The effect of an APPLE II-type elliptically polarized undulator (EPU) on the beam dynamics was investigated using active and passive methods. To reduce the tune shift and improve the injection efficiency, dynamic multipole errors were compensated using L-shaped iron shims, which resulted in stable top-up operation for a minimum gap. The skew quadrupole error was compensated using a multipole corrector, which was located downstream of the EPU for minimizing betatron coupling, and it ensured the enhancement of the synchrotron radiation brightness. The investigation methods, a numerical simulation algorithm, a multipole error correction method, and the beam-based measurement results are discussed.

  6. Uncertainty quantification for radiation measurements: Bottom-up error variance estimation using calibration information.

    PubMed

    Burr, T; Croft, S; Krieger, T; Martin, K; Norman, C; Walsh, S

    2016-02-01

    One example of top-down uncertainty quantification (UQ) involves comparing two or more measurements on each of multiple items. One example of bottom-up UQ expresses a measurement result as a function of one or more input variables that have associated errors, such as a measured count rate, which individually (or collectively) can be evaluated for impact on the uncertainty in the resulting measured value. In practice, it is often found that top-down UQ exhibits larger error variances than bottom-up UQ, because some error sources are present in the fielded assay methods used in top-down UQ that are not present (or not recognized) in the assay studies used in bottom-up UQ. One would like better consistency between the two approaches in order to claim understanding of the measurement process. The purpose of this paper is to refine bottom-up uncertainty estimation by using calibration information so that if there are no unknown error sources, the refined bottom-up uncertainty estimate will agree with the top-down uncertainty estimate to within a specified tolerance. Then, in practice, if the top-down uncertainty estimate is larger than the refined bottom-up uncertainty estimate by more than the specified tolerance, there must be omitted sources of error beyond those predicted from calibration uncertainty. The paper develops a refined bottom-up uncertainty approach for four cases of simple linear calibration: (1) inverse regression with negligible error in predictors, (2) inverse regression with non-negligible error in predictors, (3) classical regression followed by inversion with negligible error in predictors, and (4) classical regression followed by inversion with non-negligible errors in predictors. Our illustrations are of general interest, but are drawn from our experience with nuclear material assay by non-destructive assay. The main example we use is gamma spectroscopy that applies the enrichment meter principle. Previous papers that ignore error in predictors

  7. Effect of patient positions on measurement errors of the knee-joint space on radiographs

    NASA Astrophysics Data System (ADS)

    Gilewska, Grazyna

    2001-08-01

    Osteoarthritis (OA) is one of the most important health problems these days. It is one of the most frequent causes of pain and disability in middle-aged and old people. Nowadays the radiograph is the most economical and widely available tool to evaluate changes in OA. Errors in the performance of knee-joint radiographs are the basic problem in their evaluation for clinical research. In this study, such radiographs were evaluated by measuring the knee-joint space on several radiographs performed at defined intervals, and an attempt was made to evaluate the errors caused by the radiologist or the patient. These errors result mainly from incorrect conditions of performance or from the patient's fault. Once we have information about the size of these errors, we will be able to assess which of these elements has the greatest influence on the accuracy and repeatability of knee-joint space measurements, and consequently to minimize their sources.

  8. Uncertainty quantification for radiation measurements: Bottom-up error variance estimation using calibration information.

    PubMed

    Burr, T; Croft, S; Krieger, T; Martin, K; Norman, C; Walsh, S

    2016-02-01

    One example of top-down uncertainty quantification (UQ) involves comparing two or more measurements on each of multiple items. One example of bottom-up UQ expresses a measurement result as a function of one or more input variables that have associated errors, such as a measured count rate, which individually (or collectively) can be evaluated for impact on the uncertainty in the resulting measured value. In practice, it is often found that top-down UQ exhibits larger error variances than bottom-up UQ, because some error sources are present in the fielded assay methods used in top-down UQ that are not present (or not recognized) in the assay studies used in bottom-up UQ. One would like better consistency between the two approaches in order to claim understanding of the measurement process. The purpose of this paper is to refine bottom-up uncertainty estimation by using calibration information so that if there are no unknown error sources, the refined bottom-up uncertainty estimate will agree with the top-down uncertainty estimate to within a specified tolerance. Then, in practice, if the top-down uncertainty estimate is larger than the refined bottom-up uncertainty estimate by more than the specified tolerance, there must be omitted sources of error beyond those predicted from calibration uncertainty. The paper develops a refined bottom-up uncertainty approach for four cases of simple linear calibration: (1) inverse regression with negligible error in predictors, (2) inverse regression with non-negligible error in predictors, (3) classical regression followed by inversion with negligible error in predictors, and (4) classical regression followed by inversion with non-negligible errors in predictors. Our illustrations are of general interest, but are drawn from our experience with nuclear material assay by non-destructive assay. The main example we use is gamma spectroscopy that applies the enrichment meter principle. Previous papers that ignore error in predictors
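
    Cases (1) and (2) can be illustrated with a Monte Carlo version of simple linear calibration followed by inversion; the standards, noise levels and the Gaussian error in the predictors are assumptions for illustration, not the paper's refined estimator:

        import numpy as np

        rng = np.random.default_rng(7)

        x_std = np.array([0.0, 1.0, 2.0, 4.0, 8.0])   # nominal calibration standards
        a_true, b_true, sd_y = 0.5, 2.0, 0.05

        def calibrate_and_invert(y_new, sd_x=0.0, n_sim=5000):
            """Re-draw calibration noise (and, optionally, error in the
            standards themselves) and invert the new response each time."""
            x_hat = np.empty(n_sim)
            for i in range(n_sim):
                x_actual = x_std + sd_x * rng.standard_normal(x_std.size)
                y_std = a_true + b_true * x_actual + sd_y * rng.standard_normal(x_std.size)
                b, a = np.polyfit(x_std, y_std, 1)    # fit against the nominal x
                x_hat[i] = (y_new - a) / b
            return x_hat.mean(), x_hat.std()

        print(calibrate_and_invert(5.0))              # negligible error in predictors
        print(calibrate_and_invert(5.0, sd_x=0.05))   # non-negligible error in predictors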

  9. The Measuring Instrument of Plumb Coaxial Error for Longdistance Orifices Based on Laser Collimation

    NASA Astrophysics Data System (ADS)

    Liu, B.; Yu, M. Y.

    2006-10-01

    This paper introduces a measuring instrument for the plumb coaxial error of long-distance orifices, designed to meet the measurement requirements of the flange place of an experimental fast neutron reactor in nuclear power equipment by combining the laser collimation technique with CCD imaging. The instrument constructs a plumb line by exploiting the collimation property of the laser and using the CCD as an imaging screen; this line serves as the datum line for measuring the coaxial error of manufacture and assembly of large orifices in the plumb state. The angle resolving power is 0.3", the displacement resolving power is 0.02 mm, and the respective uncertainties of the measurement results are 0.1" and 0.01 mm. The paper describes the specific design principle and measurement method of the instrument in detail and analyzes the measurement error. The instrument is applicable to measuring the manufacturing precision and assembly coaxial error of large or heavy pipe casting equipment.

  10. Adopting Standards and Measuring Accountability in Public Education.

    ERIC Educational Resources Information Center

    RRFC Links Newsletter, 1997

    1997-01-01

    This newsletter includes six articles related to the Regional Resource and Federal Centers for Special Education Network and its efforts in the area of standards and accountability. In "Teacher Training and Skills: Necessary Ingredients for Standards and Accountability," John Copenhaver discusses ways in which the Regional Resource and Federal…

  11. Determination of error measurement by means of the basic magnetization curve

    NASA Astrophysics Data System (ADS)

    Lankin, M. V.; Lankin, A. M.

    2016-04-01

    The article describes the implementation of a methodology for fault detection by means of the basic magnetization curve of electric cutting machines. The basic magnetization curve, as an integral operating characteristic, allows one to identify the fault type. In this process, calculating the measurement error of the basic magnetization curve plays a major role, as inaccuracies in this characteristic can have a deleterious effect.

  12. A newly conceived cylinder measuring machine and methods that eliminate the spindle errors

    NASA Astrophysics Data System (ADS)

    Vissiere, A.; Nouira, H.; Damak, M.; Gibaru, O.; David, J.-M.

    2012-09-01

    Advanced manufacturing processes require improving dimensional metrology applications to reach a nanometric accuracy level. Such measurements may be carried out using conventional highly accurate roundness measuring machines. On these machines, the metrology loop goes through the probing and the mechanical guiding elements. Hence, external forces, strain and thermal expansion are transmitted to the metrological structure through the supporting structure, thereby reducing measurement quality. The obtained measurement also combines both the motion error of the guiding system and the form error of the artifact. Detailed uncertainty budgeting might be improved using error separation methods (multi-step, reversal and multi-probe error separation methods, etc.), enabling identification of the systematic (synchronous or repeatable) motion errors of the guiding system as well as the form error of the artifact. Nevertheless, the performance of this kind of machine is limited by the repeatability level of the mechanical guiding elements, which usually exceeds 25 nm (in the case of an air bearing spindle and a linear bearing). In order to guarantee a 5 nm measurement uncertainty level, LNE is currently developing an original machine dedicated to form measurement on cylindrical and spherical artifacts with an ultra-high level of accuracy. The architecture of this machine is based on the 'dissociated metrological technique' principle and incorporates reference probes and a reference cylinder. The form errors of both the cylindrical artifact and the reference cylinder are obtained by mathematically combining the information given by the probe sensing the artifact with that given by the probe sensing the reference cylinder, applying the modified multi-step separation method.
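
    Of the error separation methods named above, the reversal method is the simplest to sketch: reversing the artifact flips the sign with which the spindle error enters the measurement, so two runs separate the two profiles. A synthetic check (the harmonic profiles are invented; real m1 and m2 would come from the two physical runs):

        import numpy as np

        theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
        part = 0.5 * np.cos(2 * theta) + 0.1 * np.sin(5 * theta)   # artifact form error
        spindle = 0.3 * np.sin(3 * theta)                          # spindle motion error

        m1 = part + spindle       # normal run
        m2 = part - spindle       # reversed run: spindle error changes sign

        part_hat = 0.5 * (m1 + m2)
        spindle_hat = 0.5 * (m1 - m2)
        print(np.allclose(part_hat, part), np.allclose(spindle_hat, spindle))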

  13. Measurement of electromagnetic tracking error in a navigated breast surgery setup

    NASA Astrophysics Data System (ADS)

    Harish, Vinyas; Baksh, Aidan; Ungi, Tamas; Lasso, Andras; Baum, Zachary; Gauvin, Gabrielle; Engel, Jay; Rudan, John; Fichtinger, Gabor

    2016-03-01

    PURPOSE: The measurement of tracking error is crucial to ensure the safety and feasibility of electromagnetically tracked, image-guided procedures. Measurement should occur in a clinical environment because electromagnetic field distortion depends on positioning relative to the field generator and metal objects. However, we could not find an accessible and open-source system for calibration, error measurement, and visualization. We developed such a system and tested it in a navigated breast surgery setup. METHODS: A pointer tool was designed for concurrent electromagnetic and optical tracking. Software modules were developed for automatic calibration of the measurement system, real-time error visualization, and analysis. The system was taken to an operating room to test for field distortion in a navigated breast surgery setup. Positional and rotational electromagnetic tracking errors were then calculated using optical tracking as a ground truth. RESULTS: Our system is quick to set up and can be rapidly deployed. The process from calibration to visualization also only takes a few minutes. Field distortion was measured in the presence of various surgical equipment. Positional and rotational error in a clean field was approximately 0.90 mm and 0.31°. The presence of a surgical table, an electrosurgical cautery, and anesthesia machine increased the error by up to a few tenths of a millimeter and a tenth of a degree. CONCLUSION: In a navigated breast surgery setup, measurement and visualization of tracking error defines a safe working area in the presence of surgical equipment. Our system is available as an extension for the open-source 3D Slicer platform.

  14. Evaluation of TRMM Ground-Validation Radar-Rain Errors Using Rain Gauge Measurements

    NASA Technical Reports Server (NTRS)

    Wang, Jianxin; Wolff, David B.

    2009-01-01

    Ground-validation (GV) radar-rain products are often utilized for validation of the Tropical Rainfall Measuring Mission (TRMM) space-based rain estimates, and hence, quantitative evaluation of the GV radar-rain product error characteristics is vital. This study uses quality-controlled gauge data to compare with TRMM GV radar rain rates in an effort to provide such error characteristics. The results show that significant differences of concurrent radar-gauge rain rates exist at various time scales ranging from 5 min to 1 day, despite lower overall long-term bias. However, the differences between the radar area-averaged rain rates and gauge point rain rates cannot be explained as due to radar error only. The error variance separation method is adapted to partition the variance of radar-gauge differences into the gauge area-point error variance and radar rain estimation error variance. The results provide relatively reliable quantitative uncertainty evaluation of TRMM GV radar rain estimates at various time scales, and are helpful to better understand the differences between measured radar and gauge rain rates. It is envisaged that this study will contribute to better utilization of GV radar rain products to validate versatile space-based rain estimates from TRMM, as well as the proposed Global Precipitation Measurement, and other satellites.
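
    The error variance separation idea rests on independence of the two error sources: Var(radar - gauge) = Var(radar estimation error) + Var(gauge area-point error), so the radar term is what remains after subtracting an area-point variance estimated separately (e.g., from gauge-pair analysis). A sketch with invented numbers:

        import numpy as np

        def radar_error_variance(radar, gauge, var_area_point):
            """Subtract the gauge area-point error variance from the variance
            of radar-gauge differences, assuming independent error sources."""
            diff = np.asarray(radar, float) - np.asarray(gauge, float)
            return np.var(diff, ddof=1) - var_area_point

        rng = np.random.default_rng(2)
        truth = rng.gamma(2.0, 2.0, 1000)                 # area-average rain rate
        radar = truth + 0.8 * rng.standard_normal(1000)   # radar estimation error
        gauge = truth + 0.5 * rng.standard_normal(1000)   # area-point sampling error
        print(radar_error_variance(radar, gauge, var_area_point=0.25))  # near 0.64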

  15. Estimating the anomalous diffusion exponent for single particle tracking data with measurement errors - An alternative approach

    PubMed Central

    Burnecki, Krzysztof; Kepten, Eldad; Garini, Yuval; Sikora, Grzegorz; Weron, Aleksander

    2015-01-01

    Accurately characterizing the anomalous diffusion of a tracer particle has become a central issue in biophysics. However, measurement errors raise difficulty in the characterization of single trajectories, which is usually performed through the time-averaged mean square displacement (TAMSD). In this paper, we study a fractionally integrated moving average (FIMA) process as an appropriate model for anomalous diffusion data with measurement errors. We compare FIMA and traditional TAMSD estimators for the anomalous diffusion exponent. The ability of the FIMA framework to characterize dynamics in a wide range of anomalous exponents and noise levels through the simulation of a toy model (fractional Brownian motion disturbed by Gaussian white noise) is discussed. Comparison to the TAMSD technique shows that FIMA estimation is superior in many scenarios. This is expected to enable new measurement regimes for single particle tracking (SPT) experiments even in the presence of high measurement errors. PMID:26065707

  16. Estimating the anomalous diffusion exponent for single particle tracking data with measurement errors - An alternative approach

    NASA Astrophysics Data System (ADS)

    Burnecki, Krzysztof; Kepten, Eldad; Garini, Yuval; Sikora, Grzegorz; Weron, Aleksander

    2015-06-01

    Accurately characterizing the anomalous diffusion of a tracer particle has become a central issue in biophysics. However, measurement errors raise difficulty in the characterization of single trajectories, which is usually performed through the time-averaged mean square displacement (TAMSD). In this paper, we study a fractionally integrated moving average (FIMA) process as an appropriate model for anomalous diffusion data with measurement errors. We compare FIMA and traditional TAMSD estimators for the anomalous diffusion exponent. The ability of the FIMA framework to characterize dynamics in a wide range of anomalous exponents and noise levels through the simulation of a toy model (fractional Brownian motion disturbed by Gaussian white noise) is discussed. Comparison to the TAMSD technique shows that FIMA estimation is superior in many scenarios. This is expected to enable new measurement regimes for single particle tracking (SPT) experiments even in the presence of high measurement errors.
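
    For reference, the traditional TAMSD estimator that FIMA is compared against takes only a few lines; the trajectory below is a toy Brownian path with added Gaussian noise, echoing the paper's toy model:

        import numpy as np

        def tamsd(x, lags):
            """Time-averaged mean square displacement of a 1-D trajectory."""
            return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

        def anomalous_exponent(x, max_lag=20):
            """Estimate alpha from TAMSD ~ lag**alpha via a log-log fit."""
            lags = np.arange(1, max_lag + 1)
            slope, _ = np.polyfit(np.log(lags), np.log(tamsd(x, lags)), 1)
            return slope

        rng = np.random.default_rng(3)
        x = np.cumsum(rng.standard_normal(10000))       # Brownian motion, alpha near 1
        noisy = x + 2.0 * rng.standard_normal(x.size)   # added measurement error
        print(anomalous_exponent(x), anomalous_exponent(noisy))  # noise biases alpha down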

  17. Reduction of positional errors in a four-point probe resistance measurement

    NASA Astrophysics Data System (ADS)

    Worledge, D. C.

    2004-03-01

    A method for reducing resistance errors due to inaccuracy in the positions of the probes in a collinear four-point probe resistance measurement of a thin film is presented. By using a linear combination of two measurements which differ by interchange of the I- and V- leads, positional errors can be eliminated to first order. Experimental data measured using microprobes show a substantial reduction in absolute error from 3.4% down to 0.01%-0.1%, and an improvement in precision by a factor of 2-4. The application of this technique to the current-in-plane tunneling method to measure electrical properties of unpatterned magnetic tunnel junction wafers is discussed.

  18. Dynamic Modeling Accuracy Dependence on Errors in Sensor Measurements, Mass Properties, and Aircraft Geometry

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2013-01-01

    A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively deteriorated and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.

  19. [Measurement Error Analysis and Calibration Technique of NTC - Based Body Temperature Sensor].

    PubMed

    Deng, Chi; Hu, Wei; Diao, Shengxi; Lin, Fujiang; Qian, Dahong

    2015-11-01

    An NTC thermistor-based wearable body temperature sensor was designed. This paper describes the design principles and realization method of the sensor and analyzes its temperature measurement error sources in detail. An automatic measurement and calibration method for the ADC error is given. The results show that the measurement accuracy of the calibrated body temperature sensor is better than ±0.04 degrees C. The sensor offers the advantages of high accuracy, small size and low power consumption.
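
    The resistance-to-temperature conversion at the core of such a sensor is commonly done with the beta-parameter thermistor model; the constants below are typical catalog values and the ADC mapping is a hypothetical two-point calibration, not the paper's design:

        import numpy as np

        def ntc_temperature_c(r_ohm, r0=10_000.0, t0_c=25.0, beta=3950.0):
            """Beta model: 1/T = 1/T0 + ln(R/R0)/beta, temperatures in kelvin."""
            inv_t = 1.0 / (t0_c + 273.15) + np.log(r_ohm / r0) / beta
            return 1.0 / inv_t - 273.15

        def adc_to_resistance(code, gain=1.02, offset=-35.0):
            """Illustrative gain/offset correction from a two-point ADC calibration."""
            return gain * code + offset

        print(ntc_temperature_c(adc_to_resistance(10_050)))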

  20. Measurement of the parallelism error of a double crystal monochromator by the pencil beam interferometer

    SciTech Connect

    Lim, Jun; Rah, Seungyu

    2005-06-15

    For the precise measurement of the parallelism error between the two crystals in a double crystal monochromator, we suggest a new method that utilizes the pencil beam interferometer. The wavefront-splitting pencil beam interferometer was modified and applied to the measurement. The method overcomes the limitations of preceding methods that use an autocollimator. Moreover, the parallelism error can be measured continuously through the full scan range with a simple setup. Notably, the angular sensitivity of this method is about 0.07 arcsec rms.

  1. Utilizing measure-based feedback in control-mastery theory: A clinical error.

    PubMed

    Snyder, John; Aafjes-van Doorn, Katie

    2016-09-01

    Clinical errors and ruptures are an inevitable part of clinical practice. Often times, therapists are unaware that a clinical error or rupture has occurred, leaving no space for repair, and potentially leading to patient dropout and/or less effective treatment. One way to overcome our blind spots is by frequently and systematically collecting measure-based feedback from the patient. Patient feedback measures that focus on the process of psychotherapy such as the Patient's Experience of Attunement and Responsiveness scale (PEAR) can be used in conjunction with treatment outcome measures such as the Outcome Questionnaire 45.2 (OQ-45.2) to monitor the patient's therapeutic experience and progress. The regular use of these types of measures can aid clinicians in the identification of clinical errors and the associated patient deterioration that might otherwise go unnoticed and unaddressed. The current case study describes an instance of clinical error that occurred during the 2-year treatment of a highly traumatized young woman. The clinical error was identified using measure-based feedback and subsequently understood and addressed from the theoretical standpoint of the control-mastery theory of psychotherapy. An alternative hypothetical response is also presented and explained using control-mastery theory. (PsycINFO Database Record) PMID:27631857

  2. Assessment of measurement errors and dynamic calibration methods for three different tipping bucket rain gauges

    NASA Astrophysics Data System (ADS)

    Shedekar, Vinayak S.; King, Kevin W.; Fausey, Norman R.; Soboyejo, Alfred B. O.; Harmel, R. Daren; Brown, Larry C.

    2016-09-01

    Three different models of tipping bucket rain gauges (TBRs), viz. HS-TB3 (Hydrological Services Pty Ltd.), ISCO-674 (Isco, Inc.) and TR-525 (Texas Electronics, Inc.), were calibrated in the lab to quantify measurement errors across a range of rainfall intensities (5 mm·h^-1 to 250 mm·h^-1) and three different volumetric settings. Instantaneous and cumulative values of simulated rainfall were recorded at 1, 2, 5, 10 and 20-min intervals. All three TBR models showed a substantial deviation (α = 0.05) in measurements from actual rainfall depths, with increasing underestimation errors at greater rainfall intensities. Simple linear regression equations were developed for each TBR to correct the TBR readings based on measured intensities (R^2 > 0.98). Additionally, two dynamic calibration techniques, viz. quadratic model (R^2 > 0.7) and T vs. 1/Q model (R^2 > 0.98), were tested and found to be useful in situations when the volumetric settings of TBRs are unknown. The correction models were successfully applied to correct field-collected rainfall data from respective TBR models. The calibration parameters of correction models were found to be highly sensitive to changes in volumetric calibration of TBRs. Overall, the HS-TB3 model (with a better protected tipping bucket mechanism, and consistent measurement errors across a range of rainfall intensities) was found to be the most reliable and consistent for rainfall measurements, followed by the ISCO-674 (with susceptibility to clogging and relatively smaller measurement errors across a range of rainfall intensities) and the TR-525 (with high susceptibility to clogging and frequent changes in volumetric calibration, and highly intensity-dependent measurement errors). The study demonstrated that corrections based on dynamic and volumetric calibration can only help minimize, but not completely eliminate, the measurement errors. The findings from this study will be useful for correcting field data from TBRs; and may have major
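
    The simple linear correction step can be sketched as follows; the intensity-dependent underestimation curve is invented, standing in for the lab calibration data:

        import numpy as np

        # Hypothetical calibration: simulated intensity (mm/h) vs. TBR reading
        intensity = np.array([5.0, 25.0, 50.0, 100.0, 150.0, 200.0, 250.0])
        reading = intensity * (1.0 - 0.0004 * intensity)   # growing underestimation

        # Correction model: actual = a + b * reading, fitted to the calibration data
        b, a = np.polyfit(reading, intensity, 1)
        corrected = a + b * reading
        print(np.max(np.abs(corrected - intensity)))       # residual correction error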

  3. A comparison of least squares linear regression and measurement error modeling of warm/cold multipole correlation in SSC prototype dipole magnets

    SciTech Connect

    Pollock, D.; Kim, K.; Gunst, R.; Schucany, W.

    1993-05-01

    Linear estimation of cold magnetic field quality based on warm multipole measurements is being considered as a quality control method for SSC production magnet acceptance. To investigate prediction uncertainties associated with such an approach, axial-scan (Z-scan) magnetic measurements from SSC Prototype Collider Dipole Magnets (CDMs) have been studied. This paper presents a preliminary evaluation of the explanatory ability of warm measurement multipole variation on the prediction of cold magnet multipoles. Two linear estimation methods are presented: least-squares regression, which uses the assumption of fixed independent variable (xi) observations, and the measurement error model, which includes measurement error in the xi's. The influence of warm multipole measurement errors on predicted cold magnet multipole averages is considered. MSD QA is studying warm/cold correlation to answer several magnet quality control questions. How well do warm measurements predict cold (2kA) multipoles? Does sampling error significantly influence estimates of the linear coefficients (slope, intercept and residual standard error)? Is estimation error for the predicted cold magnet average small compared to typical variation along the Z-Axis? What fraction of the multipole RMS tolerance is accounted for by individual magnet prediction uncertainty?
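
    The contrast between the two estimators can be seen by comparing ordinary least squares with Deming regression (an errors-in-variables fit with a known error-variance ratio); the synthetic warm/cold multipole data below are invented for illustration:

        import numpy as np

        def deming_slope(x, y, delta=1.0):
            """Deming regression slope; delta = var(y errors) / var(x errors)."""
            sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
            sxy = np.cov(x, y, ddof=1)[0, 1]
            d = syy - delta * sxx
            return (d + np.sqrt(d * d + 4.0 * delta * sxy * sxy)) / (2.0 * sxy)

        rng = np.random.default_rng(5)
        true = rng.standard_normal(200)                   # underlying multipole
        warm = true + 0.3 * rng.standard_normal(200)      # warm measurement, with error
        cold = 0.9 * true + 0.3 * rng.standard_normal(200)

        print(np.polyfit(warm, cold, 1)[0])   # OLS slope, attenuated toward zero
        print(deming_slope(warm, cold))       # closer to the true slope of 0.9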

  4. Effect of atmospheric radiance errors in radiometric sea-surface skin temperature measurements.

    PubMed

    Donlon, C J; Nightingale, T J

    2000-05-20

    Errors in measurements of sea-surface skin temperature (SSST) caused by inappropriate measurements of sky radiance are discussed; both model simulations and in situ data obtained in the Atlantic Ocean are used. These errors are typically caused by incorrect radiometer view geometry (pointing), temporal mismatches between the sea surface and atmospheric views, and the effect of wind on the sea surface. For clear-sky, overcast, or high-humidity atmospheric conditions, SSST is relatively insensitive (<0.1 K) to sky-pointing errors of ±10 degrees and to temporal mismatches between the sea and sky views. In mixed-cloud conditions, SSST errors greater than ±0.25 K are possible as a result either of poor radiometer pointing or of a temporal mismatch between the sea and sky views. Sea-surface emissivity also changes with sea view pointing angle. Sea view pointing errors should remain below 5 degrees for SSST errors of <0.1 K. We conclude that the clear-sky requirement of satellite infrared SSST observations means that sky-pointing errors are small when one is obtaining in situ SSST validation data at zenith angles of <40 degrees. At zenith angles greater than this, large errors are possible in high-wind-speed conditions. We recommend that high-resolution inclinometer measurements always be used, together with regular alternating sea and sky views, and that the temporal mismatch between sea and sky views be as small as possible. These results have important implications for the development of operational autonomous instruments for determining SSST for the long-term validation of satellite SSST.

  5. Discontinuity, bubbles, and translucence: major error factors in food color measurement

    NASA Astrophysics Data System (ADS)

    MacDougall, Douglas B.

    2002-06-01

    Four samples of breakfast cereals exhibiting discontinuity, two samples of baked goods with bubbles and two translucent drinks were measured to show the degree of differences that exist between their colors measured in CIELAB and their visual equivalence to the nearest NCS atlas color. Presentation variables and the contribution of light scatter to the size of the errors were examined.

  6. SYSTEMATIC CONTINUUM ERRORS IN THE Lyα FOREST AND THE MEASURED TEMPERATURE-DENSITY RELATION

    SciTech Connect

    Lee, Khee-Gan

    2012-07-10

    Continuum fitting uncertainties are a major source of error in estimates of the temperature-density relation (usually parameterized as a power law, T ∝ Δ^(γ-1)) of the intergalactic medium through the flux probability distribution function (PDF) of the Lyα forest. Using a simple order-of-magnitude calculation, we show that few-percent-level systematic errors in the placement of the quasar continuum due to, e.g., a uniform low-absorption Gunn-Peterson component could lead to errors in γ of the order of unity. This is quantified further using a simple semi-analytic model of the Lyα forest flux PDF. We find that underestimates (overestimates) in the continuum level can lead to a lower (higher) measured value of γ. By fitting models to mock data realizations generated with current observational errors, we find that continuum errors can cause a systematic bias in the estimated temperature-density relation of δ(γ) ≈ -0.1, while the error is increased to σ_γ ≈ 0.2, compared to σ_γ ≈ 0.1 in the absence of continuum errors.

  7. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error 1

    PubMed Central

    Carroll, Raymond J.; Delaigle, Aurore; Hall, Peter

    2011-01-01

    In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y, is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case. PMID:21687809

  8. Uncertainty in vegetation products derived from field spectral measurements: an error budget approach

    NASA Astrophysics Data System (ADS)

    Anderson, K.; Dungan, J. L.

    2008-12-01

    vegetation. The grey panel data showed a wavelength- dependent pattern, similar to the NEdL laboratory trend, but subsequent error propagation of laboratory- derived NEdL through to a reflectance factor showed that the laboratory characterisation was unable to account for all of the uncertainty measured in the field. Therefore the estimate of u gained from field data more closely represents the reproducibility of measurements where atmospheric, solar zenith and instrument-related uncertainties are combined. Results on vegetation u showed a stronger wavelength dependency with higher standard uncertainties beyond the vegetation red-edge than in visible wavelengths (maximum = 0.015 at 800 nm, and 0.004 at 550nm). The results demonstrate that standard uncertainties of field reflectance data have a spectral dependence and exceed laboratory-derived estimates of instrument "noise". Uncertainty of this type must be taken into account when statistically testing for differences in field spectra. Improved reporting of standard uncertainties from field experiments will foster progress in remote sensing science.

  9. The Influence of Training Phase on Error of Measurement in Jump Performance.

    PubMed

    Taylor, Kristie-Lee; Hopkins, Will G; Chapman, Dale W; Cronin, John B

    2016-03-01

    The purpose of this study was to calculate the coefficients of variation in jump performance for individual participants in multiple trials over time to determine the extent to which there are real differences in the error of measurement between participants. The effect of training phase on measurement error was also investigated. Six subjects participated in a resistance-training intervention for 12 wk with mean power from a countermovement jump measured 6 d/wk. Using a mixed-model meta-analysis, differences between subjects, within-subject changes between training phases, and the mean error values during different phases of training were examined. Small, substantial factor differences of 1.11 were observed between subjects; however, the finding was unclear based on the width of the confidence limits. The mean error was clearly higher during overload training than baseline training, by a factor of ×/÷ 1.3 (confidence limits 1.0-1.6). The random factor representing the interaction between subjects and training phases revealed further substantial differences of ×/÷ 1.2 (1.1-1.3), indicating that on average, the error of measurement in some subjects changes more than in others when overload training is introduced. The results from this study provide the first indication that within-subject variability in performance is substantially different between training phases and, possibly, different between individuals. The implications of these findings for monitoring individuals and estimating sample size are discussed.

  10. Random and systematic measurement errors in acoustic impedance as determined by the transmission line method

    NASA Technical Reports Server (NTRS)

    Parrott, T. L.; Smith, C. D.

    1977-01-01

    The effect of random and systematic errors associated with the measurement of normal incidence acoustic impedance in a zero-mean-flow environment was investigated by the transmission line method. The influence of random measurement errors in the reflection coefficients and pressure minima positions was investigated by computing fractional standard deviations of the normalized impedance. Both the standard techniques of random process theory and a simplified technique were used. Over a wavelength range of 68 to 10 cm random measurement errors in the reflection coefficients and pressure minima positions could be described adequately by normal probability distributions with standard deviations of 0.001 and 0.0098 cm, respectively. An error propagation technique based on the observed concentration of the probability density functions was found to give essentially the same results but with a computation time of about 1 percent of that required for the standard technique. The results suggest that careful experimental design reduces the effect of random measurement errors to insignificant levels for moderate ranges of test specimen impedance component magnitudes. Most of the observed random scatter can be attributed to lack of control by the mounting arrangement over mechanical boundary conditions of the test sample.

  11. Covariate measurement error correction methods in mediation analysis with failure time data.

    PubMed

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. PMID:25139469

  12. Covariate measurement error correction methods in mediation analysis with failure time data.

    PubMed

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk.
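
    Stripped of the Cox-model details, the core calibration step replaces the observed mediator with its conditional expectation under a classical error model; a minimal sketch assuming a known reliability (e.g., estimated from replicate measurements):

        import numpy as np

        def regression_calibration(w, reliability):
            """Classical error model W = X + U: replace each W by
            E[X | W] = mean(W) + reliability * (W - mean(W)),
            where reliability = var(X) / var(W)."""
            w = np.asarray(w, float)
            return w.mean() + reliability * (w - w.mean())

        w = np.array([1.2, 0.4, 2.1, 1.7, 0.9])
        print(regression_calibration(w, reliability=0.7))

    The calibrated values would then stand in for the observed mediator in the partial likelihood; the paper's mean-variance and follow-up time variants refine this approximation for the failure-time setting.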

  13. Uncertainties in interpretation of data from turbulent boundary layers due to measurement errors

    NASA Astrophysics Data System (ADS)

    Vinuesa, Ricardo; Nagib, Hassan

    2011-11-01

    Composite expansions based on log law and power law were used to generate synthetic velocity profiles of ZPG turbulent boundary layers in the range 800 ≤ Reθ ≤ 8.6 × 10^5. Several artificial errors were then added to the velocity profiles to simulate dispersion in velocity measurements, error in determining probe position and uncertainty in measured skin friction. The effects of the simulated errors were studied by extracting log-law and power-law parameters from all these pseudo-experimental profiles, regardless of their original overlap region description. Various techniques were used, including the diagnostic functions (Ξ and Γ) and direct fits to logarithmic and power laws, to establish a measure of the deviations in the overlap region. The differences between extracted parameters and their expected values are compared for each case, with different magnitudes of error, to reveal when the pseudo-experimental profile leads to ambiguous conclusions; i.e., when parameters extracted for log law and power law are associated with similar levels of deviations. This ambiguity was observed up to Reθ = 16,000 for a 3% dispersion in the velocity measurements and Reθ = 2,000 when the skin friction was overestimated by only 2%. With respect to the error in the probe position, an uncertainty of 400 μm made even the highest Re profile ambiguous. The results from the present study are valid for air flow at atmospheric conditions.

  14. The partial least-squares regression analysis of impact factors of coordinate measuring machine dynamic error

    NASA Astrophysics Data System (ADS)

    Zhang, Mei; Fei, Yetai; Sheng, Li; Ma, Xiushui; Yang, Hong-tao

    2008-12-01

    The reasons why coordinate measuring machine (CMM) dynamic error exists are complicated, and many elements influence the error, so it is hard to build an accurate model. To attain a model that avoids analyzing the complex error sources and the interactions among them, and that also resolves the multicollinearity among the variables, this paper adopts Partial Least-Squares Regression (PLSR). The model takes the 3D coordinates (X, Y, Z) and the moving velocity as the independent variables and the CMM dynamic error value as the dependent variable. The experimental results show that the model is easy to interpret and reveals the magnitude and direction of each independent variable's influence on the dependent variable.
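
    A PLSR fit of this kind is a few lines with scikit-learn (an illustrative sketch, not the authors' software; the predictor and error data are invented):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(11)
        xyzv = rng.uniform(0.0, 1.0, size=(200, 4))   # X, Y, Z and traverse velocity
        dyn_err = xyzv @ np.array([0.8, -0.3, 0.5, 1.2]) + 0.05 * rng.standard_normal(200)

        pls = PLSRegression(n_components=2)   # fewer latent components than predictors
        pls.fit(xyzv, dyn_err)
        print(pls.score(xyzv, dyn_err))       # R^2 of the fitted model

    Because PLSR projects the collinear inputs onto a small set of latent components before regressing, it remains stable where ordinary least squares would be ill-conditioned.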

  15. Systematic errors in the measurement of emissivity caused by directional effects.

    PubMed

    Kribus, Abraham; Vishnevetsky, Irna; Rotenberg, Eyal; Yakir, Dan

    2003-04-01

    Accurate knowledge of surface emissivity is essential for applications in remote sensing (remote temperature measurement), radiative transport, and modeling of environmental energy balances. Direct measurements of surface emissivity are difficult when there is considerable background radiation at the same wavelength as the emitted radiation. This occurs, for example, when objects at temperatures near room temperature are measured in a terrestrial environment by use of the infrared 8-14-μm band. This problem is usually treated by assumption of a perfectly diffuse surface or of diffuse background radiation. However, real surfaces and actual background radiation are not diffuse; therefore there will be a systematic measurement error. It is demonstrated that, in some cases, the deviations from a diffuse behavior lead to large errors in the measured emissivity. Past measurements made with simplifying assumptions should therefore be reevaluated and corrected. Recommendations are presented for improving experimental procedures in emissivity measurement.

  16. Differential correction technique for removing common errors in gas filter radiometer measurements.

    PubMed

    Wallio, H A; Chan, C C; Gormsen, B B; Reichle, H G

    1992-12-20

    The Measurement of Air Pollution from Satellites (MAPS) gas filter radiometer experiment was designed to measure CO mixing ratios in the Earth's atmosphere. MAPS also measures N2O to provide a reference channel for the atmospheric emitting temperature and to detect the presence of clouds. In this paper we formulate equations to correct the radiometric signals based on the spatial and temporal uniformity of the N2O mixing ratio in the atmosphere. Results of an error study demonstrate that these equations reduce the error in inferred CO mixing ratios. Subsequent application of the technique to the MAPS 1984 data set decreases the error in the frequency distribution of mixing ratios and increases the number of usable data points.

  17. Spatial regression with covariate measurement error: A semi-parametric approach

    PubMed Central

    Huque, Md Hamidul; Bondell, Howard D.; Carroll, Raymond J.; Ryan, Louise M.

    2015-01-01

    Spatial data have become increasingly common in epidemiology and public health research thanks to advances in GIS (Geographic Information Systems) technology. In health research, for example, it is common for epidemiologists to incorporate geographically indexed data into their studies. In practice, however, the spatially-defined covariates are often measured with error. Naive estimators of regression coefficients are attenuated if measurement error is ignored. Moreover, the classical measurement error theory is inapplicable in the context of spatial modelling because of the presence of spatial correlation among the observations. We propose a semi-parametric regression approach to obtain bias corrected estimates of regression parameters and derive their large sample properties. We evaluate the performance of the proposed method through simulation studies and illustrate using data on Ischemic Heart Disease (IHD). Both simulation and practical application demonstrate that the proposed method can be effective in practice. PMID:26788930

  18. Theoretical computation of trace gases retrieval random error from measurements of high spectral resolution infrared sounder

    NASA Technical Reports Server (NTRS)

    Huang, Hung-Lung; Smith, William L.; Woolf, Harold M.; Theriault, J. M.

    1991-01-01

    The purpose of this paper is to demonstrate the trace gas profiling capabilities of future passive high spectral resolution (1 cm⁻¹ or better) infrared (600 to 2700 cm⁻¹) satellite tropospheric sounders. These sounders, such as the grating spectrometer Atmospheric InfRared Sounder (AIRS) (Chahine et al., 1990) and the interferometer GOES High-Resolution Interferometer Sounder (GHIS) (Smith et al., 1991), can provide the unique infrared spectra which enable us to conduct this analysis. In this calculation only the total random retrieval error component is presented; the systematic error components contributed by the forward and inverse model error are not considered (the subject of further studies). The total random errors, which are composed of null space error (vertical resolution component error) and measurement error (instrument noise component error), are computed by assuming one wavenumber spectral resolution with a wavenumber span from 1100 cm⁻¹ to 2300 cm⁻¹ (the band 600 cm⁻¹ to 1100 cm⁻¹ is not used since there is no major absorption of our three gases there) and measurement noise of 0.25 K at a reference temperature of 260 K. Temperature, water vapor, ozone and mixing ratio profiles of nitrous oxide, carbon monoxide and methane are taken from 1976 US Standard Atmosphere conditions (a FASCODE model). Covariance matrices of the gases are 'subjectively' generated by assuming a 50 percent standard deviation of Gaussian perturbation with respect to their US Standard model profiles. Minimum information and maximum likelihood retrieval solutions are used.

  19. Impact of measurement error on testing genetic association with quantitative traits.

    PubMed

    Liao, Jiemin; Li, Xiang; Wong, Tien-Yin; Wang, Jie Jin; Khor, Chiea Chuen; Tai, E Shyong; Aung, Tin; Teo, Yik-Ying; Cheng, Ching-Yu

    2014-01-01

    Measurement error of a phenotypic trait reduces the power to detect genetic associations. We examined the impact of sample size, allele frequency and effect size in presence of measurement error for quantitative traits. The statistical power to detect genetic association with phenotype mean and variability was investigated analytically. The non-centrality parameter for a non-central F distribution was derived and verified using computer simulations. We obtained equivalent formulas for the cost of phenotype measurement error. Effects of differences in measurements were examined in a genome-wide association study (GWAS) of two grading scales for cataract and a replication study of genetic variants influencing blood pressure. The mean absolute difference between the analytic power and simulation power for comparison of phenotypic means and variances was less than 0.005, and the absolute difference did not exceed 0.02. To maintain the same power, a one standard deviation (SD) in measurement error of a standard normal distributed trait required a one-fold increase in sample size for comparison of means, and a three-fold increase in sample size for comparison of variances. GWAS results revealed almost no overlap in the significant SNPs (p < 10⁻⁵) for the two cataract grading scales while replication results in genetic variants of blood pressure displayed no significant differences between averaged blood pressure measurements and single blood pressure measurements. We have developed a framework for researchers to quantify power in the presence of measurement error, which will be applicable to studies of phenotypes in which the measurement is highly variable. PMID:24475218
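
    The power calculation described above rests on a non-central F distribution. A generic sketch of that computation follows; the non-centrality parameter and degrees of freedom are placeholders, not the formula derived in the paper.

    ```python
    from scipy import stats

    def power_noncentral_f(lam, dfn, dfd, alpha=0.05):
        """Power of an F test with non-centrality parameter lam."""
        f_crit = stats.f.ppf(1.0 - alpha, dfn, dfd)   # rejection threshold under H0
        return stats.ncf.sf(f_crit, dfn, dfd, lam)    # P(F > f_crit | H1)

    # Measurement error inflates the residual variance and shrinks lam,
    # so power drops unless the sample size is increased to compensate.
    print(power_noncentral_f(lam=10.0, dfn=1, dfd=498))
    ```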

  20. On the impact of covariate measurement error on spatial regression modelling

    PubMed Central

    Huque, Md Hamidul; Bondell, Howard; Ryan, Louise

    2015-01-01

    Spatial regression models have grown in popularity in response to rapid advances in GIS (Geographic Information Systems) technology that allows epidemiologists to incorporate geographically indexed data into their studies. However, it turns out that there are some subtle pitfalls in the use of these models. We show that presence of covariate measurement error can lead to significant sensitivity of parameter estimation to the choice of spatial correlation structure. We quantify the effect of measurement error on parameter estimates, and then suggest two different ways to produce consistent estimates. We evaluate the methods through a simulation study. These methods are then applied to data on Ischemic Heart Disease (IHD). PMID:25729267

  1. Quantitative analyses of spectral measurement error based on Monte-Carlo simulation

    NASA Astrophysics Data System (ADS)

    Jiang, Jingying; Ma, Congcong; Zhang, Qi; Lu, Junsheng; Xu, Kexin

    2015-03-01

    The spectral measurement error is controlled by the resolution and sensitivity of the spectroscopic instrument and by the instability of the environment involved. In this talk, the spectral measurement error is analyzed quantitatively using Monte Carlo (MC) simulation. Taking the floating reference point measurement as an example, there is unavoidably a deviation between the measuring position and the theoretical position due to various influencing factors. In order to determine the error caused by the positioning accuracy of the measuring device, a Monte Carlo simulation was carried out at a wavelength of 1310 nm, simulating a 2% Intralipid solution. The MC simulation was performed with 10¹⁰ photons and a ring sampling interval of 1 μm. The data from the MC simulation are analyzed on the basis of the thinning and calculating method (TCM) proposed in this talk. The results indicate that TCM can be used to quantitatively analyze the spectral measurement error introduced by positioning inaccuracy.
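
    A toy version of the positioning-error idea: jitter the detector position around its nominal value and record the spread this induces in the signal. The exponential signal model and the 5 μm jitter scale are assumptions for illustration, not the diffuse-reflectance model of the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def signal(r_mm):
        # Stand-in for diffuse reflectance versus source-detector distance.
        return np.exp(-2.0 * r_mm)

    r_nominal = 2.0                                # nominal measuring position, mm
    jitter = rng.normal(0.0, 0.005, size=100_000)  # assumed 5 um positioning error
    readings = signal(r_nominal + jitter)

    # Relative spectral measurement error attributable to positioning alone.
    print("relative error:", readings.std() / readings.mean())
    ```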

  2. Quantifying Systematic Errors and Total Uncertainties in Satellite-based Precipitation Measurements

    NASA Astrophysics Data System (ADS)

    Tian, Y.; Peters-Lidard, C. D.

    2010-12-01

    Determining the uncertainties in precipitation measurements by satellite remote sensing is of fundamental importance to many applications. These uncertainties result mostly from the interplay of systematic errors and random errors. In this presentation, we will summarize our recent efforts in quantifying the error characteristics in satellite-based precipitation estimates. Both systematic errors and total uncertainties have been analyzed for six different TRMM-era precipitation products (3B42, 3B42RT, CMORPH, PERSIANN, NRL and GSMaP). For systematic errors, we devised an error decomposition to separate errors in precipitation estimates into three independent components, hit biases, missed precipitation and false precipitation. This decomposition scheme reveals more error features and provides a better link to the error sources than conventional analysis, because in the latter these error components tend to cancel one another when aggregated or averaged in space or time. Our analysis reveals that the six different products share many error features. For example, they all detected strong precipitation (> 40 mm/day) well, but with various biases. They tend to over-estimate in summer and under-estimate in winter. They miss a significant amount of light precipitation (< 10 mm/day). In addition, hit biases and missed precipitation are the two leading error sources. However, their systematic errors also exhibit substantial differences, especially in winter and over rough topography, which greatly contribute to the uncertainties. To estimate the measurement uncertainties, we calculated the measurement spread from the ensemble of these six quasi-independent products. A global map of measurement uncertainties was thus produced. The map yields a global view of the error characteristics and their regional and seasonal variations, and reveals many undocumented error features over areas with no validation data available. The uncertainties are relatively small (40-60%) over the
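
    A sketch of the error decomposition described above, splitting the total bias over paired satellite/reference fields into hit bias, missed precipitation, and false precipitation; the synthetic fields and the 0.1 mm/day detection threshold are invented.

    ```python
    import numpy as np

    def decompose_bias(sat, ref, thresh=0.1):
        """Split total bias (sat - ref) into hit / missed / false components."""
        hit   = (sat >= thresh) & (ref >= thresh)
        miss  = (sat <  thresh) & (ref >= thresh)
        false = (sat >= thresh) & (ref <  thresh)
        # The small residual where both fields sit below the threshold is neglected.
        return {
            "hit_bias":  np.sum(sat[hit] - ref[hit]),
            "missed":   -np.sum(ref[miss]),    # precipitation the product failed to see
            "false":     np.sum(sat[false]),   # precipitation reported where none fell
        }

    rng = np.random.default_rng(2)
    ref = rng.gamma(0.5, 4.0, 1000)             # synthetic reference field (mm/day)
    sat = ref * rng.lognormal(0.0, 0.4, 1000)   # synthetic satellite estimate
    print(decompose_bias(sat, ref))
    ```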

  3. a Measuring System with AN Additional Channel for Eliminating the Dynamic Error

    NASA Astrophysics Data System (ADS)

    Dichev, Dimitar; Koev, Hristofor; Louda, Petr

    2014-03-01

    The present article describes a measuring system for determining the parameters of vessels. The system has high measurement accuracy when operating in both static and dynamic mode. It is designed on a gyro-free principle for plotting a vertical. High measurement accuracy is achieved by using a simplified design of the mechanical module as well as by minimizing the instrumental error. A new solution for improving the measurement accuracy in dynamic mode is offered. The approach presented is based on a method where the dynamic error is eliminated in real time, unlike existing measurement methods and tools where stabilization of the vertical in the inertial space is used. The results obtained from the theoretical experiments, performed on the basis of the developed mathematical model, demonstrate the effectiveness of the suggested measurement approach.

  4. System Error Compensation Methodology Based on a Neural Network for a Micromachined Inertial Measurement Unit

    PubMed Central

    Liu, Shi Qiang; Zhu, Rong

    2016-01-01

    Error compensation of micromachined inertial measurement units (MIMU) is essential in practical applications. This paper presents a new compensation method using neural-network-based identification for MIMU, which capably solves the universal problems of cross-coupling, misalignment, eccentricity, and other deterministic errors existing in a three-dimensional integrated system. Using a neural network to model a complex multivariate and nonlinear coupling system, the errors can be readily compensated through a comprehensive calibration. In this paper, we also present a thermal-gas MIMU based on thermal expansion, which measures three-axis angular rates and three-axis accelerations using only three thermal-gas inertial sensors, each of which measures one-axis angular rate and one-axis acceleration simultaneously in one chip. The developed MIMU (100 × 100 × 100 mm³) possesses the advantages of simple structure, high shock resistance, and large measuring ranges (three-axis angular rates of ±4000°/s and three-axis accelerations of ±10 g) compared with conventional MIMUs, due to using a gas medium instead of a mechanical proof mass as the key moving and sensing element. However, the gas MIMU suffers from cross-coupling effects, which corrupt the system accuracy. The proposed compensation method is therefore applied to compensate the system errors of the MIMU. Experiments validate the effectiveness of the compensation, and the measurement errors of the three-axis angular rates and three-axis accelerations are reduced to less than 1% and 3% of the uncompensated errors in the rotation range of ±600°/s and the acceleration range of ±1 g, respectively. PMID:26840314
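
    A schematic of the calibration idea, assuming a small scikit-learn network in place of the authors' model: learn the mapping from the six cross-coupled raw channels to the six reference quantities, then use the network output as the compensated measurement.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical calibration set: raw outputs of the 3 dual-axis sensors
    # (6 channels) versus the 6 turntable references (3 rates + 3 accelerations).
    rng = np.random.default_rng(3)
    raw = rng.standard_normal((5000, 6))
    ref = raw @ rng.normal(0, 1, (6, 6)) * 0.9 + 0.05 * raw[:, [0]] * raw[:, [3]]

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000))
    model.fit(raw, ref)

    # Compensated output = network prediction from the cross-coupled raw channels.
    compensated = model.predict(raw[:10])
    ```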

  6. Improving School Accountability Measures. NBER Working Paper Series.

    ERIC Educational Resources Information Center

    Kane, Thomas J.; Staiger, Douglas O.

    A growing number of states are using annual school-level test scores as part of their school accountability systems. This paper highlights an under-appreciated weakness of that approach, the imprecision of school-level test score means, and proposes a method for discerning signal from noise in annual school report cards. Using methods developed in…

  7. An error compensation method of laser displacement sensor in the inclined surface measurement

    NASA Astrophysics Data System (ADS)

    Li, Feng; Xiong, Zhongxing; Li, Bin

    2015-10-01

    Laser triangulation displacement sensors are an important tool for non-contact displacement measurement and have been widely used in the field of freeform surface measurement. However, the measurement accuracy of such optical sensors is readily influenced by the geometrical shape and surface properties of the inspected surfaces. This study presents an error compensation method for the measurement of inclined surfaces using a 1D laser displacement sensor. The effect of the incident angle on the measurement results was investigated by analyzing the laser spot projected on the inclined surface. Both the shape and the light intensity distribution of the spot are influenced by the incident angle, which leads to measurement error. Because the beam spot size differs at different measurement positions according to Gaussian beam propagation laws, the light spot projected on the inclined surface is approximately an ellipse. Notably, this ellipse is not fully symmetrical, because the spot size of a Gaussian beam differs at different positions. By analyzing how the spot shape changes, an error compensation model can be established. The method was verified through the measurement of a ceramic plane mounted on a high-accuracy 5-axis Mikron UCP 800 Duro milling center. The results show that the method is effective in increasing the measurement accuracy.

  8. Design considerations for case series models with exposure onset measurement error

    PubMed Central

    Mohammed, Sandra M.; Dalrymple, Lorien S.; Şentürk, Damla; Nguyen, Danh V.

    2014-01-01

    The case series model allows for estimation of the relative incidence of events, such as cardiovascular events, within a pre-specified time window after an exposure, such as an infection. The method requires only cases (individuals with events) and controls for all fixed/time-invariant confounders. The measurement error case series model extends the original case series model to handle imperfect data, where the timing of an infection (exposure) is not known precisely. In this work, we propose a method for power/sample size determination for the measurement error case series model. Extensive simulation studies are used to assess the accuracy of the proposed sample size formulas. We also examine the magnitude of the relative loss of power due to exposure onset measurement error, compared to the ideal situation where the time of exposure is measured precisely. To facilitate the design of case series studies, we provide publicly available web-based tools for determining power/sample size for both the measurement error case series model as well as the standard case series model. PMID:22911898

  9. Nano-metrology: The art of measuring X-ray mirrors with slope errors <100 nrad.

    PubMed

    Alcock, Simon G; Nistea, Ioana; Sawhney, Kawal

    2016-05-01

    We present a comprehensive investigation of the systematic and random errors of the nano-metrology instruments used to characterize synchrotron X-ray optics at Diamond Light Source. With experimental skill and careful analysis, we show that these instruments used in combination are capable of measuring state-of-the-art X-ray mirrors. Examples are provided of how Diamond metrology data have helped to achieve slope errors of <100 nrad for optical systems installed on synchrotron beamlines, including: iterative correction of substrates using ion beam figuring and optimal clamping of monochromator grating blanks in their holders. Simulations demonstrate how random noise from the Diamond-NOM's autocollimator adds into the overall measured value of the mirror's slope error, and thus predict how many averaged scans are required to accurately characterize different grades of mirror. PMID:27250374
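
    The scan-averaging argument is the usual suppression of independent random noise by 1/√N; a quick numeric check with invented noise and budget figures:

    ```python
    import numpy as np

    sigma_scan = 50e-9   # assumed single-scan slope-error noise, rad RMS
    budget = 20e-9       # assumed noise allowance in the slope-error budget, rad

    # Independent scan noise averages down as sigma / sqrt(N).
    n_scans = int(np.ceil((sigma_scan / budget) ** 2))
    print(n_scans, "scans ->", sigma_scan / np.sqrt(n_scans), "rad RMS")
    ```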

  10. Observation of spectrum effect on the measurement of intrinsic error field on EAST

    NASA Astrophysics Data System (ADS)

    Wang, Hui-Hui; Sun, You-Wen; Qian, Jin-Ping; Shi, Tong-Hui; Shen, Biao; Gu, Shuai; Liu, Yue-Qiang; Guo, Wen-Feng; Chu, Nan; He, Kai-Yang; Jia, Man-Ni; Chen, Da-Long; Xue, Min-Min; Ren, Jie; Wang, Yong; Sheng, Zhi-Cai; Xiao, Bing-Jia; Luo, Zheng-Ping; Liu, Yong; Liu, Hai-Qing; Zhao, Hai-Lin; Zeng, Long; Gong, Xian-Zu; Liang, Yun-Feng; Wan, Bao-Nian; The EAST Team

    2016-06-01

    Intrinsic error field on EAST is measured using the ‘compass scan’ technique with different n = 1 magnetic perturbation coil configurations in ohmically heated discharges. The intrinsic error field measured using a non-resonant dominated spectrum with even connection of the upper and lower resonant magnetic perturbation coils is of the order b_r^{2,1}/B_T ≃ 10^{-5}, and the toroidal phase of the intrinsic error field is around 60°. A clear difference between the results using the two coil configurations, resonant and non-resonant dominated spectra, is observed. The ‘resonant’ and ‘non-resonant’ terminology is based on vacuum modeling. The penetration thresholds of the non-resonant dominated cases are much smaller than those of the resonant cases. The difference in penetration thresholds between the resonant and non-resonant cases is reduced by plasma response modeling using the MARS-F code.

  12. Accounting for error due to misclassification of exposures in case-control studies of gene-environment interaction.

    PubMed

    Zhang, Li; Mukherjee, Bhramar; Ghosh, Malay; Gruber, Stephen; Moreno, Victor

    2008-07-10

    We consider analysis of data from an unmatched case-control study design with a binary genetic factor and a binary environmental exposure when both genetic and environmental exposures could be potentially misclassified. We devise an estimation strategy that corrects for misclassification errors and also exploits the gene-environment independence assumption. The proposed corrected point estimates and confidence intervals for misclassified data reduce back to standard analytical forms as the misclassification error rates go to zero. We illustrate the methods by simulating unmatched case-control data sets under varying levels of disease-exposure association and with different degrees of misclassification. A real data set on a case-control study of colorectal cancer where a validation subsample is available for assessing genotyping error is used to illustrate our methods.
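
    One standard building block for this kind of correction is the matrix method: invert the misclassification matrix formed from sensitivity and specificity to recover corrected exposure counts. A minimal sketch with invented rates (the paper's estimator additionally exploits gene-environment independence and propagates uncertainty):

    ```python
    import numpy as np

    def correct_counts(observed, sens, spec):
        """Recover true (exposed, unexposed) counts from misclassified ones."""
        # M[i, j] = P(observed class i | true class j)
        M = np.array([[sens, 1.0 - spec],
                      [1.0 - sens, spec]])
        return np.linalg.solve(M, observed)

    observed = np.array([120.0, 380.0])   # observed exposed / unexposed counts
    print(correct_counts(observed, sens=0.9, spec=0.95))
    ```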

  13. Assessment and Calibration of Ultrasonic Measurement Errors in Estimating Weathering Index of Stone Cultural Heritage

    NASA Astrophysics Data System (ADS)

    Lee, Y.; Keehm, Y.

    2011-12-01

    Estimating the degree of weathering of stone cultural heritage, such as pagodas and statues, is very important for planning conservation and restoration. Ultrasonic measurement is one of the most commonly used techniques for evaluating the weathering index of stone cultural properties, since it is easy to use and non-destructive. Typically a portable ultrasonic device, PUNDIT, with exponential sensors is used. However, many factors cause errors in the measurements, such as operators, sensor layouts, or measurement directions. In this study, we carried out a variety of measurements with different operators (male and female), different sensor layouts (direct and indirect), and different sensor directions (anisotropy). For operator bias, we found no significant differences by the operator's sex, while the pressure an operator exerts can create larger errors in measurements; calibrating with a standard sample for each operator is therefore essential. For the sensor layout, we found that the indirect measurement (commonly used for cultural properties, since direct measurement is difficult in most cases) gives a lower velocity than the real one. The correction coefficient differs slightly for different types of rock: 1.50 for granite and sandstone and 1.46 for marble. For the sensor directions, we found that many rocks have slight anisotropy in their ultrasonic velocity, though they are considered isotropic at the macroscopic scale; thus averaging four directional measurements (0°, 45°, 90°, 135°) gives much smaller errors (the variance is 2-3 times smaller). In conclusion, we quantitatively report the errors in ultrasonic measurement of stone cultural properties from various sources and suggest the amount of correction and the procedures needed to calibrate the measurements.

  14. Wide-aperture laser beam measurement using transmission diffuser: errors modeling

    NASA Astrophysics Data System (ADS)

    Matsak, Ivan S.

    2015-06-01

    Instrumental errors in measuring the diameter of a wide-aperture laser beam were modeled in order to build the measurement setup and justify its metrological characteristics. The modeled setup is based on a CCD camera and a transmission diffuser. This method is appropriate for precision measurement of large laser beam widths from 10 mm up to 1000 mm; such beams cannot be measured with other methods based on a slit, pinhole, knife edge, or direct CCD camera measurement. The method is suitable for continuous and pulsed laser irradiation. However, the transmission diffuser method lacks the metrological justification required in the field of wide-aperture beam-forming system verification. Given the non-availability of a wide-aperture flat-top beam standard, modelling is the preferred way to provide basic reference points for developing the measurement system. Modelling was conducted in MathCAD. A super-Lorentz distribution with shape parameter 6-12 was used as the beam model. Theoretical evaluation showed that the key parameters influencing the error are: relative beam size, spatial non-uniformity of the diffuser, lens distortion, physical vignetting, CCD spatial resolution and effective camera ADC resolution. Errors were modeled for the 90%-of-power beam diameter criterion. The 12th-order super-Lorentz distribution was the primary model, because it precisely matches the experimental distribution at the output of the test beam-forming system, although other orders were also used. Analytic expressions were obtained by analyzing the modelling results for each influencing factor. It was shown that an error below 1% is attainable through an appropriate choice of the expression parameters, based on commercially available components of the setup. The method can provide down to 0.1% error when calibration procedures and multiple measurements are used.

  15. Calibrating system errors of large scale three-dimensional profile measurement instruments by subaperture stitching method.

    PubMed

    Dong, Zhichao; Cheng, Haobo; Feng, Yunpeng; Su, Jingshi; Wu, Hengyu; Tam, Hon-Yuen

    2015-07-01

    This study presents a subaperture stitching method to calibrate system errors of several ~2 m large-scale 3D profile measurement instruments (PMIs). The calibration process was carried out by measuring a Φ460 mm standard flat sample multiple times at different sites of the PMI with a length gauge; then the subaperture data were stitched together using a sequential or simultaneous stitching algorithm that minimizes the inconsistency (i.e., difference) of the discrete data in the overlapped areas. The system error can be used to compensate the measurement results of not only large flats, but also spheres and aspheres. The feasibility of the calibration was validated by measuring a Φ1070 mm aspheric mirror, which can raise the measurement accuracy of PMIs and provide more reliable 3D surface profiles for guiding grinding, lapping, and even initial polishing processes. PMID:26193139
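
    The stitching step reduces to a least-squares alignment: choose a piston (and, in general, a tilt) for each subaperture so that the data disagree as little as possible in the overlap. A one-dimensional, piston-only sketch with invented profiles:

    ```python
    import numpy as np

    # Two overlapping 1D height profiles of the same surface (invented data).
    x1, x2 = np.arange(0, 60), np.arange(40, 100)
    truth = lambda x: 1e-6 * (x - 50.0) ** 2
    z1 = truth(x1) + 0.002   # subaperture 1 carries an unknown piston offset
    z2 = truth(x2) - 0.001   # subaperture 2 carries a different one

    overlap = np.intersect1d(x1, x2)
    d = z1[np.isin(x1, overlap)] - z2[np.isin(x2, overlap)]

    piston = d.mean()         # least-squares piston minimizing overlap mismatch
    z2_stitched = z2 + piston # bring subaperture 2 onto subaperture 1's datum
    print("residual overlap RMS:",
          np.std(z1[np.isin(x1, overlap)] - z2_stitched[np.isin(x2, overlap)]))
    ```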

  16. Sinusoidal Siemens star spatial frequency response measurement errors due to misidentified target centers

    SciTech Connect

    Birch, Gabriel Carisle; Griffin, John Clark

    2015-07-23

    Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.

  18. [Errors in medicine. Causes, impact and improvement measures to improve patient safety].

    PubMed

    Waeschle, R M; Bauer, M; Schmidt, C E

    2015-09-01

    The guarantee of quality of care and patient safety is of major importance in hospitals, even though increased economic pressure and work intensification are ubiquitously present. Nevertheless, adverse events still occur in 3-4 % of hospital stays, and of these 25-50 % are estimated to be avoidable. The identification of possible causes of error and the development of measures for the prevention of medical errors are essential for patient safety. The implementation and continuous development of a constructive culture of error tolerance are fundamental. The origins of errors can be differentiated into systemic latent and individual active causes; components of both categories are typically involved when an error occurs. Systemic causes are, for example, out-of-date structural environments, lack of clinical standards and low personnel density. These causes arise far away from the patient, e.g. management decisions, and can remain unrecognized for a long time. Individual causes involve, e.g. confirmation bias, fixation error and prospective memory failure. These causes have a direct impact on patient care and can result in immediate injury to patients. Stress, unclear information, complex systems and a lack of professional experience can promote individual causes. Awareness of possible causes of error is a fundamental precondition for establishing appropriate countermeasures. Error prevention should include actions directly affecting the causes of error and includes checklists and standard operating procedures (SOP) to avoid fixation and prospective memory failure, and team resource management to improve communication and the generation of collective mental models. Critical incident reporting systems (CIRS) provide the opportunity to learn from previous incidents without resulting in injury to patients. Information technology (IT) support systems, such as the computerized physician order entry system, assist in the prevention of medication errors by providing

  20. Effect of sampling variation on error of rainfall variables measured by optical disdrometer

    NASA Astrophysics Data System (ADS)

    Liu, X. C.; Gao, T. C.; Liu, L.

    2012-12-01

    During the sampling of precipitation particles by optical disdrometers, the randomness of the particles and the sampling variability have a great impact on the accuracy of precipitation variables. Based on a marked point model of raindrop size distribution, the effects of sampling variation on drop size distribution and velocity distribution measurements using optical disdrometers are analyzed by Monte Carlo simulation. The results show that the number of samples, rain rate, drop size distribution, and sampling size have different influences on the accuracy of rainfall variables. The relative errors of rainfall variables caused by sampling variation rank, in descending order: water concentration, mean diameter, mass-weighted mean diameter, mean volume diameter, radar reflectivity factor, and number density; these are essentially independent of the number of samples. The relative error of rain variables is positively correlated with the margin probability, which in turn is positively correlated with the rain rate and the mean diameter of the raindrops. The sampling size is one of the main factors that influence the margin probability: as the sampling area decreases, especially the short side of the sampling area, the probability of margin raindrops grows, and hence so does the error in the rain variables, with medium-size raindrops showing the maximum error. To ensure that the relative error of rainfall variables measured by an optical disdrometer stays below 1%, the width of the light beam should be at least 40 mm.

  1. Measurement error of self-reported physical activity levels in New York City: assessment and correction.

    PubMed

    Lim, Sungwoo; Wyker, Brett; Bartley, Katherine; Eisenhower, Donna

    2015-05-01

    Because it is difficult to objectively measure population-level physical activity levels, self-reported measures have been used as a surveillance tool. However, little is known about their validity in populations living in dense urban areas. We aimed to assess the validity of self-reported physical activity data against accelerometer-based measurements among adults living in New York City and to apply a practical tool to adjust for measurement error in complex sample data using a regression calibration method. We used 2 components of data: 1) dual-frame random digit dialing telephone survey data from 3,806 adults in 2010-2011 and 2) accelerometer data from a subsample of 679 survey participants. Self-reported physical activity levels were measured using a version of the Global Physical Activity Questionnaire, whereas data on weekly moderate-equivalent minutes of activity were collected using accelerometers. Two self-reported health measures (obesity and diabetes) were included as outcomes. Participants with higher accelerometer values were more likely to underreport the actual levels. (Accelerometer values were considered to be the reference values.) After correcting for measurement errors, we found that associations between outcomes and physical activity levels were substantially deattenuated. Despite difficulties in accurately monitoring physical activity levels in dense urban areas using self-reported data, our findings show the importance of performing a well-designed validation study because it allows for understanding and correcting measurement errors.
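
    A compact sketch of regression calibration as described above: fit E[accelerometer | self-report] in the validation subsample, then replace each self-reported value in the full survey by its calibrated prediction before fitting the outcome model. All numbers are invented, and survey weights and covariates are omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Validation subsample: accelerometer (reference) and error-prone self-report.
    accel = rng.gamma(2.0, 100.0, 679)                       # reference instrument
    self_rep = 80 + 0.6 * accel + rng.normal(0, 60, 679)     # under-reporting grows with activity

    # Step 1: calibration model E[accel | self-report] (simple linear fit).
    b1, b0 = np.polyfit(self_rep, accel, 1)

    # Step 2: calibrate the self-reports in the full survey; the calibrated
    # values then serve as the exposure in the outcome (obesity/diabetes) model.
    survey_self_rep = 80 + 0.6 * rng.gamma(2.0, 100.0, 3806) + rng.normal(0, 60, 3806)
    calibrated = b0 + b1 * survey_self_rep
    ```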

  3. Accounting for Sampling Error When Inferring Population Synchrony from Time-Series Data: A Bayesian State-Space Modelling Approach with Applications

    PubMed Central

    Santin-Janin, Hugues; Hugueny, Bernard; Aubry, Philippe; Fouchet, David; Gimenez, Olivier; Pontier, Dominique

    2014-01-01

    Background: Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., density-dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies, although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony, as well as to underestimating the extinction risk of a metapopulation. Methodology/Principal findings: The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we present a state-space modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplify our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compared our results to those of a standard approach neglecting sampling variance. We showed that ignoring sampling variance can mask a synchrony pattern whatever its true value, and that the common practice of averaging a few replicates of population size estimates performs poorly at decreasing the bias of the classical estimator of the synchrony strength. Conclusion/Significance: The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provide a user-friendly R program and a tutorial example to encourage further studies aiming at quantifying the strength of population synchrony to account for uncertainty in

  4. Theoretical analysis of errors when estimating snow distribution through point measurements

    NASA Astrophysics Data System (ADS)

    Trujillo, E.; Lehning, M.

    2015-06-01

    In recent years, marked improvements in our knowledge of the statistical properties of the spatial distribution of snow properties have been achieved thanks to improvements in measuring technologies (e.g., LIDAR, terrestrial laser scanning (TLS), and ground-penetrating radar (GPR)). Despite this, objective and quantitative frameworks for the evaluation of errors in snow measurements have been lacking. Here, we present a theoretical framework for quantitative evaluations of the uncertainty in average snow depth derived from point measurements over a profile section or an area. The error is defined as the expected value of the squared difference between the real mean of the profile/field and the sample mean from a limited number of measurements. The model is tested for one- and two-dimensional survey designs that range from a single measurement to an increasing number of regularly spaced measurements. Using high-resolution (~ 1 m) LIDAR snow depths at two locations in Colorado, we show that the sample errors follow the theoretical behavior. Furthermore, we show how the determination of the spatial location of the measurements can be reduced to an optimization problem for the case of the predefined number of measurements, or to the designation of an acceptable uncertainty level to determine the total number of regularly spaced measurements required to achieve such an error. On this basis, a series of figures are presented as an aid for snow survey design under the conditions described, and under the assumption of prior knowledge of the spatial covariance/correlation properties. With this methodology, better objective survey designs can be accomplished that are tailored to the specific applications for which the measurements are going to be used. The theoretical framework can be extended to other spatially distributed snow variables (e.g., SWE - snow water equivalent) whose statistical properties are comparable to those of snow depth.
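
    The quantity being evaluated is the expected squared difference between the true profile mean and the sample mean, which is computable from an assumed spatial covariance. A numeric sketch for a 1D profile with regularly spaced measurements (the exponential covariance and its parameters are invented):

    ```python
    import numpy as np

    def expected_sq_error(field_x, sample_idx, cov):
        """E[(field mean - sample mean)^2] given a covariance function cov(h)."""
        C = cov(np.abs(field_x[:, None] - field_x[None, :]))
        n, m = len(field_x), len(sample_idx)
        w_f = np.full(n, 1.0 / n)                       # weights of the field mean
        w_s = np.zeros(n); w_s[sample_idx] = 1.0 / m    # weights of the sample mean
        d = w_f - w_s
        return d @ C @ d                                # variance of the difference

    x = np.linspace(0.0, 500.0, 501)                    # 1 m grid, 500 m profile
    expo_cov = lambda h: 0.04 * np.exp(-h / 20.0)       # assumed depth covariance (m^2)
    for m in (2, 5, 10, 25):
        idx = np.linspace(0, 500, m, dtype=int)         # regularly spaced survey points
        print(m, "measurements ->", np.sqrt(expected_sq_error(x, idx, expo_cov)), "m RMSE")
    ```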

  5. Previous estimates of mitochondrial DNA mutation level variance did not account for sampling error: comparing the mtDNA genetic bottleneck in mice and humans.

    PubMed

    Wonnapinij, Passorn; Chinnery, Patrick F; Samuels, David C

    2010-04-01

    In cases of inherited pathogenic mitochondrial DNA (mtDNA) mutations, a mother and her offspring generally have large and seemingly random differences in the amount of mutated mtDNA that they carry. Comparisons of measured mtDNA mutation level variance values have become an important issue in determining the mechanisms that cause these large random shifts in mutation level. These variance measurements have been made with samples of quite modest size, which should be a source of concern because higher-order statistics, such as variance, are poorly estimated from small sample sizes. We have developed an analysis of the standard error of variance from a sample of size n, and we have defined error bars for variance measurements based on this standard error. We calculate variance error bars for several published sets of measurements of mtDNA mutation level variance and show how the addition of the error bars alters the interpretation of these experimental results. We compare variance measurements from human clinical data and from mouse models and show that the mutation level variance is clearly higher in the human data than it is in the mouse models at both the primary oocyte and offspring stages of inheritance. We discuss how the standard error of variance can be used in the design of experiments measuring mtDNA mutation level variance. Our results show that variance measurements based on fewer than 20 measurements are generally unreliable and ideally more than 50 measurements are required to reliably compare variances with less than a 2-fold difference.
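
    For the simplest (normal-theory) case, the standard error of a sample variance is SE(s²) = σ²·√(2/(n−1)); the paper's analysis is more general, but a quick simulation check of this base case illustrates why variances from small samples are so unreliable:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n, sigma2 = 15, 1.0

    se_theory = sigma2 * np.sqrt(2.0 / (n - 1))   # normal-theory standard error of s^2

    # Empirical spread of s^2 across many replicate samples of size n.
    s2 = np.var(rng.standard_normal((100_000, n)), axis=1, ddof=1)
    print(se_theory, s2.std())   # both close to 0.38 for n = 15
    ```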

  6. High dimensional linear regression models under long memory dependence and measurement error

    NASA Astrophysics Data System (ADS)

    Kaul, Abhishek

    This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates problems of interest. A brief literature review is also provided in this chapter. The second chapter investigates the properties of Lasso under long range dependent model errors. Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case, where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution. We then show the asymptotic sign consistency in this setup. These results are established in the high dimensional setup (p > n) where p can be increasing exponentially with n. Finally, we show the consistency and n^(1/2 − d)-consistency of Lasso, along with the oracle property of adaptive Lasso, in the case where p is fixed. Here d is the memory parameter of the stationary error sequence. The performance of Lasso is also analysed in the present setup with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile based estimator for measurement error models. Standard formulations of prediction problems in high dimension regression models assume the availability of fully observed covariates and sub-Gaussian and homogeneous model errors. This makes these methods inapplicable to measurement errors models where covariates are unobservable and observations are possibly non sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where unobservable covariates are nonrandom. The proposed estimators forgo the need for the above mentioned model assumptions. We study these estimators in both the fixed dimension and high dimensional sparse setups, in the latter setup, the

  7. Using Computation Curriculum-Based Measurement Probes for Error Pattern Analysis

    ERIC Educational Resources Information Center

    Dennis, Minyi Shih; Calhoon, Mary Beth; Olson, Christopher L.; Williams, Cara

    2014-01-01

    This article describes how "curriculum-based measurement--computation" (CBM-C) mathematics probes can be used in combination with "error pattern analysis" (EPA) to pinpoint difficulties in basic computation skills for students who struggle with learning mathematics. Both assessment procedures provide ongoing assessment data…

  8. A Study on Sixth Grade Students' Misconceptions and Errors in Spatial Measurement: Length, Area, and Volume

    ERIC Educational Resources Information Center

    Tan Sisman, Gulcin; Aksu, Meral

    2016-01-01

    The purpose of the present study was to portray students' misconceptions and errors while solving conceptually and procedurally oriented tasks involving length, area, and volume measurement. The data were collected from 445 sixth grade students attending public primary schools in Ankara, Türkiye via a test composed of 16 constructed-response…

  9. Sensitivity of Force Specifications to the Errors in Measuring the Interface Force

    NASA Technical Reports Server (NTRS)

    Worth, Daniel

    2000-01-01

    Force-Limited Random Vibration Testing has been applied in the last several years at the NASA Goddard Space Flight Center (GSFC) and other NASA centers for various programs at the instrument and spacecraft level. Different techniques have been developed over the last few decades to estimate the dynamic forces that the test article under consideration will encounter in the flight environment. Some of these techniques are described in the handbook, NASA-HDBK-7004, and the monograph, NASA-RP-1403. This paper will show the effects of some measurement and calibration errors in force gauges. In some cases, the notches in the acceleration spectrum when a random vibration test is performed with measurement errors are the same as the notches produced during a test that has no measurement errors. The paper will also present the results of tests that were used to validate this effect. Knowing the effect of measurement errors can allow tests to continue after force gauge failures or allow dummy gauges to be used in places that are inaccessible to a force gauge.

  10. Measurement Error of Scores on the Mathematics Anxiety Rating Scale across Studies.

    ERIC Educational Resources Information Center

    Capraro, Mary Margaret; Capraro, Robert M.; Henson, Robin K.

    2001-01-01

    Submitted the Mathematics Anxiety Rating Scale (MARS) (F. Richardson and R. Suinn, 1972) to a reliability generalization analysis to characterize the variability of measurement error in MARS scores across administrations and identify characteristics predictive of score reliability variations. Results for 67 analyses generally support the internal…

  11. The Impact of Measurement Error on the Accuracy of Individual and Aggregate SGP

    ERIC Educational Resources Information Center

    McCaffrey, Daniel F.; Castellano, Katherine E.; Lockwood, J. R.

    2015-01-01

    Student growth percentiles (SGPs) express students' current observed scores as percentile ranks in the distribution of scores among students with the same prior-year scores. A common concern about SGPs at the student level, and mean or median SGPs (MGPs) at the aggregate level, is potential bias due to test measurement error (ME). Shang,…

  12. Measurement Error in Nonparametric Item Response Curve Estimation. Research Report. ETS RR-11-28

    ERIC Educational Resources Information Center

    Guo, Hongwen; Sinharay, Sandip

    2011-01-01

    Nonparametric, or kernel, estimation of item response curve (IRC) is a concern theoretically and operationally. Accuracy of this estimation, often used in item analysis in testing programs, is biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. In this study, we investigate…

  13. Covariate Measurement Error Adjustment for Multilevel Models with Application to Educational Data

    ERIC Educational Resources Information Center

    Battauz, Michela; Bellio, Ruggero; Gori, Enrico

    2011-01-01

    This article proposes a multilevel model for the assessment of school effectiveness where the intake achievement is a predictor and the response variable is the achievement in the subsequent periods. The achievement is a latent variable that can be estimated on the basis of an item response theory model and hence subject to measurement error.…

  14. Correlation Attenuation Due to Measurement Error: A New Approach Using the Bootstrap Procedure

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Veprinsky, Anna

    2012-01-01

    Issues with correlation attenuation due to measurement error are well documented. More than a century ago, Spearman proposed a correction for attenuation. However, this correction has seen very little use since it can potentially inflate the true correlation beyond one. In addition, very little confidence interval (CI) research has been done for…

  15. The Relationship between Mean Square Differences and Standard Error of Measurement: Comment on Barchard (2012)

    ERIC Educational Resources Information Center

    Pan, Tianshu; Yin, Yue

    2012-01-01

    In the discussion of mean square difference (MSD) and standard error of measurement (SEM), Barchard (2012) concluded that the MSD between 2 sets of test scores is greater than 2(SEM)² and SEM underestimates the score difference between 2 tests when the 2 tests are not parallel. This conclusion has limitations for 2 reasons. First,…

  16. Spatial accounting for errors in LiDAR-derived products: Snow volume and snow water equivalent estimation

    NASA Astrophysics Data System (ADS)

    Tinkham, W. T.; Hoffman, C. M.; Falkowski, M. J.; Smith, A. M.; Link, T. E.; Marshall, H.

    2011-12-01

    Light Detection and Ranging (LiDAR) has become one of the most effective and reliable means of characterizing surface topography and vegetation structure. Most LiDAR-derived estimates such as vegetation height, snow depth, and floodplain boundaries rely on the accurate creation of digital terrain models (DTM). As a result of the importance of an accurate DTM in using LiDAR data to estimate snow depth, it is necessary to understand the variables that influence the DTM accuracy in order to assess snow depth error. A series of 4 x 4 m plots that were surveyed at 0.5 m spacing in a semi-arid catchment were used for training the Random Forests algorithm along with a series of 35 variables in order to spatially predict vertical error within a LiDAR derived DTM. The final model was utilized to predict the combined error resulting from snow volume and snow water equivalent estimates derived from a snow-free LiDAR DTM and a snow-on LiDAR acquisition of the same site. The methodology allows for a statistical quantification of the spatially-distributed error patterns that are incorporated into the estimation of snow volume and snow water equivalents from LiDAR.
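
    A schematic of the workflow: train a Random Forest on plot-level predictors of DTM vertical error, then predict a spatial error surface to attach to the LiDAR snow-depth estimates. The feature table and response are placeholders for the study's 35 variables and survey plots.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(6)

    # Hypothetical training table: terrain/vegetation predictors (slope, return
    # density, canopy cover, ...) versus surveyed DTM vertical error per plot.
    X_train = rng.standard_normal((400, 35))
    dtm_error = (0.1 * X_train[:, 0] - 0.05 * X_train[:, 2]
                 + 0.02 * rng.standard_normal(400))

    rf = RandomForestRegressor(n_estimators=500, random_state=0)
    rf.fit(X_train, dtm_error)

    # Predict a per-cell error surface to fold into the snow-on minus snow-off
    # depth map and its volume / SWE uncertainty.
    X_grid = rng.standard_normal((10_000, 35))
    vertical_error_map = rf.predict(X_grid)
    ```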

  17. Tests of a dynamic systems account of the A-not-B error: the influence of prior experience on the spatial memory abilities of two-year-olds.

    PubMed

    Spencer, J P; Smith, L B; Thelen, E

    2001-01-01

    Recently, Smith, Thelen, and colleagues proposed a dynamic systems account of the Piagetian "A-not-B" error in which infants' errors result from general processes that make goal-directed actions to remembered locations. Based on this account, the A-not-B error should be a general phenomenon, observable in different tasks and at different points in development. Smith, Thelen, et al.'s proposal was tested using an A-not-B version of a sandbox task. During three training trials and three "A" trials, 2-year-olds watched as a toy was buried in a sandbox at Location A. Following a 10-s delay, children searched for the object. Across five experiments, children's (total N = 92) performance on the A trials was accurate. After the A trials, children watched as a toy was hidden at Location B, 8 to 10 inches from Location A. In all experiments, children's searches after a 10-s delay were significantly biased in the direction of Location A. Furthermore, this bias toward Location A decreased with repeated trials to Location B, as well as when children completed fewer trials to Location A. Together, these data suggest that A-not-B-type errors are pervasive across tasks and development.

  18. Regularization methods used in error analysis of solar particle spectra measured on SOHO/EPHIN

    NASA Astrophysics Data System (ADS)

    Kharytonov, A.; Böhm, E.; Wimmer-Schweingruber, R. F.

    2009-02-01

    Context: The telescope EPHIN (Electron, Proton, Helium INstrument) on the SOHO (SOlar and Heliospheric Observatory) spacecraft measures the energy deposit of solar particles passing through the detector system. The original energy spectrum of solar particles is obtained by regularization methods from EPHIN measurements. It is important not only to obtain the solution of this inverse problem but also to estimate errors or uncertainties of the solution. Aims: The focus of this paper is to evaluate the influence of errors or noise in the instrument response function (IRF) and in the measurements when calculating energy spectra in space-based observations by regularization methods. Methods: The basis of solar particle spectra calculation is the Fredholm integral equation with the instrument response function as the kernel that is obtained by the Monte Carlo technique in matrix form. The original integral equation reduces to a singular system of linear algebraic equations. The nonnegative solution is obtained by optimization with constraints. For the starting value we use the solution of the algebraic problem that is calculated by regularization methods such as the singular value decomposition (SVD) or the Tikhonov methods. We estimate the local errors from special algebraic and statistical equations that are considered as direct or inverse problems. Inverse problems for the evaluation of errors are solved by regularization methods. Results: This inverse approach with error analysis is applied to data from the solar particle event observed by SOHO/EPHIN on day 1996/191. We find that the various methods have different strengths and weaknesses in the treatment of statistical and systematic errors.
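
    A minimal zeroth-order Tikhonov step for the discretized Fredholm problem y = K f, with local errors estimated by re-solving under perturbed data; the kernel, noise level, and regularization parameter are invented stand-ins for the EPHIN response matrix.

    ```python
    import numpy as np

    def tikhonov(K, y, lam):
        """Solve min ||K f - y||^2 + lam ||f||^2 (zeroth-order Tikhonov)."""
        n = K.shape[1]
        return np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ y)

    rng = np.random.default_rng(7)
    i = np.arange(40)
    K = np.exp(-0.1 * (i[:, None] - i[None, :]) ** 2)      # smooth toy response matrix
    f_true = np.exp(-0.5 * ((i - 20) / 5.0) ** 2)          # toy particle spectrum
    y = K @ f_true + 0.01 * rng.standard_normal(40)        # noisy energy-deposit data

    f_hat = tikhonov(K, y, lam=1e-2)

    # Local error bars by re-solving with data perturbed within the noise level.
    reps = np.stack([tikhonov(K, y + 0.01 * rng.standard_normal(40), 1e-2)
                     for _ in range(200)])
    f_err = reps.std(axis=0)
    ```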

  19. Error analysis for the ground-based microwave ozone measurements during STOIC

    NASA Astrophysics Data System (ADS)

    Connor, Brian J.; Parrish, Alan; Tsou, Jung-Jung; McCormick, M. Patrick

    1995-05-01

    We present a formal error analysis and characterization of the microwave measurements made during the Stratospheric Ozone Intercomparison Campaign (STOIC). The most important error sources are found to be determination of the tropospheric opacity, the pressure-broadening coefficient of the observed line, and systematic variations in instrument response as a function of frequency ("baseline"). Net precision is 4-6% between 55 and 0.2 mbar, while accuracy is 6-10%. Resolution is 8-10 km below 3 mbar and increases to 17 km at 0.2 mbar. We show the "blind" microwave measurements from STOIC and make limited comparisons to other measurements. We use the averaging kernels of the microwave measurement to eliminate resolution and a priori effects from a comparison to SAGE II. The STOIC results and comparisons are broadly consistent with the formal analysis.
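
    The averaging-kernel comparison can be sketched with the standard smoothing relation x_s = x_a + A(x_h - x_a), here with synthetic profiles and kernels standing in for the microwave retrieval and the SAGE II data:

      import numpy as np

      n = 40
      z = np.linspace(0.0, 1.0, n)                  # normalized altitude grid
      A = np.exp(-((z[:, None] - z[None, :]) ** 2) / 0.02)
      A /= A.sum(axis=1, keepdims=True)             # toy rows of smoothing weights

      x_apriori = np.full(n, 1.0)                   # a priori used in the retrieval
      x_high = 1.0 + 0.5 * np.sin(8 * z)            # high-resolution correlative profile

      # The smoothed profile is what the low-resolution instrument would
      # report for the high-resolution atmosphere, so the two retrievals can
      # be compared free of resolution and a priori effects.
      x_smoothed = x_apriori + A @ (x_high - x_apriori)
      print(x_smoothed[:5].round(3))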

  20. Improved error separation technique for on-machine optical lens measurement

    NASA Astrophysics Data System (ADS)

    Fu, Xingyu; Bing, Guo; Zhao, Qingliang; Rao, Zhimin; Cheng, Kai; Mulenga, Kabwe

    2016-04-01

    This paper describes an improved error separation technique (EST) for on-machine surface profile measurement that can be applied to optical lenses on precision and ultra-precision machine tools. Requiring only one precise probe and a linear stage, the improved EST not only reduces measurement cost but also shortens the sampling interval, which means it can be used to measure the profile of small-bore lenses. Combined with a stitching method, the improved EST can also measure the profile of high-height lenses. Because the modification is simple, most traditional ESTs can be adapted in this way. The theoretical analysis and experimental results in this paper show that the improved EST successfully eliminates the slide error and generates an accurate lens profile.

  2. Test-Retest Reliability of the Adaptive Chemistry Assessment Survey for Teachers: Measurement Error and Alternatives to Correlation

    ERIC Educational Resources Information Center

    Harshman, Jordan; Yezierski, Ellen

    2016-01-01

    Determining the error of measurement is a necessity for researchers engaged in bench chemistry, chemistry education research (CER), and a multitude of other fields. Discussions of what measurement error entails and how best to measure it have occurred, but critiques of traditional measures have yielded few alternatives.…

  3. The Measure of Human Error: Direct and Indirect Performance Shaping Factors

    SciTech Connect

    Ronald L. Boring; Candice D. Griffith; Jeffrey C. Joe

    2007-08-01

    The goal of performance shaping factors (PSFs) is to provide measures to account for human performance. PSFs fall into two categories—direct and indirect measures of human performance. While some PSFs such as “time to complete a task” are directly measurable, other PSFs, such as “fitness for duty,” can only be measured indirectly through other measures and PSFs, such as through fatigue measures. This paper explores the role of direct and indirect measures in human reliability analysis (HRA) and the implications that measurement theory has on analyses and applications using PSFs. The paper concludes with suggestions for maximizing the reliability and validity of PSFs.

  4. Error analysis and optimization design of wide spectrum AOTF's optical performance parameter measuring system

    NASA Astrophysics Data System (ADS)

    Qin, Xiage; He, Zhiping; Xu, Rui; Wu, Yu; Shu, Rong

    2015-10-01

    The Acousto-Optic Tunable Filter (AOTF) is a light-dispersion device that achieves diffractive spectral selection through the acousto-optic interaction; it has developed rapidly and is now widely used in spectral analysis and remote sensing. Precise measurement of an AOTF's optical performance parameters is a precondition for spectral radiometric calibration and data inversion in quantitative spectrometers based on AOTFs. This paper introduces an AOTF performance-analysis system covering the 450-3200 nm spectral range, including the fundamental principle of the system and the test methods for the key optical parameters of the AOTF. The error sources in the test system, and the influence of their magnitudes, are analyzed and verified. Numerical simulations of the noise in the detection circuit and of the instability of the light source were carried out, and based on the simulation results, methods for improving the measurement accuracy of the system are proposed, such as improving the light-source parameters and revising the test method to use dual-light-path detection. Experimental results indicate that the relative error can be reduced by 20% and that the stability of the test signal is better than 98%. The error-analysis model and its potential applicability to other optoelectronic measuring systems are also discussed.
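
    A hypothetical Monte Carlo sketch of why dual-light-path detection helps, with assumed noise levels (1% source instability, small detector noise) rather than the paper's measured values:

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000
      true_eff = 0.80                                   # assumed diffraction efficiency

      source = 1.0 + 0.01 * rng.standard_normal(n)      # 1% source instability
      det_noise = 0.002 * rng.standard_normal((2, n))   # detector-chain noise

      diffracted = true_eff * source + det_noise[0]
      incident = source + det_noise[1]

      single_path = diffracted               # assumes the source power is exactly 1.0
      dual_path = diffracted / incident      # ratio against a monitored incident beam

      print(f"single-path std: {single_path.std():.4f}")   # source drift dominates
      print(f"dual-path std:   {dual_path.std():.4f}")     # drift largely cancels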

  5. Skin movement errors in measurement of sagittal lumbar and hip angles in young and elderly subjects.

    PubMed

    Kuo, Yi-Liang; Tully, Elizabeth A; Galea, Mary P

    2008-02-01

    Errors in measurement of sagittal lumbar and hip angles due to skin movement on the pelvis and/or lateral thigh were measured in young (n = 21, age = 18.6 +/- 2.1 years) and older (n = 23, age = 70.9 +/- 6.4 years) age groups. Skin reference markers were attached over specific landmarks of healthy young and elderly subjects, who were videotaped in three static positions of hip flexion using the 2D PEAK Motus video analysis system. Sagittal lumbar and hip angles were calculated from skin reference markers and manually palpated landmarks. The elderly subjects demonstrated greater errors in lumbar angle due to skin movement on the pelvis only in the maximal hip flexion position. The traditional model (ASIS-PSIS-GT-LFE) underestimated sagittal hip angle and the revised model (ASIS-PSIS-2/3Th-1/4Th) provided more accurate measurement of sagittal hip angle throughout the full available range of hip flexion. Skin movement on the pelvis had a small counterbalancing effect on the larger errors from lateral thigh markers (GT-LFE), thereby decreasing hip angle error.

  6. Effects of Spectral Error in Efficiency Measurements of GaInAs-Based Concentrator Solar Cells

    SciTech Connect

    Osterwald, C. R.; Wanlass, M. W.; Moriarty, T.; Steiner, M. A.; Emery, K. A.

    2014-03-01

    This technical report documents a particular error in efficiency measurements of triple-absorber concentrator solar cells caused by incorrect spectral irradiance -- specifically, one that occurs when the irradiance from unfiltered, pulsed xenon solar simulators into the GaInAs bottom subcell is too high. For cells designed so that the light-generated photocurrents in the three subcells are nearly equal, this condition can cause a large increase in the measured fill factor, which, in turn, causes a significant artificial increase in the efficiency. The error is readily apparent when the data under concentration are compared to measurements with correctly balanced photocurrents, and manifests itself as discontinuities in plots of fill factor and efficiency versus concentration ratio. In this work, we simulate the magnitudes and effects of this error with a device-level model of two concentrator cell designs, and demonstrate how a new Spectrolab, Inc., Model 460 Tunable-High Intensity Pulsed Solar Simulator (T-HIPSS) can mitigate the error.

  7. Measurement error in two-stage analyses, with application to air pollution epidemiology.

    PubMed

    Szpiro, Adam A; Paciorek, Christopher J

    2013-12-01

    Public health researchers often estimate health effects of exposures (e.g., pollution, diet, lifestyle) that cannot be directly measured for study subjects. A common strategy in environmental epidemiology is to use a first-stage (exposure) model to estimate the exposure based on covariates and/or spatio-temporal proximity and to use predictions from the exposure model as the covariate of interest in the second-stage (health) model. This induces a complex form of measurement error. We propose an analytical framework and methodology that is robust to misspecification of the first-stage model and provides valid inference for the second-stage model parameter of interest. We decompose the measurement error into components analogous to classical and Berkson error and characterize properties of the estimator in the second-stage model if the first-stage model predictions are plugged in without correction. Specifically, we derive conditions for compatibility between the first- and second-stage models that guarantee consistency (and have direct and important real-world design implications), and we derive an asymptotic estimate of finite-sample bias when the compatibility conditions are satisfied. We propose a methodology that (1) corrects for finite-sample bias and (2) correctly estimates standard errors. We demonstrate the utility of our methodology in simulations and an example from air pollution epidemiology.
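
    A compact simulation of the two-stage workflow on synthetic data; the first-stage model is correctly specified here, so the plug-in slope comes out nearly unbiased, in the spirit of the compatibility conditions discussed above (the paper's bias and standard-error corrections are not implemented):

      import numpy as np

      rng = np.random.default_rng(7)
      beta = 0.5                                        # true health-effect slope

      # Stage 1: exposure model fitted at 200 monitor locations.
      z_mon = rng.standard_normal(200)                  # spatial covariate
      x_mon = 1.0 + 2.0 * z_mon + rng.standard_normal(200)
      gamma = np.polyfit(z_mon, x_mon, 1)

      # Stage 2: plug predicted exposures into the health model for 2000 subjects.
      z_sub = rng.standard_normal(2000)
      x_sub = 1.0 + 2.0 * z_sub + rng.standard_normal(2000)   # true, unobserved exposure
      y = beta * x_sub + rng.standard_normal(2000)            # health outcome

      w = np.polyval(gamma, z_sub)                      # first-stage predictions
      beta_hat = np.polyfit(w, y, 1)[0]                 # naive plug-in estimate
      print(f"true beta = {beta}, plug-in estimate = {beta_hat:.3f}")
      # The prediction error here is Berkson-like, so the plug-in slope is
      # nearly unbiased; an incompatible first-stage model would not be so kind.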

  8. Measurement and simulation of clock errors from resource-constrained embedded systems

    NASA Astrophysics Data System (ADS)

    Collett, M. A.; Matthews, C. E.; Esward, T. J.; Whibberley, P. B.

    2010-07-01

    Resource-constrained embedded systems such as wireless sensor networks are becoming increasingly sought-after in a range of critical sensing applications. Hardware for such systems is typically developed as a general tool, intended for research and flexibility. These systems often have unexpected limitations and sources of error when being implemented for specific applications. We investigate via measurement and simulation the output of the onboard clock of a Crossbow MICAz testbed, comprising a quartz oscillator accessed via a combination of hardware and software. We show that the clock output available to the user suffers a number of instabilities and errors. Using a simple software simulation of the system based on a series of nested loops, we identify the source of each component of the error, finding that there is a 7.5 × 10⁻⁶ probability that a given oscillation from the governing crystal will be miscounted, resulting in frequency jitter over a 60 µHz range.
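
    A sketch of the miscount mechanism, using the reported 7.5 × 10⁻⁶ miscount probability but an assumed 32.768 kHz crystal frequency and interval length (the MICAz values are not reproduced here):

      import numpy as np

      rng = np.random.default_rng(3)
      f_nominal = 32_768            # Hz; assumed watch-crystal rate
      p_miss = 7.5e-6               # reported probability of miscounting an oscillation
      n_trials = 10_000

      ticks = int(f_nominal * 1.0)                          # ticks in a nominal 1 s interval
      missed = rng.binomial(ticks, p_miss, size=n_trials)   # miscounts per interval
      t_measured = (ticks - missed) / f_nominal

      err_us = (t_measured - 1.0) * 1e6
      print(f"mean error = {err_us.mean():.3f} us, spread = {err_us.std():.3f} us")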

  9. Analysis of vibration induced error in turbulence velocity measurements from an aircraft wing tip boom

    NASA Technical Reports Server (NTRS)

    Akkari, S. H.; Frost, W.

    1982-01-01

    The effect of the rolling motion of a wing on the magnitude of the error induced by wing vibration when measuring atmospheric turbulence with a wind probe mounted on the wing tip was investigated. The wing considered had characteristics similar to those of a B-57 Canberra aircraft, and von Karman's cross-spectrum function was used to estimate the cross-correlation of atmospheric turbulence. Although the calculated error is smaller than that obtained when only elastic bending and vertical motion of the wing are considered, it remains relatively large in the frequency range close to the natural frequencies of the wing. It is therefore concluded that accelerometers mounted on the wing tip are needed to correct for this error, or that the atmospheric velocity data must be appropriately filtered.

  10. An Assessment of Errors and Their Reduction in Terrestrial Laser Scanner Measurements in Marmorean Surfaces

    NASA Astrophysics Data System (ADS)

    Garcia-Fernandez, Jorge

    2016-03-01

    The need for accurate documentation for the preservation of cultural heritage has prompted the use of the terrestrial laser scanner (TLS) in this discipline. Study of TLS in the heritage context has focused on opaque surfaces with Lambertian reflectance, while translucent and anisotropic materials remain a major challenge. For such materials, TLS measurements suffer significant distortion caused by the optical properties of the surface under laser stimulation, making range-based measurement unsuitable for digital modelling in a wide range of cases. The purpose of this paper is to illustrate and discuss these deficiencies and the resulting errors in documenting marmorean surfaces with TLS based on time-of-flight and phase-shift. The paper also proposes reducing the error in depth measurement by adjusting the incidence of the laser beam. The analysis is conducted through controlled experiments.

  11. Some comments on misspecification of priors in Bayesian modelling of measurement error problems.

    PubMed

    Richardson, S; Leblond, L

    In this paper we discuss some aspects of misspecification of prior distributions in the context of Bayesian modelling of measurement error problems. A Bayesian approach to the treatment of common measurement error situations encountered in epidemiology has recently been proposed. Its implementation involves, first, the structural specification, through conditional independence relationships, of three submodels (a measurement model, an exposure model and a disease model) and, second, the choice of functional forms for the distributions involved in the submodels. We present some results indicating how the estimation of the regression parameters of interest, carried out using Gibbs sampling, can be influenced by misspecification of the parametric shape of the prior distribution of exposure. PMID:9004392

  12. Long-term continuous acoustical suspended-sediment measurements in rivers - Theory, application, bias, and error

    USGS Publications Warehouse

    Topping, David J.; Wright, Scott A.

    2016-05-04

    these sites. In addition, detailed, step-by-step procedures are presented for the general river application of the method.Quantification of errors in sediment-transport measurements made using this acoustical method is essential if the measurements are to be used effectively, for example, to evaluate uncertainty in long-term sediment loads and budgets. Several types of error analyses are presented to evaluate (1) the stability of acoustical calibrations over time, (2) the effect of neglecting backscatter from silt and clay, (3) the bias arising from changes in sand grain size, (4) the time-varying error in the method, and (5) the influence of nonrandom processes on error. Results indicate that (1) acoustical calibrations can be stable for long durations (multiple years), (2) neglecting backscatter from silt and clay can result in unacceptably high bias, (3) two frequencies are likely required to obtain sand-concentration measurements that are unbiased by changes in grain size, depending on site-specific conditions and acoustic frequency, (4) relative errors in silt-and-clay- and sand-concentration measurements decrease substantially as concentration increases, and (5) nonrandom errors may arise from slow changes in the spatial structure of suspended sediment that affect the relations between concentration in the acoustically ensonified part of the cross section and concentration in the entire river cross section. Taken together, the error analyses indicate that the two-frequency method produces unbiased measurements of suspended-silt-and-clay and sand concentration, with errors that are similar to, or larger than, those associated with conventional sampling methods.

  13. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
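
    The propagation-of-error logic can be illustrated with made-up component variances (not the Skylab values): the balance variance is the sum of the component variances plus twice the covariance terms, with the body-mass term dominating the total:

      import numpy as np

      # Illustrative component standard deviations (g/day), not Skylab values.
      sd = {"intake": 30.0, "urine": 25.0, "evaporation": 40.0, "body mass": 120.0}
      var = {k: v**2 for k, v in sd.items()}

      cov_share = 0.05                                  # interaction (covariance) share
      var_total = sum(var.values()) * (1 + 2 * cov_share)

      print(f"balance SD = {np.sqrt(var_total):.0f} g/day")
      print(f"body-mass share of variance = {var['body mass'] / var_total:.0%}")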

  14. Evaluating measurement error in readings of blood pressure for adolescents and young adults.

    PubMed

    Bauldry, Shawn; Bollen, Kenneth A; Adair, Linda S

    2015-04-01

    Readings of blood pressure are known to be subject to measurement error, but the optimal method for combining multiple readings is unknown. This study assesses different sources of measurement error in blood pressure readings and assesses methods for combining multiple readings using data from a sample of adolescents/young adults who were part of a longitudinal epidemiological study based in Cebu, Philippines. Three sets of blood pressure readings were collected at 2-year intervals for 2127 adolescents and young adults as part of the Cebu National Longitudinal Health and Nutrition Study. Multi-trait, multi-method (MTMM) structural equation models in different groups were used to decompose measurement error in the blood pressure readings into systematic and random components and to examine patterns in the measurement across males and females and over time. The results reveal differences in the measurement properties of blood pressure readings by sex and over time that suggest the combination of multiple readings should be handled separately for these groups at different time points. The results indicate that an average (mean) of the blood pressure readings has high validity relative to a more complicated factor-score-based linear combination of the readings. PMID:25548966

  15. Influence of sky radiance measurement errors on inversion-retrieved aerosol properties

    SciTech Connect

    Torres, B.; Toledano, C.; Cachorro, V. E.; Bennouna, Y. S.; Fuertes, D.; Gonzalez, R.; Frutos, A. M. de; Berjon, A. J.; Dubovik, O.; Goloub, P.; Podvin, T.; Blarel, L.

    2013-05-10

    Remote sensing of the atmospheric aerosol is a well-established technique that is currently used for routine monitoring of this atmospheric component, both from the ground and from satellites. The AERONET program, initiated in the 1990s, is the most extensive network, and its data are currently used by a wide community of users for aerosol characterization, satellite and model validation, and synergetic use with other instrumentation (lidar, in-situ, etc.). Aerosol properties are derived within the network from measurements made by ground-based Sun-sky scanning radiometers. Sky radiances are acquired in two geometries: almucantar and principal plane. Discrepancies in the products obtained from the two geometries have been observed, and the main aim of this work is to determine whether they could be explained by measurement errors. Three systematic errors have been analyzed in order to quantify their effects on the inversion-derived aerosol properties: calibration, pointing accuracy, and finite field of view. Simulations have shown that typical uncertainties in the analyzed quantities (5% in calibration, 0.2° in pointing, and 1.2° field of view) lead to errors in the retrieved parameters that vary with aerosol type and geometry. While calibration and pointing errors have a relevant impact on the products, the finite field of view does not produce notable differences.

  16. Invited Review Article: Error and uncertainty in Raman thermal conductivity measurements.

    PubMed

    Beechem, Thomas; Yates, Luke; Graham, Samuel

    2015-04-01

    Error and uncertainty in Raman thermal conductivity measurements are investigated via finite element based numerical simulation of two geometries often employed—Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter—termed the Raman stress factor—is derived to identify when stress effects will induce large levels of error. Taken together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.

  17. SANG-a kernel density estimator incorporating information about the measurement error

    NASA Astrophysics Data System (ADS)

    Hayes, Robert

    A novel technique is evaluated for analyzing nominally large data sets in which each entry carries its own measurement error. This work begins with a review of modern analytical methodologies such as histogramming, ANOVA, regression (weighted and unweighted), and various error propagation and estimation techniques. It is shown that by assuming the errors obey a functional distribution (such as normal or Poisson), a superposition of the assumed forms provides the most comprehensive and informative graphical depiction of the data set's statistical information. The resultant approach is evaluated only for normally distributed errors, so that the method is effectively a Superposition Analysis of Normalized Gaussians (SANG). SANG is shown to be easily calculated and highly informative in a single graph, where the same result would otherwise require multiple analyses and figures. The work is demonstrated using historical radiochemistry measurements from a transuranic waste geological repository's environmental monitoring program. This work was funded under NRC-HQ-84-14-G-0059.
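
    The core superposition is straightforward to sketch: each measurement contributes a normalized Gaussian centered at its value with its own reported sigma, and the curves are summed (synthetic values below; any further SANG-specific processing is not reproduced):

      import numpy as np
      import matplotlib.pyplot as plt

      values = np.array([1.2, 1.5, 1.9, 2.1, 3.4])   # measured results (arbitrary units)
      sigmas = np.array([0.1, 0.4, 0.2, 0.3, 0.5])   # per-measurement 1-sigma errors

      x = np.linspace(0.0, 5.0, 500)
      # One normalized Gaussian per measurement; the column sum is the SANG curve.
      curves = (np.exp(-0.5 * ((x - values[:, None]) / sigmas[:, None]) ** 2)
                / (sigmas[:, None] * np.sqrt(2 * np.pi)))
      plt.plot(x, curves.sum(axis=0))
      plt.xlabel("measured value")
      plt.ylabel("summed density")
      plt.show()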

  18. Accountability in Higher Education: Are There "Fatal Errors" Embedded in Current U.S. Policy Affecting Higher Education?

    ERIC Educational Resources Information Center

    Grantham, Marilyn H.

    Some observers of political phenomena are referring to the 1990s as the "age of accountability." Early in the decade of the '90s, articles in periodicals, professional journals and other sources were voicing warnings about increasing public policymaker frustration with higher education and the spreading development and implementation of…

  19. Error Correction Method for Wind Speed Measured with Doppler Wind LIDAR at Low Altitude

    NASA Astrophysics Data System (ADS)

    Liu, Bingyi; Feng, Changzhong; Liu, Zhishen

    2014-11-01

    For the purpose of obtaining global vertical wind profiles, the Atmospheric Dynamics Mission Aeolus of European Space Agency (ESA), carrying the first spaceborne Doppler lidar ALADIN (Atmospheric LAser Doppler INstrument), is going to be launched in 2015. DLR (German Aerospace Center) developed the A2D (ALADIN Airborne Demonstrator) for the prelaunch validation. A ground-based wind lidar for wind profile and wind field scanning measurement developed by Ocean University of China is going to be used for the ground-based validation after the launch of Aeolus. In order to provide validation data with higher accuracy, an error correction method is investigated to improve the accuracy of low altitude wind data measured with Doppler lidar based on iodine absorption filter. The error due to nonlinear wind sensitivity is corrected, and the method for merging atmospheric return signal is improved. The correction method is validated by synchronous wind measurements with lidar and radiosonde. The results show that the accuracy of wind data measured with Doppler lidar at low altitude can be improved by the proposed error correction method.

  20. An analysis of temperature-induced errors for an ultrasound distance measuring system. M. S. Thesis

    NASA Technical Reports Server (NTRS)

    Wenger, David Paul

    1991-01-01

    The presentation of research is provided in the following five chapters. Chapter 2 presents the necessary background information and definitions for general work with ultrasound and acoustics. It also discusses the basis for errors in the slant range measurements. Chapter 3 presents a method of problem solution and an analysis of the sensitivity of the equations to slant range measurement errors. It also presents various methods by which the error in the slant range measurements can be reduced to improve overall measurement accuracy. Chapter 4 provides a description of a type of experiment used to test the analytical solution and provides a discussion of its results. Chapter 5 discusses the setup of a prototype collision avoidance system, discusses its accuracy, and demonstrates various methods of improving the accuracy along with the improvements' ramifications. Finally, Chapter 6 provides a summary of the work and a discussion of conclusions drawn from it. Additionally, suggestions for further research are made to improve upon what has been presented here.
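
    A small sketch of the underlying sensitivity, using the standard square-root temperature dependence of the speed of sound in air (the thesis's own error model is not reproduced):

      import numpy as np

      def speed_of_sound(temp_c):
          """Approximate speed of sound in dry air (m/s) at temp_c degrees C."""
          return 331.3 * np.sqrt(1.0 + temp_c / 273.15)

      true_temp, assumed_temp = 30.0, 20.0     # deg C
      true_range = 2.0                         # m

      tof = true_range / speed_of_sound(true_temp)       # measured time of flight
      est_range = tof * speed_of_sound(assumed_temp)     # range using the wrong temperature

      print(f"range error = {(est_range - true_range) * 1e3:.1f} mm "
            f"for a {true_temp - assumed_temp:.0f} C temperature error")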

  1. Error analysis and measurement uncertainty for a fiber grating strain-temperature sensor.

    PubMed

    Tang, Jaw-Luen; Wang, Jian-Neng

    2010-01-01

    A fiber grating sensor capable of distinguishing between temperature and strain, using a reference and a dual-wavelength fiber Bragg grating, is presented. Error analysis and measurement uncertainty for this sensor are studied theoretically and experimentally. The measured root mean squared errors for temperature T and strain ε were estimated to be 0.13 °C and 6 με, respectively. The maximum errors for temperature and strain were calculated as 0.00155 T + 2.90 × 10⁻⁶ ε and 3.59 × 10⁻⁵ ε + 0.01887 T, respectively. Using the estimation of expanded uncertainty at 95% confidence level with a coverage factor of k = 2.205, temperature and strain measurement uncertainties were evaluated as 2.60 °C and 32.05 με, respectively. For the first time, to our knowledge, we have demonstrated the feasibility of estimating the measurement uncertainty for simultaneous strain-temperature sensing with such a fiber grating sensor.
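
    Discriminating strain from temperature with two gratings reduces to inverting a 2 × 2 sensitivity matrix; the coefficients below are typical orders of magnitude (about 1 pm per microstrain and 10 pm per degree), not the paper's calibration:

      import numpy as np

      # Rows: gratings; columns: (strain, temperature) sensitivities (assumed values).
      K = np.array([[1.2e-3, 10.0e-3],
                    [0.9e-3, 12.0e-3]])        # nm/microstrain, nm/deg C

      dlam = np.array([0.080, 0.070])          # measured Bragg wavelength shifts (nm)

      strain, temp = np.linalg.solve(K, dlam)
      print(f"strain = {strain:.1f} microstrain, temperature change = {temp:.2f} C")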

  2. Correction for dynamic bias error in transmission measurements of void fraction

    NASA Astrophysics Data System (ADS)

    Andersson, P.; Sundén, E. Andersson; Svärd, S. Jacobsson; Sjöstrand, H.

    2012-12-01

    Dynamic bias errors occur in transmission measurements, such as X-ray, gamma, or neutron radiography or tomography. This is observed when the properties of the object are not stationary in time and its average properties are assessed. The nonlinear measurement response to changes in transmission within the time scale of the measurement implies a bias, which can be difficult to correct for. A typical example is the tomographic or radiographic mapping of void content in dynamic two-phase flow systems. In this work, the dynamic bias error is described and a method to make a first-order correction is derived. A prerequisite for this method is variance estimates of the system dynamics, which can be obtained using high-speed, time-resolved data acquisition. However, in the absence of such acquisition, a priori knowledge might be used to substitute the time resolved data. Using synthetic data, a void fraction measurement case study has been simulated to demonstrate the performance of the suggested method. The transmission length of the radiation in the object under study and the type of fluctuation of the void fraction have been varied. Significant decreases in the dynamic bias error were achieved to the expense of marginal decreases in precision.
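
    A sketch of the first-order correction on a synthetic fluctuating-attenuation signal: averaging the transmitted intensity before taking the logarithm biases the mean attenuation low, and a variance term restores most of it (toy statistics, not the paper's void-fraction model):

      import numpy as np

      rng = np.random.default_rng(5)
      mu_x = np.clip(1.0 + 0.4 * rng.standard_normal(100_000), 0.0, None)
      I = np.exp(-mu_x)                     # transmitted intensity with I0 = 1

      true_mean = mu_x.mean()               # time-averaged attenuation we want
      naive = -np.log(I.mean())             # attenuation from the averaged signal
      corrected = naive + I.var() / (2 * I.mean() ** 2)   # first-order (Jensen) term

      print(f"true {true_mean:.4f}  naive {naive:.4f}  corrected {corrected:.4f}")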

  3. What is the measure of a safe hospital? Medication errors missed by risk management, clinical staff, and surveyors.

    PubMed

    Grasso, Benjamin C; Rothschild, Jeffrey M; Jordan, Constance W; Jayaram, Geetha

    2005-07-01

    Research in the last decade has identified medication errors as a more frequent cause of unintended harm than was previously thought. Inpatient medication errors and error-prone medication usage are detected internally by medication error reporting and externally through hospital licensing and accreditation surveys. A hospital's rate of medication errors is one of several measures of patient safety available to staff. However, prospective patients and other interested parties must rely upon licensing and accreditation scores, along with varying access to outcome data, as their sole measures of patient safety. We have previously reported that much higher rates of medication errors were found when an independent audit was used compared with rates determined by the usual process of self-report. In this study, we summarize these earlier findings and then compare the error detection sensitivity of licensing and accreditation surveys with that of an independent audit. When experienced surveyors fail to detect a highly error prone medication usage system, it raises questions about the validity of survey scores as a measure of safety (i.e., lack of medication errors). Replication of our findings in other hospital settings is needed. We also recommend measures for improving patient safety by reducing error rates and increasing error detection. PMID:16041238

  4. 50 CFR 648.262 - Accountability measures for red crab limited access vessels.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Accountability measures for red crab... UNITED STATES Management Measures for the Atlantic Deep-Sea Red Crab Fishery § 648.262 Accountability measures for red crab limited access vessels. (a) Closure authority. NMFS shall close the EEZ to...

  5. 50 CFR 648.262 - Accountability measures for red crab limited access vessels.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 50 Wildlife and Fisheries 10 2011-10-01 2011-10-01 false Accountability measures for red crab... UNITED STATES Management Measures for the Atlantic Deep-Sea Red Crab Fishery § 648.262 Accountability measures for red crab limited access vessels. (a) Closure authority. NMFS shall close the EEZ to...

  6. 50 CFR 648.262 - Accountability measures for red crab limited access vessels.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Accountability measures for red crab... UNITED STATES Management Measures for the Atlantic Deep-Sea Red Crab Fishery § 648.262 Accountability measures for red crab limited access vessels. (a) Closure authority. NMFS shall close the EEZ to...

  7. 50 CFR 648.262 - Accountability measures for red crab limited access vessels.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Accountability measures for red crab... UNITED STATES Management Measures for the Atlantic Deep-Sea Red Crab Fishery § 648.262 Accountability measures for red crab limited access vessels. (a) Closure authority. NMFS shall close the EEZ to...

  8. Errors in using two dimensional methods to measure motion about an offset revolute

    SciTech Connect

    Hollerbach, K.; Hollister, A.

    1996-03-01

    2D measurement of human joint motion involves analysis of 3D displacements in an observer-selected measurement plane. Accurate marker placement and alignment of the joint motion plane with the observer plane are difficult. Alignment of the two planes is essential for accurate recording and understanding of the joint mechanism and the movement about it. In nature, joint axes can exist at any orientation and location relative to a global reference frame. An arbitrary axis is any axis that is not coincident with a reference coordinate axis. We calculate the errors that result from measuring joint motion about an arbitrary axis using 2D methods.
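
    The size of such errors is easy to probe numerically: rotate a marker vector about an axis tilted out of the camera axis and measure the angle in the 2D projection (generic rotation-matrix construction, not the authors' code):

      import numpy as np

      def rotation_matrix(axis, theta):
          """Rodrigues' formula: rotation by theta (rad) about a unit vector."""
          a = np.asarray(axis, float) / np.linalg.norm(axis)
          K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
          return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

      theta = np.radians(30.0)                   # true joint rotation
      tilt = np.radians(20.0)                    # axis tilt away from the camera axis z
      axis = [np.sin(tilt), 0.0, np.cos(tilt)]

      segment = np.array([0.0, 1.0, 0.0])        # marker vector on the moving segment
      rotated = rotation_matrix(axis, theta) @ segment

      # 2D analysis: project onto the x-y observer plane and measure the angle there.
      ang0 = np.arctan2(segment[1], segment[0])
      ang1 = np.arctan2(rotated[1], rotated[0])
      print(f"true 30.0 deg, apparent {np.degrees(ang1 - ang0):.1f} deg in projection")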

  9. Sources of resonance-related errors in capacitance versus voltage measurement systems

    NASA Astrophysics Data System (ADS)

    Polishchuk, Igor; Brown, George; Huff, Howard

    2000-10-01

    A frequency dependence of the capacitance of metal-oxide-semiconductor devices is often observed in wafer-level probe station measurements for frequencies exceeding 100 kHz. It is well established, however, that the true capacitance value in the SiO2 devices biased into accumulation should remain frequency-independent well into the gigahertz range. Consequently, the apparent frequency dependence of the capacitance versus voltage characteristic may be the result of a resonance present in the measurement setup. We present a quantitative analysis, which can be used to identify the sources of error, characterize a measurement system, and improve the precision of the collected data.
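
    The classic signature can be reproduced with the series-resonance relation C_app = C / (1 - ω²LC), here with an assumed 1 µH of cable/probe inductance in series with a 100 pF device:

      import numpy as np

      C = 100e-12                  # true device capacitance (100 pF)
      L = 1e-6                     # assumed series cable/probe inductance (1 uH)

      for f in np.logspace(4, 7, 4):               # 10 kHz ... 10 MHz
          w = 2 * np.pi * f
          C_app = C / (1 - w**2 * L * C)
          print(f"{f/1e6:7.2f} MHz -> apparent C = {C_app*1e12:7.2f} pF")
      # Resonance sits at 1 / (2 pi sqrt(L C)) ~ 15.9 MHz; the apparent rise
      # in C with frequency below it mimics a frequency-dependent C-V curve.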

  10. Meta-analysis of gene–environment-wide association scans accounting for education level identifies additional loci for refractive error

    PubMed Central

    Fan, Qiao; Verhoeven, Virginie J. M.; Wojciechowski, Robert; Barathi, Veluchamy A.; Hysi, Pirro G.; Guggenheim, Jeremy A.; Höhn, René; Vitart, Veronique; Khawaja, Anthony P.; Yamashiro, Kenji; Hosseini, S Mohsen; Lehtimäki, Terho; Lu, Yi; Haller, Toomas; Xie, Jing; Delcourt, Cécile; Pirastu, Mario; Wedenoja, Juho; Gharahkhani, Puya; Venturini, Cristina; Miyake, Masahiro; Hewitt, Alex W.; Guo, Xiaobo; Mazur, Johanna; Huffman, Jenifer E.; Williams, Katie M.; Polasek, Ozren; Campbell, Harry; Rudan, Igor; Vatavuk, Zoran; Wilson, James F.; Joshi, Peter K.; McMahon, George; St Pourcain, Beate; Evans, David M.; Simpson, Claire L.; Schwantes-An, Tae-Hwi; Igo, Robert P.; Mirshahi, Alireza; Cougnard-Gregoire, Audrey; Bellenguez, Céline; Blettner, Maria; Raitakari, Olli; Kähönen, Mika; Seppala, Ilkka; Zeller, Tanja; Meitinger, Thomas; Ried, Janina S.; Gieger, Christian; Portas, Laura; van Leeuwen, Elisabeth M.; Amin, Najaf; Uitterlinden, André G.; Rivadeneira, Fernando; Hofman, Albert; Vingerling, Johannes R.; Wang, Ya Xing; Wang, Xu; Tai-Hui Boh, Eileen; Ikram, M. Kamran; Sabanayagam, Charumathi; Gupta, Preeti; Tan, Vincent; Zhou, Lei; Ho, Candice E. H.; Lim, Wan'e; Beuerman, Roger W.; Siantar, Rosalynn; Tai, E-Shyong; Vithana, Eranga; Mihailov, Evelin; Khor, Chiea-Chuen; Hayward, Caroline; Luben, Robert N.; Foster, Paul J.; Klein, Barbara E. K.; Klein, Ronald; Wong, Hoi-Suen; Mitchell, Paul; Metspalu, Andres; Aung, Tin; Young, Terri L.; He, Mingguang; Pärssinen, Olavi; van Duijn, Cornelia M.; Jin Wang, Jie; Williams, Cathy; Jonas, Jost B.; Teo, Yik-Ying; Mackey, David A.; Oexle, Konrad; Yoshimura, Nagahisa; Paterson, Andrew D.; Pfeiffer, Norbert; Wong, Tien-Yin; Baird, Paul N.; Stambolian, Dwight; Wilson, Joan E. Bailey; Cheng, Ching-Yu; Hammond, Christopher J.; Klaver, Caroline C. W.; Saw, Seang-Mei; Rahi, Jugnoo S.; Korobelnik, Jean-François; Kemp, John P.; Timpson, Nicholas J.; Smith, George Davey; Craig, Jamie E.; Burdon, Kathryn P.; Fogarty, Rhys D.; Iyengar, Sudha K.; Chew, Emily; Janmahasatian, Sarayut; Martin, Nicholas G.; MacGregor, Stuart; Xu, Liang; Schache, Maria; Nangia, Vinay; Panda-Jonas, Songhomitra; Wright, Alan F.; Fondran, Jeremy R.; Lass, Jonathan H.; Feng, Sheng; Zhao, Jing Hua; Khaw, Kay-Tee; Wareham, Nick J.; Rantanen, Taina; Kaprio, Jaakko; Pang, Chi Pui; Chen, Li Jia; Tam, Pancy O.; Jhanji, Vishal; Young, Alvin L.; Döring, Angela; Raffel, Leslie J.; Cotch, Mary-Frances; Li, Xiaohui; Yip, Shea Ping; Yap, Maurice K.H.; Biino, Ginevra; Vaccargiu, Simona; Fossarello, Maurizio; Fleck, Brian; Yazar, Seyhan; Tideman, Jan Willem L.; Tedja, Milly; Deangelis, Margaret M.; Morrison, Margaux; Farrer, Lindsay; Zhou, Xiangtian; Chen, Wei; Mizuki, Nobuhisa; Meguro, Akira; Mäkelä, Kari Matti

    2016-01-01

    Myopia is the most common human eye disorder and it results from complex genetic and environmental causes. The rapidly increasing prevalence of myopia poses a major public health challenge. Here, the CREAM consortium performs a joint meta-analysis to test single-nucleotide polymorphism (SNP) main effects and SNP × education interaction effects on refractive error in 40,036 adults from 25 studies of European ancestry and 10,315 adults from 9 studies of Asian ancestry. In European ancestry individuals, we identify six novel loci (FAM150B-ACP1, LINC00340, FBN1, DIS3L-MAP2K1, ARID2-SNAT1 and SLC14A2) associated with refractive error. In Asian populations, three genome-wide significant loci AREG, GABRR1 and PDE10A also exhibit strong interactions with education (P<8.5 × 10−5), whereas the interactions are less evident in Europeans. The discovery of these loci represents an important advance in understanding how gene and environment interactions contribute to the heterogeneity of myopia. PMID:27020472

  12. A State Perspective on Multiple Measures in School Accountability.

    ERIC Educational Resources Information Center

    Schafer, William D.

    Multiple measures may mean multiple opportunities to show achievement or the use of multiple assessment formats. A third meaning is the use of assessments from different sources, such as augmenting an external, usually commercial assessment with a state's own assessment. The first two meanings of multiple assessments have been explored…

  13. 50 CFR 640.28 - Annual catch limits (ACLs) and accountability measures (AMs).

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE SPINY LOBSTER FISHERY OF THE GULF... accountability measures (AMs). For recreational and commercial spiny lobster landings combined, the ACL is...

  14. Using E-Z Reader to examine the consequences of fixation-location measurement error.

    PubMed

    Reichle, Erik D; Drieghe, Denis

    2015-01-01

    There is an ongoing debate about whether fixation durations during reading are only influenced by the processing difficulty of the words being fixated (i.e., the serial-attention hypothesis) or whether they are also influenced by the processing difficulty of the previous and/or upcoming words (i.e., the attention-gradient hypothesis). This article reports the results of 3 simulations that examine how systematic and random errors in the measurement of fixation locations can generate 2 phenomena that support the attention-gradient hypothesis: parafoveal-on-foveal effects and large spillover effects. These simulations demonstrate how measurement error can produce these effects within the context of a computational model of eye-movement control during reading (E-Z Reader; Reichle, 2011) that instantiates strictly serial allocation of attention, thus demonstrating that these effects do not necessarily provide strong evidence against the serial-attention hypothesis.

  15. Potentiometric Measurement of Transition Ranges and Titration Errors for Acid/Base Indicators

    NASA Astrophysics Data System (ADS)

    Flowers, Paul A.

    1997-07-01

    Sophomore analytical chemistry courses typically devote a substantial amount of lecture time to acid/base equilibrium theory, and usually include at least one laboratory project employing potentiometric titrations. In an effort to provide students a laboratory experience that more directly supports their classroom discussions on this important topic, an experiment involving potentiometric measurement of transition ranges and titration errors for common acid/base indicators has been developed. The pH and visually-assessed color of a millimolar strong acid/base system are monitored as a function of added titrant volume, and the resultant data plotted to permit determination of the indicator's transition range and associated titration error. Student response is typically quite positive, and the measured quantities correlate reasonably well to literature values.
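
    The quantities measured in the experiment can be previewed numerically: an exact charge-balance titration curve for a millimolar strong acid/strong base system, and the volume error incurred if an indicator changes color at, say, pH 8.2 instead of the pH 7 equivalence point (illustrative concentrations):

      import numpy as np

      Kw = 1e-14
      Ca, Va, Cb = 1e-3, 50.0, 1e-3       # mol/L, mL: millimolar acid titrated with base

      def pH(Vb):
          delta = (Ca * Va - Cb * Vb) / (Va + Vb)          # excess strong acid (mol/L)
          h = (delta + np.sqrt(delta**2 + 4 * Kw)) / 2     # exact charge balance for [H+]
          return -np.log10(h)

      Vb = np.linspace(49.0, 51.0, 20001)
      V_end = Vb[np.argmin(np.abs(pH(Vb) - 8.2))]          # indicator changes at pH 8.2
      V_eq = Ca * Va / Cb                                  # equivalence volume (50 mL)
      print(f"endpoint {V_end:.3f} mL, titration error {100*(V_end - V_eq)/V_eq:+.2f}%")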

  16. Measurement error analysis of the 3D four-wheel aligner

    NASA Astrophysics Data System (ADS)

    Zhao, Qiancheng; Yang, Tianlong; Huang, Dongzhao; Ding, Xun

    2013-10-01

    The positioning parameters of the four wheels have significant effects on the maneuverability, safety, and energy efficiency of automobiles. With this in mind, the error factors of the 3D four-wheel aligner, which arise in extracting image feature points, calibrating the internal and external parameters of the cameras, calculating positional parameters, and measuring target pose, are analyzed on the basis of a description of the structure and measurement principle of the 3D four-wheel aligner and of the major positioning parameters (toe-in and camber of the four wheels, kingpin inclination, and caster). Technical solutions are then proposed for reducing these error factors; on this basis a new type of aligner has been developed and marketed, and it is highly regarded among customers because its technical indicators meet requirements well.

  17. A novel method for measuring transit tilt error in laser trackers

    NASA Astrophysics Data System (ADS)

    Zhang, Zili; Zhou, Weihu; Zhu, Han; Lin, Xinlong

    2015-02-01

    A novel method was proposed to measure the tilt error between the transit axis and the standing axis of a laser tracker. A gradienter was first used to make the standing axis of the laser tracker perpendicular to the horizontal plane. The laser beam of the tracker was then projected onto a vertical plane set at a certain distance from the tracker, with equal horizontal angles and diverse vertical angles, in two-face mode. The trail of the laser beam was recorded, and a simulation was run to estimate the beam trail under the same conditions. The tilt error was then obtained by comparing the actual result against the simulated one. Experimental results showed that the accuracy of the tilt-measuring method can meet users' demands.

  18. Magnetic field error measurement of the CEBAF (NIST) wiggler using the pulsed wire method

    SciTech Connect

    Wallace, Stephen; Colson, William; Neil, George; Harwood, Leigh

    1993-07-01

    The National Institute of Standards and Technology (NIST) wiggler has been loaned to the Continuous Electron Beam Accelerator Facility (CEBAF). The pulsed wire method [R.W. Warren, Nucl. Instr. and Meth. A272 (1988) 267] has been used to measure the field errors of the entrance wiggler half, and the net path deflection was calculated to be Δx ≈ 5.2 μm.

  19. Error-control and processes optimization of (223/224)Ra measurement using Delayed Coincidence Counter (RaDeCC).

    PubMed

    Xiaoqing, Cheng; Lixin, Yi; Lingling, Liu; Guoqiang, Tang; Zhidong, Wang

    2015-11-01

    RaDeCC has proved to be a precise and standard way to measure (224)Ra and (223)Ra in water samples and has successfully made radium a tracer of several environmental processes. In this paper, the relative errors of (224)Ra and (223)Ra measurement in water samples via a Radium Delayed Coincidence Count system are analyzed by performing coincidence correction calculations and error propagation. The calculated relative errors range from 2.6% to 10.6% for (224)Ra and from 9.6% to 14.2% for (223)Ra. For different radium activities, the effects of decay days and counting time on the final relative errors are evaluated, and the results show that these relative errors can be decreased by adjusting the two measurement factors. Finally, to minimize propagated errors in radium activity, a set of optimized RaDeCC measurement parameters is proposed.
