Accounting for baseline differences and measurement error in the analysis of change over time.
Braun, Julia; Held, Leonhard; Ledergerber, Bruno
2014-01-15
When change over time is compared across several groups, it is important to take baseline values into account so that the comparison is carried out under the same preconditions. Because the observed baseline measurements are distorted by measurement error, it may not be sufficient to include them as a covariate. A recently proposed solution fits a longitudinal mixed-effects model to all data, including the baseline observations, and then calculates the expected change conditional on the underlying baseline value, so that groups with the same baseline characteristics can be compared. In this article, we present an extended approach that admits a broader set of models. Specifically, any desired set of interactions between the time variable and the other covariates can be included, as can time-dependent covariates. Additionally, we extend the method to adjust for baseline measurement error in other time-varying covariates. We apply the methodology to data from the Swiss HIV Cohort Study to address the question of whether coinfection with HIV-1 and hepatitis C virus leads to a slower increase in CD4 lymphocyte counts over time after the start of antiretroviral therapy. PMID:23900718
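The core problem, regression attenuation when an error-prone observed baseline is used as a covariate, can be illustrated with a minimal numpy sketch. All numbers here are invented, and this is not the authors' mixed-effects method (which additionally models the full longitudinal trajectory); it only shows why the naive baseline adjustment falls short:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
true_base = rng.normal(0.0, 1.0, n)              # underlying baseline value
obs_base = true_base + rng.normal(0.0, 1.0, n)   # observed baseline, with error
# change over time depends on the TRUE baseline (slope -0.5)
change = -0.5 * true_base + rng.normal(0.0, 0.3, n)

def slope(x, y):
    """OLS slope of y on x."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

b_true = slope(true_base, change)   # approx -0.50: the real dependence
b_obs = slope(obs_base, change)     # attenuated toward zero, approx -0.25
print(b_true, b_obs)
```

With equal true-score and error variances, the coefficient on the observed baseline is biased toward zero by the reliability ratio 1/2, so adjusting for the observed baseline under-corrects the group comparison.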
NASA Astrophysics Data System (ADS)
Houweling, S.; Krol, M.; Bergamaschi, P.; Frankenberg, C.; Dlugokencky, E. J.; Morino, I.; Notholt, J.; Sherlock, V.; Wunch, D.; Beck, V.; Gerbig, C.; Chen, H.; Kort, E. A.; Röckmann, T.; Aben, I.
2014-04-01
This study investigates the use of total column CH4 (XCH4) retrievals from the SCIAMACHY satellite instrument for quantifying large-scale emissions of methane. A unique data set from SCIAMACHY is available spanning almost a decade of measurements, covering a period when the global CH4 growth rate showed a marked transition from stable to increasing mixing ratios. The TM5 4DVAR inverse modelling system has been used to infer CH4 emissions from a combination of satellite and surface measurements for the period 2003-2010. In contrast to earlier inverse modelling studies, the SCIAMACHY retrievals have been corrected for systematic errors using the TCCON network of ground-based Fourier transform spectrometers. The aim is to further investigate the role of bias correction of satellite data in inversions. Methods for bias correction are discussed, and the sensitivity of the optimized emissions to alternative bias correction functions is quantified. It is found that the use of SCIAMACHY retrievals in TM5 4DVAR increases the estimated inter-annual variability of large-scale fluxes by 22% compared with the use of only surface observations. The difference in global methane emissions between 2-year periods before and after July 2006 is estimated at 27-35 Tg yr-1. The use of SCIAMACHY retrievals causes a shift in the emissions from the extra-tropics to the tropics of 50 ± 25 Tg yr-1. The large uncertainty in this value arises from the uncertainty in the bias correction functions. Using measurements from the HIPPO and BARCA aircraft campaigns, we show that systematic errors in the SCIAMACHY measurements are a main factor limiting the performance of the inversions. To further constrain tropical emissions of methane using current and future satellite missions, extended validation capabilities in the tropics are of critical importance.
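The bias-correction idea, fitting a smooth function to satellite-minus-ground differences at collocated points and subtracting it, can be sketched as follows. This is a toy linear correction with invented numbers, not the actual TM5-4DVAR/TCCON procedure, whose correction functions are more involved:

```python
import numpy as np

rng = np.random.default_rng(2)
# synthetic collocations: satellite-minus-ground XCH4 differences (ppb) with a
# smooth latitude-dependent systematic error plus random retrieval noise
lat = rng.uniform(-60.0, 80.0, 300)
diff = 5.0 + 0.08 * lat + rng.normal(0.0, 3.0, 300)

# fit a low-order (here linear) bias-correction function b(lat) to the
# collocated differences, then subtract it from the satellite data
coef = np.polyfit(lat, diff, deg=1)
corrected = diff - np.polyval(coef, lat)

print(np.std(diff), np.std(corrected))  # scatter shrinks toward the noise level
```

The remaining scatter after correction reflects random retrieval noise; in the real inversion, the sensitivity of the fluxes to the chosen form of b(·) is what drives the quoted ±25 Tg yr-1 uncertainty.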
Pike, D.H.; Morrison, G.W.; Downing, D.J.
1982-04-01
It has been shown in previous work that the Kalman Filter and Linear Smoother produce optimal estimates of inventory and loss from a material balance area. That formulation, however, assumes that inventory measurement errors are uncorrelated and, in particular, does not allow for serial correlation in these errors. The purpose of this report is to extend the previous results by relaxing these assumptions to allow for correlated measurement errors. The results show how to account for correlated measurement errors in the linear system model of the Kalman Filter/Linear Smoother. An algorithm is also included for calculating the required error covariance matrices.
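A standard way to accommodate serially correlated measurement errors in a Kalman filter is state augmentation: the AR(1) noise process becomes an extra state, and the nominal measurement noise goes to (near) zero. The sketch below uses assumed parameters and a scalar random-walk "inventory", not the report's material-balance model:

```python
import numpy as np

rng = np.random.default_rng(3)
phi, q, se = 0.8, 0.1, 0.5   # AR(1) coefficient, process var, noise innovation sd
T = 400

# simulate inventory (random walk) observed with AR(1)-correlated error
x = np.cumsum(rng.normal(0.0, np.sqrt(q), T)) + 100.0
v = np.zeros(T)
for k in range(1, T):
    v[k] = phi * v[k - 1] + rng.normal(0.0, se)
z = x + v

# Kalman filter on the AUGMENTED state s = [inventory, correlated-noise state]
F = np.array([[1.0, 0.0], [0.0, phi]])
H = np.array([[1.0, 1.0]])        # measurement = inventory + noise state
Q = np.diag([q, se**2])
s = np.array([z[0], 0.0])
P = np.eye(2)
est = []
for k in range(T):
    s = F @ s                     # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + 1e-9        # residual noise is in the augmented state
    K = (P @ H.T) / S
    s = s + K[:, 0] * (z[k] - H @ s)
    P = (np.eye(2) - K @ H) @ P
    est.append(s[0])

rmse = np.sqrt(np.mean((np.array(est) - x) ** 2))
print(rmse, np.std(z - x))        # filter error vs raw measurement error
```

Because the filter models the correlation, it predicts part of each measurement error from the previous one and beats the raw measurements.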
40 CFR 96.56 - Account error.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Tracking System § 96.56 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10...
40 CFR 96.156 - Account error.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Tracking System § 96.156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within...
40 CFR 96.256 - Account error.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Tracking System § 96.256 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR SO2 Allowance Tracking System account. Within...
40 CFR 96.256 - Account error.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Tracking System § 96.256 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR SO2 Allowance Tracking System account. Within...
40 CFR 96.256 - Account error.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Tracking System § 96.256 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR SO2 Allowance Tracking System account. Within...
40 CFR 96.256 - Account error.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Tracking System § 96.256 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR SO2 Allowance Tracking System account. Within...
40 CFR 96.156 - Account error.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Tracking System § 96.156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within...
40 CFR 96.256 - Account error.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Tracking System § 96.256 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR SO2 Allowance Tracking System account. Within...
40 CFR 96.156 - Account error.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Tracking System § 96.156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within...
40 CFR 96.156 - Account error.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Tracking System § 96.156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within...
40 CFR 96.156 - Account error.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Tracking System § 96.156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within...
40 CFR 96.56 - Account error.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Tracking System § 96.56 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10...
40 CFR 96.56 - Account error.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Tracking System § 96.56 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10...
40 CFR 96.56 - Account error.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Tracking System § 96.56 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10...
40 CFR 96.56 - Account error.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Tracking System § 96.56 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10...
40 CFR 73.37 - Account error.
Code of Federal Regulations, 2010 CFR
2010-07-01
... ALLOWANCE SYSTEM Allowance Tracking System § 73.37 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Tracking System account....
40 CFR 73.37 - Account error.
Code of Federal Regulations, 2012 CFR
2012-07-01
... ALLOWANCE SYSTEM Allowance Tracking System § 73.37 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Tracking System account....
40 CFR 73.37 - Account error.
Code of Federal Regulations, 2014 CFR
2014-07-01
... ALLOWANCE SYSTEM Allowance Tracking System § 73.37 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Tracking System account....
40 CFR 73.37 - Account error.
Code of Federal Regulations, 2013 CFR
2013-07-01
... ALLOWANCE SYSTEM Allowance Tracking System § 73.37 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Tracking System account....
40 CFR 73.37 - Account error.
Code of Federal Regulations, 2011 CFR
2011-07-01
... ALLOWANCE SYSTEM Allowance Tracking System § 73.37 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Tracking System account....
40 CFR 97.56 - Account error.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Account error. 97.56 Section 97.56... Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10 business days of making...
40 CFR 97.56 - Account error.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Account error. 97.56 Section 97.56... Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10 business days of making...
40 CFR 96.356 - Account error.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Season Allowance Tracking System § 96.356 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance...
40 CFR 60.4156 - Account error.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Generating Units Hg Allowance Tracking System § 60.4156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Hg Allowance Tracking...
40 CFR 60.4156 - Account error.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Generating Units Hg Allowance Tracking System § 60.4156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Hg Allowance Tracking...
40 CFR 96.356 - Account error.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Allowance Tracking System § 96.356 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking...
40 CFR 96.356 - Account error.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Allowance Tracking System § 96.356 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking...
40 CFR 97.356 - Account error.
Code of Federal Regulations, 2012 CFR
2012-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR NOX Ozone Season Allowance Tracking System § 97.356 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking System account. Within...
40 CFR 97.356 - Account error.
Code of Federal Regulations, 2014 CFR
2014-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR NOX Ozone Season Allowance Tracking System § 97.356 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking System account. Within...
40 CFR 97.356 - Account error.
Code of Federal Regulations, 2013 CFR
2013-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR NOX Ozone Season Allowance Tracking System § 97.356 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking System account. Within...
40 CFR 97.356 - Account error.
Code of Federal Regulations, 2010 CFR
2010-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR NOX Ozone Season Allowance Tracking System § 97.356 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking System account. Within...
40 CFR 97.356 - Account error.
Code of Federal Regulations, 2011 CFR
2011-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR NOX Ozone Season Allowance Tracking System § 97.356 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking System account. Within...
40 CFR 97.156 - Account error.
Code of Federal Regulations, 2011 CFR
2011-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR NOX Allowance Tracking System § 97.156... any error in any CAIR NOX Allowance Tracking System account. Within 10 business days of making...
40 CFR 97.256 - Account error.
Code of Federal Regulations, 2011 CFR
2011-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR SO2 Allowance Tracking System § 97.256... any error in any CAIR SO2 Allowance Tracking System account. Within 10 business days of making...
40 CFR 97.156 - Account error.
Code of Federal Regulations, 2014 CFR
2014-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR NOX Allowance Tracking System § 97.156... any error in any CAIR NOX Allowance Tracking System account. Within 10 business days of making...
40 CFR 97.256 - Account error.
Code of Federal Regulations, 2013 CFR
2013-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR SO2 Allowance Tracking System § 97.256... any error in any CAIR SO2 Allowance Tracking System account. Within 10 business days of making...
40 CFR 97.256 - Account error.
Code of Federal Regulations, 2014 CFR
2014-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR SO2 Allowance Tracking System § 97.256... any error in any CAIR SO2 Allowance Tracking System account. Within 10 business days of making...
40 CFR 97.156 - Account error.
Code of Federal Regulations, 2012 CFR
2012-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR NOX Allowance Tracking System § 97.156... any error in any CAIR NOX Allowance Tracking System account. Within 10 business days of making...
40 CFR 97.156 - Account error.
Code of Federal Regulations, 2010 CFR
2010-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR NOX Allowance Tracking System § 97.156... any error in any CAIR NOX Allowance Tracking System account. Within 10 business days of making...
40 CFR 97.256 - Account error.
Code of Federal Regulations, 2010 CFR
2010-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR SO2 Allowance Tracking System § 97.256... any error in any CAIR SO2 Allowance Tracking System account. Within 10 business days of making...
40 CFR 97.256 - Account error.
Code of Federal Regulations, 2012 CFR
2012-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR SO2 Allowance Tracking System § 97.256... any error in any CAIR SO2 Allowance Tracking System account. Within 10 business days of making...
40 CFR 97.156 - Account error.
Code of Federal Regulations, 2013 CFR
2013-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR NOX Allowance Tracking System § 97.156... any error in any CAIR NOX Allowance Tracking System account. Within 10 business days of making...
Tenan, Matthew S.
2016-01-01
Indirect calorimetry and oxygen consumption (VO2) are accepted tools in human physiology research. It has been shown that indirect calorimetry systems exhibit differential measurement error, where the error of a device is systematically different depending on the volume of gas flow. Moreover, systems commonly report multiple decimal places of precision, giving the clinician a false sense of device accuracy. The purpose of this manuscript is to demonstrate the use of a novel statistical tool which models the reliability of two specific indirect calorimetry systems, Douglas bag and Parvomedics 2400 TrueOne, as univariate normal distributions and implements the distribution overlapping coefficient to determine the likelihood that two VO2 measures are the same. A command line implementation of the tool is available for the R programming language as well as a web-based graphical user interface (GUI). This tool is valuable for clinicians performing a single-subject analysis as well as researchers interested in determining if their observed differences exceed the error of the device. PMID:27242546
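The overlapping coefficient of two univariate normal distributions can be computed by numerical integration of the minimum of the two densities. The sketch below is a plain-numpy illustration of that idea; the device SD of 0.05 L/min is an invented figure, not a property of either system:

```python
import numpy as np

def npdf(x, mu, sd):
    """Normal probability density function."""
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def ovl(mu1, sd1, mu2, sd2, grid=200_000):
    """Overlapping coefficient: area under the minimum of two normal pdfs."""
    lo = min(mu1 - 6 * sd1, mu2 - 6 * sd2)
    hi = max(mu1 + 6 * sd1, mu2 + 6 * sd2)
    x = np.linspace(lo, hi, grid)
    dx = x[1] - x[0]
    return np.minimum(npdf(x, mu1, sd1), npdf(x, mu2, sd2)).sum() * dx

# two VO2 readings (L/min), each modeled as N(reading, 0.05^2)
print(round(ovl(3.50, 0.05, 3.60, 0.05), 3))  # → 0.317
```

An OVL near 1 means the two readings are statistically indistinguishable given the device's error; an OVL near 0 means the observed difference clearly exceeds the device error.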
40 CFR 97.427 - Account error.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Account error. 97.427 Section 97.427 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) FEDERAL NOX BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS TR NOX Annual Trading Program §...
40 CFR 97.427 - Account error.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Account error. 97.427 Section 97.427 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) FEDERAL NOX BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS TR NOX Annual Trading Program §...
40 CFR 97.427 - Account error.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Account error. 97.427 Section 97.427 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) FEDERAL NOX BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS TR NOX Annual Trading Program §...
40 CFR 97.527 - Account error.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Account error. 97.527 Section 97.527 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) FEDERAL NOX BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS TR NOX Ozone Season Trading Program §...
40 CFR 97.527 - Account error.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Account error. 97.527 Section 97.527 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) FEDERAL NOX BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS TR NOX Ozone Season Trading Program §...
40 CFR 97.527 - Account error.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Account error. 97.527 Section 97.527 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) FEDERAL NOX BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS TR NOX Ozone Season Trading Program §...
NASA Astrophysics Data System (ADS)
Henderson, Robert K.
1999-12-01
It is widely accepted in the electronics industry that measurement gauge error variation should be no larger than 10% of the related specification window. In a previous paper, 'What Amount of Measurement Error is Too Much?', the author used a framework from the process industries to evaluate the impact of measurement error variation in terms of both customer and supplier risk (i.e., Non-conformance and Yield Loss). Application of this framework in its simplest form suggested that in many circumstances the 10% criterion might be more stringent than is reasonably necessary. This paper reviews the framework and results of the earlier work, then examines some of the possible extensions to this framework suggested in that paper, including variance component models and sampling plans applicable in the photomask and semiconductor businesses. The potential impact of imperfect process control practices will be examined as well.
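The customer/supplier risk trade-off can be sketched as a Monte Carlo: simulate true part values and noisy gauge readings, then count false rejects (yield loss) and false accepts (non-conformance escapes). The process and gauge parameters below are illustrative, not the paper's variance-component models:

```python
import numpy as np

rng = np.random.default_rng(5)

def risks(gauge_ratio, n=1_000_000, lsl=-3.0, usl=3.0, proc_sd=1.0):
    """False-reject / false-accept rates when the gauge's 6-sigma spread is
    `gauge_ratio` of the spec window (the classic 10% criterion)."""
    gauge_sd = gauge_ratio * (usl - lsl) / 6.0
    true = rng.normal(0.0, proc_sd, n)
    meas = true + rng.normal(0.0, gauge_sd, n)
    good = (true >= lsl) & (true <= usl)
    accept = (meas >= lsl) & (meas <= usl)
    false_reject = np.mean(good & ~accept)   # supplier risk: yield loss
    false_accept = np.mean(~good & accept)   # customer risk: non-conformance
    return false_reject, false_accept

for r in (0.10, 0.30):
    print(r, risks(r))
```

With a capable process, relaxing the gauge ratio well beyond 10% raises both risks only modestly, which is the kind of evidence behind the paper's suggestion that the 10% criterion can be more stringent than necessary.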
Measuring Test Measurement Error: A General Approach
ERIC Educational Resources Information Center
Boyd, Donald; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James
2013-01-01
Test-based accountability as well as value-added assessments and much experimental and quasi-experimental research in education rely on achievement tests to measure student skills and knowledge. Yet, we know little regarding fundamental properties of these tests, an important example being the extent of measurement error and its implications for…
Compact disk error measurements
NASA Technical Reports Server (NTRS)
Howe, D.; Harriman, K.; Tehranchi, B.
1993-01-01
The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
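The first objective, byte-level error-burst and good-data-gap statistics, amounts to run-length analysis of a binary error-flag stream. A minimal sketch (the flag sequence is invented, not measured player data):

```python
import numpy as np

def burst_gap_stats(errors):
    """Lengths of consecutive error bursts and error-free gaps
    in a binary error-flag sequence (1 = byte in error)."""
    e = np.asarray(errors, dtype=int)
    # indices where the flag value changes mark run boundaries
    edges = np.flatnonzero(np.diff(e)) + 1
    runs = np.split(e, edges)
    bursts = [len(r) for r in runs if r[0] == 1]
    gaps = [len(r) for r in runs if r[0] == 0]
    return bursts, gaps

b, g = burst_gap_stats([0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0])
print(b, g)  # → [3, 1] [2, 1, 4]
```

Histograms of these burst and gap lengths are exactly the statistics a CIRC-decoder model would consume.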
Measurement error in geometric morphometrics.
Fruciano, Carmelo
2016-06-01
Geometric morphometrics, a set of methods for the statistical analysis of shape once hailed as a revolutionary advancement in the analysis of morphology, is now mature and routinely used in ecology and evolution. However, a factor often disregarded in empirical studies is the presence and extent of measurement error. This is potentially a very serious issue because random measurement error can inflate the amount of variance and, since many statistical analyses are based on the amount of "explained" relative to "residual" variance, can result in loss of statistical power. On the other hand, systematic bias can affect statistical analyses by biasing the results (i.e. variation due to bias is incorporated in the analysis and treated as biologically meaningful variation). Here, I briefly review common sources of error in geometric morphometrics. I then review the most commonly used methods to measure and account for both random and non-random measurement error, providing a worked example using a real dataset. PMID:27038025
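Random measurement error is commonly quantified by measuring each specimen repeatedly and computing repeatability from one-way ANOVA variance components. A univariate numpy sketch with invented numbers (real geometric-morphometric data are multivariate, e.g. Procrustes shape coordinates, but the logic per variable is the same):

```python
import numpy as np

rng = np.random.default_rng(7)
n_spec, n_rep = 40, 2
true = rng.normal(10.0, 1.0, n_spec)     # biological variation among specimens
# each specimen digitized n_rep times, with random digitizing error
obs = true[:, None] + rng.normal(0.0, 0.3, (n_spec, n_rep))

# one-way ANOVA variance components (balanced design)
ms_within = np.mean(np.var(obs, axis=1, ddof=1))          # error variance
ms_among = n_rep * np.var(obs.mean(axis=1), ddof=1)
s2_among = (ms_among - ms_within) / n_rep                  # among-specimen var
repeatability = s2_among / (s2_among + ms_within)
print(repeatability)   # close to 1 when error is small relative to variation
```

Low repeatability warns that "residual" variance in downstream analyses is largely measurement noise, with the loss of statistical power the abstract describes.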
Measurement Errors in Organizational Surveys.
ERIC Educational Resources Information Center
Dutka, Solomon; Frankel, Lester R.
1993-01-01
Describes three classes of measurement techniques: (1) interviewing methods; (2) record retrieval procedures; and (3) observation methods. Discusses primary reasons for measurement error. Concludes that, although measurement error can be defined and controlled for, there are other design factors that also must be considered. (CFR)
40 CFR 96.356 - Account error.
Code of Federal Regulations, 2010 CFR
2010-07-01
... TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS FOR STATE IMPLEMENTATION PLANS CAIR NOX Ozone Season... on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking...
40 CFR 96.356 - Account error.
Code of Federal Regulations, 2011 CFR
2011-07-01
... TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS FOR STATE IMPLEMENTATION PLANS CAIR NOX Ozone Season... on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking...
Evaluation of accountability measurements
Cacic, C.G.
1988-01-01
The New Brunswick Laboratory (NBL) is programmatically responsible to the U.S. Department of Energy (DOE) Office of Safeguards and Security (OSS) for providing independent review and evaluation of accountability measurement technology in DOE nuclear facilities. This function is addressed in part through the NBL Safeguards Measurement Evaluation (SME) Program. The SME Program utilizes both on-site review of measurement methods and material-specific measurement evaluation studies to provide information concerning the adequacy of subject accountability measurements. This paper reviews SME Program activities for the 1986-87 time period, with emphasis on noted improvements in measurement capabilities. Continued evolution of the SME Program to respond to changing safeguards concerns is discussed.
Measurements and material accounting
Hammond, G.A. )
1989-11-01
The DOE role for the NBL in safeguarding nuclear material into the 21st century is discussed. Development of measurement technology and reference materials supporting requirements of SDI, SIS, AVLIS, pyrochemical reprocessing, fusion, waste storage, plant modernization program, and improved tritium accounting are some of the suggested examples.
Pendulum Shifts, Context, Error, and Personal Accountability
Harold Blackman; Oren Hester
2011-09-01
This paper describes a series of tools that were developed to achieve a balance in understanding LOWs and the human component of events (including accountability) as the INL continues its shift to a learning culture in which people report, are accountable, and are interested in making a positive difference, and want to report because information is handled correctly and the result benefits both the reporting individual and the organization. We present our model for understanding these interrelationships and the initiatives that were undertaken to improve overall performance.
Human errors and measurement uncertainty
NASA Astrophysics Data System (ADS)
Kuselman, Ilya; Pennecchi, Francesca
2015-04-01
Evaluating the residual risk of human errors in a measurement and testing laboratory, remaining after the error reduction by the laboratory quality system, and quantifying the consequences of this risk for the quality of the measurement/test results are discussed based on expert judgments and Monte Carlo simulations. A procedure for evaluation of the contribution of the residual risk to the measurement uncertainty budget is proposed. Examples are provided using earlier published sets of expert judgments on human errors in pH measurement of groundwater, elemental analysis of geological samples by inductively coupled plasma mass spectrometry, and multi-residue analysis of pesticides in fruits and vegetables. The human error contribution to the measurement uncertainty budget in the examples was not negligible, yet also not dominant. This was assessed as a good risk management result.
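The Monte Carlo idea can be sketched as a mixture model: with a small residual probability, a human error shifts the result, inflating the combined standard uncertainty. All numbers below are invented for illustration, not the published expert judgments:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 1_000_000
u_analytical = 0.02   # standard uncertainty of the method itself (pH units)
p_error = 0.01        # assumed residual risk of a human error per measurement
shift = 0.10          # assumed bias introduced when such an error occurs

base = rng.normal(0.0, u_analytical, n)
occurs = rng.random(n) < p_error
results = base + occurs * rng.choice([-shift, shift], n)

u_total = np.std(results)   # ≈ sqrt(u_analytical^2 + p_error * shift^2)
print(u_analytical, u_total)
```

Here the human-error term enlarges the budget noticeably (from 0.020 to about 0.022) without dominating it, mirroring the paper's "not negligible, yet also not dominant" finding.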
Performance testing accountability measurements
Oldham, R.D.; Mitchell, W.G.; Spaletto, M.I.
1993-12-31
The New Brunswick Laboratory (NBL) provides assessment support to the DOE Operations Offices in the area of Material Control and Accountability (MC and A). During surveys of facilities, the Operations Offices have begun to request from NBL either assistance in providing materials for performance testing of accountability measurements or both materials and personnel to do performance testing. To meet these needs, NBL has developed measurement and measurement control performance test procedures and materials. The present NBL repertoire of performance tests includes the following: (1) mass measurement performance testing procedures using calibrated and traceable test weights, (2) uranium elemental concentration (assay) measurement performance tests which use ampulated solutions of normal uranyl nitrate containing approximately 7 milligrams of uranium per gram of solution, and (3) uranium isotopic measurement performance tests which use ampulated uranyl nitrate solutions with enrichments ranging from 4% to 90% U-235. The preparation, characterization, and packaging of the uranium isotopic and assay performance test materials were done in cooperation with the NBL Safeguards Measurements Evaluation Program since these materials can be used for both purposes.
Accountability Measures Report, 2007
ERIC Educational Resources Information Center
North Dakota University System, 2007
2007-01-01
This document is a tool for demonstrating that the University System is meeting the "flexibility with accountability" expectations of SB 2003 passed by the 2001 Legislative Assembly. The 2007 report reflects some of the many ways North Dakota University System (NDUS) colleges and universities are developing the human capital needed to create a…
Accountability Measures Report, 2006
ERIC Educational Resources Information Center
North Dakota University System, 2006
2006-01-01
This document is a valuable tool for demonstrating that the University System is meeting the "flexibility with accountability" expectations of SB 2003 passed by the 2001 Legislative Assembly. The 2006 report reflects some of the many ways North Dakota University System (NDUS) colleges and universities are developing the human capital needed to…
Better Stability with Measurement Errors
NASA Astrophysics Data System (ADS)
Argun, Aykut; Volpe, Giovanni
2016-06-01
Often it is desirable to stabilize a system around an optimal state. This can be effectively accomplished using feedback control, where the system deviation from the desired state is measured in order to determine the magnitude of the restoring force to be applied. Contrary to conventional wisdom, i.e. that a more precise measurement is expected to improve the system stability, here we demonstrate that a certain degree of measurement error can improve the system stability. We exemplify the implications of this finding with numerical examples drawn from various fields, such as the operation of a temperature controller, the confinement of a microscopic particle, the localization of a target by a microswimmer, and the control of a population.
Better Stability with Measurement Errors
NASA Astrophysics Data System (ADS)
Argun, Aykut; Volpe, Giovanni
2016-04-01
Often it is desirable to stabilize a system around an optimal state. This can be effectively accomplished using feedback control, where the system deviation from the desired state is measured in order to determine the magnitude of the restoring force to be applied. Contrary to conventional wisdom, i.e. that a more precise measurement is expected to improve the system stability, here we demonstrate that a certain degree of measurement error can improve the system stability. We exemplify the implications of this finding with numerical examples drawn from various fields, such as the operation of a temperature controller, the confinement of a microscopic particle, the localization of a target by a microswimmer, and the control of a population.
Accounting for Errors in Model Analysis Theory: A Numerical Approach
NASA Astrophysics Data System (ADS)
Sommer, Steven R.; Lindell, Rebecca S.
2004-09-01
By studying the patterns of a group of individuals' responses to a series of multiple-choice questions, researchers can utilize Model Analysis Theory to create a probability distribution of mental models for a student population. The eigenanalysis of this distribution yields information about what mental models the students possess, as well as how consistently they utilize said mental models. Although the theory considers the probabilistic distribution to be fundamental, there exist opportunities for random errors to occur. In this paper we will discuss a numerical approach for mathematically accounting for these random errors. As an example of this methodology, analysis of data obtained from the Lunar Phases Concept Inventory will be presented. Limitations and applicability of this numerical approach will be discussed.
Impact of Measurement Error on Synchrophasor Applications
Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.; Zhao, Jiecheng; Tan, Jin; Wu, Ling; Zhan, Lingwei
2015-07-01
Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.
Rapid mapping of volumetric machine errors using distance measurements
Krulewich, D.A.
1998-04-01
This paper describes a relatively inexpensive, fast, and easy-to-execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between volumetric error and the current state of the machine; (2) acquiring error data based on distance measurements throughout the work volume; and (3) fitting the error model using the nonlinear equation for the distance. The error model is formulated from the kinematic relationship among the six degrees of freedom of error on each moving axis. Each parametric error, expressed as a function of position, is combined to predict the error between the functional point and the workpiece, also as a function of position. A series of distances between several fixed base locations and various functional points in the work volume is measured using a Laser Ball Bar (LBB). Each measured distance is a nonlinear function of the commanded location of the machine, the machine error, and the base locations. Using the error model, the nonlinear equation is solved, producing a fit for the error model. Also note that, given approximate distances between each pair of base locations, the exact base locations in the machine coordinate system are determined during the nonlinear fitting procedure. Furthermore, with the use of more than three base locations, bias error in the measuring instrument can be removed. The volumetric errors of a three-axis commercial machining center have been mapped using this procedure. In this study, only errors associated with the nominal position of the machine were considered. Other errors, such as thermally induced and load-induced errors, were not considered, although the mathematical model has the ability to account for them. Due to the proprietary nature of the projects we are
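As a toy illustration of step (3), the sketch below fits an error-model parameter to base-to-point distance measurements by least squares. The one-parameter scale-error model, the simulated commanded positions, and the noise-free "measurements" are all assumptions for illustration, not the kinematic model or LBB data of the paper.

```python
import numpy as np

# Hypothetical one-parameter error model: actual = commanded * (1 + s),
# i.e. a pure scale error; s stands in for the paper's error-model
# parameters and is recovered from base-to-point distance measurements.
def predicted_distance(base, points, s):
    return np.linalg.norm(points * (1.0 + s) - base, axis=1)

rng = np.random.default_rng(0)
true_s = 1e-4                                        # 100 ppm scale error
base = np.zeros(3)                                   # one fixed base location
points = rng.uniform(100.0, 500.0, size=(20, 3))     # commanded positions (mm)
measured = predicted_distance(base, points, true_s)  # simulated distances

# Least-squares fit of s via a one-dimensional grid search over candidates
candidates = np.linspace(-5e-4, 5e-4, 10001)
sse = [np.sum((measured - predicted_distance(base, points, s)) ** 2)
       for s in candidates]
s_hat = candidates[np.argmin(sse)]
print(f"recovered scale error: {s_hat:.1e}")  # matches the simulated 1e-4
```

In practice the fit is multi-parameter and nonlinear, but the structure is the same: predicted distances as a function of the error parameters, minimized against the measured distances.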
Nozari, Nazbanou; Dell, Gary S.; Schwartz, Myrna F.
2011-01-01
Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the double dissociation between comprehension and error-detection ability observed in aphasic patients. We propose a new theory of speech-error detection which is instead based on the production process itself. The theory borrows from studies of forced-choice-response tasks the notion that error detection is accomplished by monitoring response conflict via a frontal brain structure, such as the anterior cingulate cortex. We adapt this idea to the two-step model of word production, and test the model-derived predictions on a sample of aphasic patients. Our results show a strong correlation between patients’ error-detection ability and the model’s characterization of their production skills, and no significant correlation between error detection and comprehension measures, thus supporting a production-based monitor, generally, and the implemented conflict-based monitor in particular. The successful application of the conflict-based theory to error detection in linguistic as well as non-linguistic domains points to a domain-general monitoring system. PMID:21652015
Conditional Standard Error of Measurement in Prediction.
ERIC Educational Resources Information Center
Woodruff, David
1990-01-01
A method of estimating conditional standard error of measurement at specific score/ability levels is described that avoids theoretical problems identified for previous methods. The method focuses on variance of observed scores conditional on a fixed value of an observed parallel measurement, decomposing these variances into true and error parts.…
Minimizing noise-temperature measurement errors
NASA Technical Reports Server (NTRS)
Stelzried, C. T.
1992-01-01
An analysis of noise-temperature measurement errors of low-noise amplifiers was performed. Results of this analysis can be used to optimize measurement schemes for minimum errors. For the cases evaluated, the effective noise temperature (Te) of a Ka-band maser can be measured most accurately by switching between an ambient and a 2-K cooled load without an isolation attenuator. A measurement accuracy of 0.3 K was obtained for this example.
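The ambient/cold-load switching scheme described above is essentially a Y-factor measurement. A minimal sketch, with illustrative numbers and the simplifying assumption that detected power is proportional to system temperature (Te plus load temperature):

```python
# Y-factor estimate of effective noise temperature Te from power readings
# taken on an ambient (hot) load and a cryogenic (cold) load.
def noise_temperature(p_hot, p_cold, t_hot=295.0, t_cold=2.0):
    y = p_hot / p_cold                      # the Y-factor
    return (t_hot - y * t_cold) / (y - 1.0)

te_true = 4.0                    # assumed maser noise temperature, K
p_hot = te_true + 295.0          # power proportional to Te + T_load
p_cold = te_true + 2.0           # (arbitrary units; the ratio is what matters)
print(noise_temperature(p_hot, p_cold))  # recovers 4.0 K
```

The error analysis in the report concerns how uncertainties in the load temperatures and power ratio propagate into Te; the formula above is the starting point for that propagation.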
Performance measurement: the new accountability.
Martin, L L; Kettner, P M
1997-01-01
Over the years, "accountability" in the human services has focused upon issues such as the legal framework, organizational management, financial responsibility, political concerns, and client inputs and expectations. Within the past decade, the meaning of "accountability" has been extended to the more dynamic organizational functions of "efficiency" and "effectiveness." Efficiency and effectiveness increasingly must be put to the tests of performance measurement and outcome evaluation. Forces outside the social work profession, including, among others, federal expectations and initiatives and the increased implementation of the concept of managed care, will ensure that efficiency and effectiveness will be central and highlighted concerns far into the future. This "new accountability" is demanded by the stakeholders in the nonprofit sector and by federal requirements built into the planning, funding, and implementation processes for nonprofits and for-profits alike. PMID:10166757
Read, Randy J; McCoy, Airlie J
2016-03-01
The crystallographic diffraction experiment measures Bragg intensities; crystallographic electron-density maps and other crystallographic calculations in phasing require structure-factor amplitudes. If data were measured with no errors, the structure-factor amplitudes would be trivially proportional to the square roots of the intensities. When the experimental errors are large, and especially when random errors yield negative net intensities, the conversion of intensities and their error estimates into amplitudes and associated error estimates becomes nontrivial. Although this problem has been addressed intermittently in the history of crystallographic phasing, current approaches to accounting for experimental errors in macromolecular crystallography have numerous significant defects. These have been addressed with the formulation of LLGI, a log-likelihood-gain function in terms of the Bragg intensities and their associated experimental error estimates. LLGI has the correct asymptotic behaviour for data with large experimental error, appropriately downweighting these reflections without introducing bias. LLGI abrogates the need for the conversion of intensity data to amplitudes, which is usually performed with the French and Wilson method [French & Wilson (1978), Acta Cryst. A34, 517-525], wherever likelihood target functions are required. It has general applicability for a wide variety of algorithms in macromolecular crystallography, including scaling, characterizing anisotropy and translational noncrystallographic symmetry, detecting outliers, experimental phasing, molecular replacement and refinement. Because it is impossible to reliably recover the original intensity data from amplitudes, it is suggested that crystallographers should always deposit the intensity data in the Protein Data Bank. PMID:26960124
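To see why the conversion is nontrivial, the sketch below contrasts the naive square-root conversion with a posterior-mean amplitude under a flat prior on F >= 0. This only illustrates the negative-intensity problem; it is neither the French & Wilson method (which uses a Wilson prior) nor the LLGI function.

```python
import numpy as np

# Model I ~ N(F^2, sigma_I^2). The naive F = sqrt(I) is undefined for
# negative net intensities, while a posterior mean over F >= 0 is always
# finite (flat prior on F, for illustration only).
def amplitude_posterior_mean(i_obs, sigma_i, f_max=10.0, n=20001):
    f = np.linspace(0.0, f_max, n)
    lik = np.exp(-0.5 * ((i_obs - f ** 2) / sigma_i) ** 2)
    return float(np.sum(f * lik) / np.sum(lik))

print(amplitude_posterior_mean(4.0, 0.5))   # close to sqrt(4) = 2
print(amplitude_posterior_mean(-1.0, 2.0))  # finite and positive
```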
Read, Randy J.; McCoy, Airlie J.
2016-01-01
The crystallographic diffraction experiment measures Bragg intensities; crystallographic electron-density maps and other crystallographic calculations in phasing require structure-factor amplitudes. If data were measured with no errors, the structure-factor amplitudes would be trivially proportional to the square roots of the intensities. When the experimental errors are large, and especially when random errors yield negative net intensities, the conversion of intensities and their error estimates into amplitudes and associated error estimates becomes nontrivial. Although this problem has been addressed intermittently in the history of crystallographic phasing, current approaches to accounting for experimental errors in macromolecular crystallography have numerous significant defects. These have been addressed with the formulation of LLGI, a log-likelihood-gain function in terms of the Bragg intensities and their associated experimental error estimates. LLGI has the correct asymptotic behaviour for data with large experimental error, appropriately downweighting these reflections without introducing bias. LLGI abrogates the need for the conversion of intensity data to amplitudes, which is usually performed with the French and Wilson method [French & Wilson (1978), Acta Cryst. A34, 517–525], wherever likelihood target functions are required. It has general applicability for a wide variety of algorithms in macromolecular crystallography, including scaling, characterizing anisotropy and translational noncrystallographic symmetry, detecting outliers, experimental phasing, molecular replacement and refinement. Because it is impossible to reliably recover the original intensity data from amplitudes, it is suggested that crystallographers should always deposit the intensity data in the Protein Data Bank. PMID:26960124
Protecting weak measurements against systematic errors
NASA Astrophysics Data System (ADS)
Pang, Shengshi; Alonso, Jose Raul Gonzalez; Brun, Todd A.; Jordan, Andrew N.
2016-07-01
In this work, we consider the systematic error of quantum metrology by weak measurements under decoherence. We derive the systematic error of maximum likelihood estimation in general to the first-order approximation of a small deviation in the probability distribution and study the robustness of standard weak measurement and postselected weak measurements against systematic errors. We show that, with a large weak value, the systematic error of a postselected weak measurement when the probe undergoes decoherence can be significantly lower than that of a standard weak measurement. This indicates another advantage of weak-value amplification in improving the performance of parameter estimation. We illustrate the results by an exact numerical simulation of decoherence arising from a bosonic mode and compare it to the first-order analytical result we obtain.
Measuring Cyclic Error in Laser Heterodyne Interferometers
NASA Technical Reports Server (NTRS)
Ryan, Daniel; Abramovici, Alexander; Zhao, Feng; Dekens, Frank; An, Xin; Azizi, Alireza; Chapsky, Jacob; Halverson, Peter
2010-01-01
An improved method and apparatus have been devised for measuring cyclic errors in the readouts of laser heterodyne interferometers that are configured and operated as displacement gauges. The cyclic errors arise as a consequence of mixing of spurious optical and electrical signals in beam launchers that are subsystems of such interferometers. The conventional approach to measurement of cyclic error involves phase measurements and yields values precise to within about 10 pm over air optical paths at laser wavelengths in the visible and near infrared. The present approach, which involves amplitude measurements instead of phase measurements, yields values precise to about 0.1 pm, about 100 times the precision of the conventional approach. In a displacement gauge of the type of interest here, the laser heterodyne interferometer is used to measure any change in distance along an optical axis between two corner-cube retroreflectors. One of the corner-cube retroreflectors is mounted on a piezoelectric transducer, which is used to introduce a low-frequency periodic displacement that can be measured by the gauges. The transducer is excited at a frequency of 9 Hz by a triangular waveform to generate a 9-Hz triangular-wave displacement having an amplitude of 25 microns. The displacement gives rise to both amplitude and phase modulation of the heterodyne signals in the gauges. The modulation includes cyclic error components, and the magnitude of the cyclic-error component of the phase modulation is what one needs to measure in order to determine the magnitude of the cyclic displacement error. The precision attainable in the conventional (phase measurement) approach to measuring cyclic error is limited because the phase measurements are af-
Gear Transmission Error Measurement System Made Operational
NASA Technical Reports Server (NTRS)
Oswald, Fred B.
2002-01-01
A system directly measuring the transmission error between meshing spur or helical gears was installed at the NASA Glenn Research Center and made operational in August 2001. This system employs light beams directed by lenses and prisms through gratings mounted on the two gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. The device is capable of resolution better than 0.1 µm (one thousandth the thickness of a human hair). The measured transmission error can be displayed in a "map" that shows how the transmission error varies with the gear rotation, or it can be converted to spectra to show the components at the meshing frequencies. Accurate transmission error data will help researchers better understand the mechanisms that cause gear noise and vibration. The Design Unit at the University of Newcastle in England specifically designed the new system for NASA. It is the only device in the United States that can measure dynamic transmission error at high rotational speeds. The new system will be used to develop new techniques to reduce dynamic transmission error along with the resulting noise and vibration of aeronautical transmissions.
Reducing Measurement Error in Student Achievement Estimation
ERIC Educational Resources Information Center
Battauz, Michela; Bellio, Ruggero; Gori, Enrico
2008-01-01
The achievement level is a variable measured with error that can be estimated by means of the Rasch model. Teacher grades also measure the achievement level, but they are expressed on a different scale. This paper proposes a method for combining these two scores to obtain a synthetic measure of the achievement level based on the theory developed…
Measurement error analysis of taxi meter
NASA Astrophysics Data System (ADS)
He, Hong; Li, Dan; Li, Hang; Zhang, Da-Jian; Hou, Ming-Feng; Zhang, Shi-pu
2011-12-01
Error testing of the taximeter covers two aspects: (1) testing the time error of the taximeter, and (2) testing the distance (usage) error of the machine. The paper first gives the working principle of the meter and the principle of the error verification device. Based on JJG 517-2009, "Taximeter Verification Regulation", the paper focuses on analyzing the machine error and test error of the taxi meter, and the detection methods for time error and distance error are discussed as well. Under the same conditions, Type A standard uncertainty components are evaluated; under different conditions, Type B standard uncertainty components are also evaluated, with measurements repeated. Comparison and analysis of the results show that the meter complies with JJG 517-2009, "Taximeter Verification Regulation", thereby largely improving accuracy and efficiency. In practice, the meter not only makes up for limited accuracy but also ensures that the transaction between drivers and passengers is fair; in this way it enriches the value of the taxi as a mode of transportation.
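A standard uncertainty component evaluated from repeated observations (GUM Type A) can be sketched as follows; the readings are made-up taximeter time errors, not data from the verification described above.

```python
import math
import statistics

# Type A standard uncertainty: sample standard deviation of repeated
# readings divided by sqrt(n); the values below are illustrative.
readings = [0.12, 0.15, 0.11, 0.14, 0.13, 0.12, 0.16, 0.13]  # seconds
mean_error = statistics.mean(readings)
u_a = statistics.stdev(readings) / math.sqrt(len(readings))
print(f"mean time error {mean_error:.4f} s, u_A = {u_a:.4f} s")
```

Type B components, by contrast, are evaluated from other information (instrument specifications, regulation limits) rather than from the dispersion of repeated readings.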
38 CFR 2.7 - Delegation of authority to provide relief on account of administrative error.
Code of Federal Regulations, 2010 CFR
2010-07-01
... to provide relief on account of administrative error. 2.7 Section 2.7 Pensions, Bonuses, and Veterans... relief on account of administrative error. (a) Section 503(a) of title 38 U.S.C., provides that if the... by reason of administrative error on the part of the Federal Government or any of its employees,...
ERIC Educational Resources Information Center
Nozari, Nazbanou; Dell, Gary S.; Schwartz, Myrna F.
2011-01-01
Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the…
Technical approaches for measurement of human errors
NASA Technical Reports Server (NTRS)
Clement, W. F.; Heffley, R. K.; Jewell, W. F.; Mcruer, D. T.
1980-01-01
Human error is a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents. The technical details of a variety of proven approaches for the measurement of human errors in the context of the national airspace system are presented. Unobtrusive measurements suitable for cockpit operations and procedures in part- or full-mission simulation are emphasized. Procedure-centered, system performance-centered, and human operator-centered measurements are discussed as they apply to the manual control, communication, supervisory, and monitoring tasks that are relevant to aviation operations.
Neutron multiplication error in TRU waste measurements
Veilleux, John; Stanfield, Sean B; Wachter, Joe; Ceo, Bob
2009-01-01
Total Measurement Uncertainty (TMU) in neutron assays of transuranic waste (TRU) is comprised of several components including counting statistics, matrix and source distribution, calibration inaccuracy, background effects, and neutron multiplication error. While a minor component for low plutonium masses, neutron multiplication error is often the major contributor to the TMU for items containing more than 140 g of weapons-grade plutonium. Neutron multiplication arises when neutrons from spontaneous fission and other nuclear events induce fissions in other fissile isotopes in the waste, thereby multiplying the overall coincidence neutron response in passive neutron measurements. Since passive neutron counters cannot differentiate between spontaneous and induced fission neutrons, multiplication can lead to positive bias in the measurements. Although neutron multiplication can only result in a positive bias, it has, for the purpose of mathematical simplicity, generally been treated as an error that can lead to either a positive or negative result in the TMU. While the factors that contribute to neutron multiplication include the total mass of fissile nuclides, the presence of moderating material in the matrix, the concentration and geometry of the fissile sources, and other factors, measurement uncertainty is generally determined as a function of the fissile mass in most TMU software calculations because this is the only quantity determined by the passive neutron measurement. Neutron multiplication error has a particularly pernicious consequence for TRU waste analysis because the measured Fissile Gram Equivalent (FGE) plus twice the TMU error must be less than 200 for TRU waste packaged in 55-gal drums and less than 325 for boxed waste. For this reason, large errors due to neutron multiplication can lead to increased rejections of TRU waste containers. This report will attempt to better define the error term due to neutron multiplication and arrive at values that are
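The acceptance criterion quoted above reduces to a one-line check; the function and variable names are mine, and the limits are the 200 g and 325 g fissile gram equivalent limits stated in the abstract.

```python
# A container passes if the measured FGE plus twice the TMU stays below the
# limit: 200 g FGE for 55-gal drums, 325 g for boxed TRU waste.
def tru_waste_accepted(fge_grams, tmu_grams, limit_grams=200.0):
    return fge_grams + 2.0 * tmu_grams < limit_grams

print(tru_waste_accepted(150.0, 20.0))  # True:  150 + 40 = 190 < 200
print(tru_waste_accepted(150.0, 30.0))  # False: 150 + 60 = 210 >= 200
```

The example makes the report's point concrete: at a fixed measured FGE, shrinking the multiplication contribution to the TMU is what moves a borderline drum from rejection to acceptance.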
Elliott, Michael R; Margulies, Susan S; Maltese, Matthew R; Arbogast, Kristy B
2015-09-18
There has been recent dramatic increase in the use of sensors affixed to the heads or helmets of athletes to measure the biomechanics of head impacts that lead to concussion. The relationship between injury and linear or rotational head acceleration measured by such sensors can be quantified with an injury risk curve. The utility of the injury risk curve relies on the accuracy of both the clinical diagnosis and the biomechanical measure. The focus of our analysis was to demonstrate the influence of three sources of error on the shape and interpretation of concussion injury risk curves: sampling variability associated with a rare event, concussion under-reporting, and sensor measurement error. We utilized Bayesian statistical methods to generate synthetic data from previously published concussion injury risk curves developed using data from helmet-based sensors on collegiate football players and assessed the effect of the three sources of error on the risk relationship. Accounting for sampling variability adds uncertainty or width to the injury risk curve. Assuming a variety of rates of unreported concussions in the non-concussed group, we found that accounting for under-reporting lowers the rotational acceleration required for a given concussion risk. Lastly, after accounting for sensor error, we find strengthened relationships between rotational acceleration and injury risk, further lowering the magnitude of rotational acceleration needed for a given risk of concussion. As more accurate sensors are designed and more sensitive and specific clinical diagnostic tools are introduced, our analysis provides guidance for the future development of comprehensive concussion risk curves. PMID:26296855
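The direction of the under-reporting effect can be illustrated with a logistic risk curve. The coefficients below are invented for illustration and are not the published collegiate-football fits; the point is only that shifting the intercept upward (more true positives among the "non-concussed") lowers the acceleration associated with a given risk.

```python
import math

# Logistic injury risk P(a) = 1 / (1 + exp(-(b0 + b1 * a))), a in rad/s^2.
def accel_for_risk(p, b0, b1):
    # invert the logistic curve for the acceleration at risk level p
    return (math.log(p / (1.0 - p)) - b0) / b1

b0, b1 = -6.0, 0.0012   # hypothetical nominal fit
b0_adj = -5.0           # hypothetical refit assuming unreported concussions
a_nominal = accel_for_risk(0.5, b0, b1)
a_adjusted = accel_for_risk(0.5, b0_adj, b1)
print(a_nominal, a_adjusted)  # the adjusted curve reaches 50% risk sooner
```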
Measurement System Characterization in the Presence of Measurement Errors
NASA Technical Reports Server (NTRS)
Commo, Sean A.
2012-01-01
In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.
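A classical method in the same spirit is Deming regression, which fits an errors-in-variables line using exactly such a variance ratio. The sketch below is that classical technique, not the modified least squares method proposed here; delta is the ratio of response-error variance to factor measurement-error variance, and the data are synthetic.

```python
import numpy as np

# Deming regression: slope/intercept that account for error in the factor x,
# parameterized by delta = var(response error) / var(measurement error).
def deming(x, y, delta=1.0):
    mx, my = x.mean(), y.mean()
    sxx = np.mean((x - mx) ** 2)
    syy = np.mean((y - my) ** 2)
    sxy = np.mean((x - mx) * (y - my))
    slope = ((syy - delta * sxx
              + np.sqrt((syy - delta * sxx) ** 2 + 4.0 * delta * sxy ** 2))
             / (2.0 * sxy))
    return my - slope * mx, slope          # intercept, slope

rng = np.random.default_rng(1)
t = rng.uniform(0.0, 10.0, 500)                   # true factor values
x = t + rng.normal(0.0, 0.5, 500)                 # factor observed with error
y = 2.0 * t + 1.0 + rng.normal(0.0, 0.5, 500)     # response
b0, b1 = deming(x, y, delta=1.0)
print(b0, b1)  # near (1.0, 2.0); plain least squares would bias the slope low
```

Ordinary least squares on the same data attenuates the slope toward zero because it ignores the error in x; incorporating the variance ratio removes that bias.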
Multiple Indicators, Multiple Causes Measurement Error Models
Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.; Carroll, Raymond J.
2014-01-01
Multiple indicators, multiple causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this paper are: (1) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model; (2) to develop likelihood-based estimation methods for the MIMIC ME model; (3) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. As a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure. PMID:24962535
Multiple indicators, multiple causes measurement error models.
Tekwe, Carmen D; Carter, Randy L; Cullings, Harry M; Carroll, Raymond J
2014-11-10
Multiple indicators, multiple causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this paper are as follows: (i) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model; (ii) to develop likelihood-based estimation methods for the MIMIC ME model; and (iii) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. As a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure. PMID:24962535
New Gear Transmission Error Measurement System Designed
NASA Technical Reports Server (NTRS)
Oswald, Fred B.
2001-01-01
The prime source of vibration and noise in a gear system is the transmission error between the meshing gears. Transmission error is caused by manufacturing inaccuracy, mounting errors, and elastic deflections under load. Gear designers often attempt to compensate for transmission error by modifying gear teeth. This is done traditionally by a rough "rule of thumb" or more recently under the guidance of an analytical code. In order for a designer to have confidence in a code, the code must be validated through experiment. NASA Glenn Research Center contracted with the Design Unit of the University of Newcastle in England for a system to measure the transmission error of spur and helical test gears in the NASA Gear Noise Rig. The new system measures transmission error optically by means of light beams directed by lenses and prisms through gratings mounted on the gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. A photodetector circuit converts the light to an analog electrical signal. To increase accuracy and reduce "noise" due to transverse vibration, there are parallel light paths at the top and bottom of the gears. The two signals are subtracted via differential amplifiers in the electronics package. The output of the system is 40 mV/µm, giving a resolution in the time domain of better than 0.1 µm, and discrimination in the frequency domain of better than 0.01 µm. The new system will be used to validate gear analytical codes and to investigate mechanisms that produce vibration and noise in parallel axis gears.
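As a sketch of the "convert to spectra" step, the code below extracts the tooth-mesh component from a transmission-error record. The sample rate, mesh frequency, and amplitudes are assumed for illustration, not taken from the rig.

```python
import numpy as np

fs = 10_000.0            # sample rate, Hz (assumed)
mesh_freq = 500.0        # tooth-mesh frequency, Hz (assumed)
t = np.arange(0.0, 1.0, 1.0 / fs)
# synthetic transmission error (micrometers): mesh fundamental + 2nd harmonic
te = (0.8 * np.sin(2 * np.pi * mesh_freq * t)
      + 0.2 * np.sin(2 * np.pi * 2 * mesh_freq * t))

spec = np.abs(np.fft.rfft(te)) * 2.0 / te.size   # single-sided amplitude
freqs = np.fft.rfftfreq(te.size, 1.0 / fs)
peak = freqs[np.argmax(spec)]
print(peak)  # 500.0: the dominant component sits at the meshing frequency
```

In a real record the interesting quantities are exactly these mesh-frequency components and their harmonics, since they drive the radiated gear noise.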
Algorithmic Error Correction of Impedance Measuring Sensors
Starostenko, Oleg; Alarcon-Aquino, Vicente; Hernandez, Wilmar; Sergiyenko, Oleg; Tyrsa, Vira
2009-01-01
This paper describes novel design concepts and some advanced techniques proposed for increasing the accuracy of low-cost impedance measuring devices without reduction of operational speed. The proposed structural method for algorithmic error correction and the iterating correction method provide linearization of the transfer functions of the measuring sensor and signal-conditioning converter, which contribute the principal additive and relative measurement errors. Some measuring systems have been implemented in order to estimate in practice the performance of the proposed methods. In particular, a measuring system for analysis of C-V and G-V characteristics has been designed and constructed. It has been tested during technological process control of charge-coupled device (CCD) manufacturing. The obtained results are discussed in order to define a reasonable range of application of the methods, their utility, and their performance. PMID:22303177
Sources of Error in UV Radiation Measurements
Larason, Thomas C.; Cromer, Christopher L.
2001-01-01
Increasing commercial, scientific, and technical applications involving ultraviolet (UV) radiation have led to the demand for improved understanding of the performance of instrumentation used to measure this radiation. There has been an effort by manufacturers of UV measuring devices (meters) to produce simple, optically filtered sensor systems to accomplish the varied measurement needs. We address common sources of measurement errors using these meters. The uncertainty in the calibration of the instrument depends on the response of the UV meter to the spectrum of the sources used and its similarity to the spectrum of the quantity to be measured. In addition, large errors can occur due to out-of-band, non-linear, and non-ideal geometric or spatial response of the UV meters. Finally, in many applications, how well the response of the UV meter approximates the presumed action spectrum needs to be understood for optimal use of the meters.
Improving Localization Accuracy: Successive Measurements Error Modeling
Abu Ali, Najah; Abu-Elkheir, Mervat
2015-01-01
Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling of the positioning error. We use the Yule–Walker equations to determine the degree of correlation between a vehicle’s future position and its past positions, and then propose a p-order Gauss–Markov model to predict the future position of a vehicle from its past p positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We demonstrate the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can extend up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss–Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle’s future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter. PMID:26140345
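The first-order case described above can be sketched in a few lines: the lag-1 autocorrelation is the first Yule–Walker estimate, and the Gauss–Markov prediction regresses the next value toward the mean. The AR(1) data below are synthetic, not the paper's mobility traces.

```python
import random
import statistics

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation: the first Yule-Walker estimate,
    i.e. the coefficient of a first-order Gauss-Markov model."""
    m = statistics.fmean(xs)
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(len(xs) - 1))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

def predict_next(xs):
    """Predict the next position coordinate from the current one using
    the fitted first-order model (the paper generalizes to order p)."""
    rho = lag1_autocorr(xs)
    m = statistics.fmean(xs)
    return m + rho * (xs[-1] - m)

# Synthetic correlated positioning errors: AR(1) with rho = 0.9
random.seed(1)
xs, x = [], 0.0
for _ in range(2000):
    x = 0.9 * x + random.gauss(0.0, 1.0)
    xs.append(x)
rho_hat = lag1_autocorr(xs)
```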
Multiscale measurement error models for aggregated small area health data.
Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin
2016-08-01
Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smooths the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via a shared random effect have been proposed. One of the main goals in analyzing aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we apply measurement error models within the multiscale framework. To assess the benefit of using multiscale measurement error models, we compare the performance of multiscale models with and without measurement error in both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates. PMID:27566773
Risk, Error and Accountability: Improving the Practice of School Leaders
ERIC Educational Resources Information Center
Perry, Lee-Anne
2006-01-01
This paper seeks to explore the notion of risk as an organisational logic within schools, the impact of contemporary accountability regimes on managing risk and then, in turn, to posit a systems-based process of risk management underpinned by a positive logic of risk. It moves through a number of steps beginning with the development of an…
Relationships of Measurement Error and Prediction Error in Observed-Score Regression
ERIC Educational Resources Information Center
Moses, Tim
2012-01-01
The focus of this paper is assessing the impact of measurement errors on the prediction error of an observed-score regression. Measures are presented and described for decomposing the linear regression's prediction error variance into parts attributable to the true score variance and the error variances of the dependent variable and the predictor…
Generalized Geometric Error Correction in Coordinate Measurement
NASA Astrophysics Data System (ADS)
Hermann, Gyula
Software compensation of geometric errors in coordinate measurement is a topic of great interest because it reduces manufacturing costs. The paper summarizes the results and achievements of earlier works on the subject. In order to improve these results, a method is adapted to capture the new coordinate frames simultaneously, so that exact transformation values can be used at discrete points of the measuring volume. The interpolation techniques published in the literature have the drawback that they cannot maintain the orthogonality of the rotational part of the transformation matrices. The paper presents a technique, based on quaternions, which avoids this problem and leads to better results.
Non-Gaussian error distribution of 7Li abundance measurements
NASA Astrophysics Data System (ADS)
Crandall, Sara; Houston, Stephen; Ratra, Bharat
2015-07-01
We construct the error distribution of 7Li abundance measurements for 66 observations (with error bars) used by Spite et al. (2012) that give A(Li) = 2.21 ± 0.065 (median and 1σ symmetrized error). This error distribution is somewhat non-Gaussian, with larger probability in the tails than is predicted by a Gaussian distribution. The 95.4% confidence limits are 3.0σ in terms of the quoted errors. We fit the data to four commonly used distributions: Gaussian, Cauchy, Student’s t and double exponential with the center of the distribution found with both weighted mean and median statistics. It is reasonably well described by a widened n = 8 Student’s t distribution. Assuming Gaussianity, the observed A(Li) is 6.5σ away from that expected from standard Big Bang Nucleosynthesis (BBN) given the Planck observations. Accounting for the non-Gaussianity of the observed A(Li) error distribution reduces the discrepancy to 4.9σ, which is still significant.
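The tail-heaviness statistic quoted above (the 95.4% confidence limits in units of the quoted errors) can be illustrated with a simple empirical quantile of the absolute pulls; for genuinely Gaussian errors it should be close to 2.0, whereas the abstract reports 3.0. The data below are synthetic Gaussian pulls, used only as a sanity check.

```python
import random

def symmetrized_quantile(zs, p=0.954):
    """Empirical p-quantile of |z|, where z = (x - center)/sigma.
    For Gaussian errors the 95.4% quantile is 2.0; the heavier tails
    reported in the abstract push it up to about 3.0."""
    azs = sorted(abs(z) for z in zs)
    return azs[int(p * len(azs))]

# Sanity check on synthetic, genuinely Gaussian pulls
random.seed(0)
pulls = [random.gauss(0.0, 1.0) for _ in range(20000)]
q = symmetrized_quantile(pulls)
```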
50 CFR 648.323 - Accountability measures.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Accountability measures. 648.323 Section... ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE NORTHEASTERN UNITED STATES Management Measures for the NE Skate Complex Fisheries § 648.323 Accountability measures. (a) TAL overages. If the skate wing...
50 CFR 648.323 - Accountability measures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 50 Wildlife and Fisheries 8 2010-10-01 2010-10-01 false Accountability measures. 648.323 Section... ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE NORTHEASTERN UNITED STATES Management Measures for the NE Skate Complex Fisheries § 648.323 Accountability measures. (a) TAL overages. If the skate wing...
50 CFR 648.323 - Accountability measures.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Accountability measures. 648.323 Section... ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE NORTHEASTERN UNITED STATES Management Measures for the NE Skate Complex Fisheries § 648.323 Accountability measures. (a) TAL overages. If the skate wing...
50 CFR 648.323 - Accountability measures.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Accountability measures. 648.323 Section... ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE NORTHEASTERN UNITED STATES Management Measures for the NE Skate Complex Fisheries § 648.323 Accountability measures. (a) TAL overages. If the skate wing...
50 CFR 648.323 - Accountability measures.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 50 Wildlife and Fisheries 10 2011-10-01 2011-10-01 false Accountability measures. 648.323 Section... ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE NORTHEASTERN UNITED STATES Management Measures for the NE Skate Complex Fisheries § 648.323 Accountability measures. (a) TAL overages. If the skate wing...
Bayesian conformity assessment in presence of systematic measurement errors
NASA Astrophysics Data System (ADS)
Carobbi, Carlo; Pennecchi, Francesca
2016-04-01
Conformity assessment of the distribution of the values of a quantity is investigated by using a Bayesian approach. The effect of systematic, non-negligible measurement errors is taken into account. The analysis is general, in the sense that the probability distribution of the quantity can be of any kind, that is even different from the ubiquitous normal distribution, and the measurement model function, linking the measurand with the observable and non-observable influence quantities, can be non-linear. Further, any joint probability density function can be used to model the available knowledge about the systematic errors. It is demonstrated that the result of the Bayesian analysis here developed reduces to the standard result (obtained through a frequentistic approach) when the systematic measurement errors are negligible. A consolidated frequentistic extension of such standard result, aimed at including the effect of a systematic measurement error, is directly compared with the Bayesian result, whose superiority is demonstrated. Application of the results here obtained to the derivation of the operating characteristic curves used for sampling plans for inspection by variables is also introduced.
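A minimal Monte Carlo sketch of the idea: the conformity probability is averaged over a prior on the systematic error. This uses a normal model throughout, which is a deliberate simplification of the paper's general (non-normal, non-linear) Bayesian treatment; all numbers are hypothetical.

```python
import random

def conformity_prob(y_obs, u_random, b_mean, u_sys, lower, upper, n=100000):
    """Monte Carlo probability that the true value of the quantity lies
    inside the specification interval, given an observation with random
    standard uncertainty u_random and a systematic error modelled as
    Normal(b_mean, u_sys). A simplified normal-model stand-in for the
    paper's general Bayesian treatment."""
    random.seed(0)
    inside = 0
    for _ in range(n):
        b = random.gauss(b_mean, u_sys)        # draw a systematic error
        y = random.gauss(y_obs - b, u_random)  # draw the true value
        if lower <= y <= upper:
            inside += 1
    return inside / n

p_no_sys = conformity_prob(0.0, 1.0, 0.0, 0.0, -1.96, 1.96)  # ~0.95
p_sys = conformity_prob(0.0, 1.0, 0.0, 1.0, -1.96, 1.96)     # lower
```

As expected, a non-negligible systematic uncertainty widens the posterior and lowers the conformity probability.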
Laser measurement and analysis of reposition error in polishing systems
NASA Astrophysics Data System (ADS)
Liu, Weisen; Wang, Junhua; Xu, Min; He, Xiaoying
2015-10-01
In this paper, a robotic reposition error measurement method based on laser interference remote positioning is presented. The geometric error of a robot-based polishing system is analyzed, and a mathematical model of the tilt error is presented. Studies show that errors of less than 1 mm are mainly caused by the tilt error at small incident angles. Marking the spot position with an interference fringe greatly enhances the error measurement precision; the measurement precision of the tilt error can reach 5 µm. Measurement results show that the reposition error of the polishing system arises mainly from the tilt error caused by motor A, and repositioning precision is greatly increased after improvement of the polishing system. The measurement method has important applications in practical error measurement, offering low cost and simple operation.
Anderson, K.K.
1994-05-01
Measurement error modeling is a statistical approach to the estimation of unknown model parameters which takes into account the measurement errors in all of the data. Approaches which ignore the measurement errors in so-called independent variables may yield inferior estimates of unknown model parameters. At the same time, experiment-wide variables (such as physical constants) are often treated as known without error, when in fact they were produced from prior experiments. Realistic assessments of the associated uncertainties in the experiment-wide variables can be utilized to improve the estimation of unknown model parameters. A maximum likelihood approach to incorporate measurements of experiment-wide variables and their associated uncertainties is presented here. An iterative algorithm is presented which yields estimates of unknown model parameters and their estimated covariance matrix. Further, the algorithm can be used to assess the sensitivity of the estimates and their estimated covariance matrix to the given experiment-wide variables and their associated uncertainties.
[Therapeutic errors and dose measuring devices].
García-Tornel, S; Torrent, M L; Sentís, J; Estella, G; Estruch, M A
1982-06-01
To investigate the possibility of therapeutic error in the administration of syrups, the authors measured the capacity (mean ± SD) of 158 home spoons. They classified the spoons into four groups: group I (table spoons), 49 units (11.65 ± 2.10 cc); group II (tea spoons), 41 units (4.70 ± 1.04 cc); group III (coffee spoons), 41 units (2.60 ± 0.59 cc); and group IV (miscellaneous), 27 units. The first three groups were compared with theoretical values of 15, 5, and 2.5 cc, respectively, and statistically significant differences were found in the first group. The authors also analyzed the information that paediatricians receive from the drug compendia they usually consult, studying two points: whether the syrup is supplied with a measuring device, and whether the drug concentration is indicated. Only 18% of the syrups include a measuring device, while about 88% of the drugs indicate their concentration (mg/cc). The authors conclude that, to prevent dosage errors, the pharmaceutical industry should include measuring devices with its products; when none is provided, the safest option is to use syringes. PMID:7125401
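From the summary statistics quoted in the abstract, the one-sample t statistic against the nominal dose volume can be recomputed directly; the huge statistic for the table-spoon group and the modest one for the tea-spoon group match the abstract's finding that only group I differed significantly. (This reconstruction is illustrative; the original analysis may have differed in detail.)

```python
import math

def one_sample_t(mean, sd, n, nominal):
    """One-sample t statistic of a spoon group's mean volume against
    the nominal dose volume, computed from the summary statistics
    quoted in the abstract."""
    return (mean - nominal) / (sd / math.sqrt(n))

t_table = one_sample_t(11.65, 2.10, 49, 15.0)  # table spoons vs 15 cc
t_tea = one_sample_t(4.70, 1.04, 41, 5.0)      # tea spoons vs 5 cc
```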
Inter-tester Agreement in Refractive Error Measurements
Huang, Jiayan; Maguire, Maureen G.; Ciner, Elise; Kulp, Marjean T.; Quinn, Graham E.; Orel-Bixler, Deborah; Cyert, Lynn A.; Moore, Bruce; Ying, Gui-Shuang
2014-01-01
Purpose To determine the inter-tester agreement of refractive error measurements between lay and nurse screeners using the Retinomax Autorefractor (Retinomax) and the SureSight Vision Screener (SureSight). Methods Trained lay and nurse screeners measured refractive error in 1452 preschoolers (3- to 5-years old) using the Retinomax and the SureSight in a random order for screeners and instruments. Inter-tester agreement between lay and nurse screeners was assessed for sphere, cylinder and spherical equivalent (SE) using the mean difference and the 95% limits of agreement. The mean inter-tester difference (lay minus nurse) was compared between groups defined based on child’s age, cycloplegic refractive error, and the reading’s confidence number using analysis of variance. The limits of agreement were compared between groups using the Brown-Forsythe test. Inter-eye correlation was accounted for in all analyses. Results The mean inter-tester differences (95% limits of agreement) were −0.04 (−1.63, 1.54) Diopter (D) sphere, 0.00 (−0.52, 0.51) D cylinder, and −0.04 (−1.65, 1.56) D SE for the Retinomax; and 0.05 (−1.48, 1.58) D sphere, 0.01 (−0.58, 0.60) D cylinder, and 0.06 (−1.45, 1.57) D SE for the SureSight. For either instrument, the mean inter-tester differences in sphere and SE did not differ by the child’s age, cycloplegic refractive error, or the reading’s confidence number. However, for both instruments, the limits of agreement were wider when eyes had significant refractive error or the reading’s confidence number was below the manufacturer’s recommended value. Conclusions Among Head Start preschool children, trained lay and nurse screeners agree well in measuring refractive error using the Retinomax or the SureSight. Both instruments had similar inter-tester agreement in refractive error measurements independent of the child’s age. Significant refractive error and a reading with low confidence number were associated with worse inter-tester agreement.
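The agreement summary used above (mean difference with 95% limits of agreement) is the Bland-Altman construction, mean ± 1.96 SD of the paired differences. A sketch with hypothetical inter-tester differences:

```python
import statistics

def limits_of_agreement(diffs):
    """Mean inter-tester difference and Bland-Altman 95% limits of
    agreement (mean +/- 1.96 SD), the summary used in the abstract to
    compare lay and nurse screeners."""
    m = statistics.fmean(diffs)
    sd = statistics.stdev(diffs)
    return m, m - 1.96 * sd, m + 1.96 * sd

# Hypothetical sphere differences (lay minus nurse), in diopters
diffs = [-0.50, 0.25, 0.00, 0.50, -0.25]
mean_diff, lo, hi = limits_of_agreement(diffs)
```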
NASA Astrophysics Data System (ADS)
Zhao, Xiaolong; Yang, Li
2015-10-01
Based on the theory of infrared radiation and infrared thermography, a mathematical correction model for infrared radiation temperature measurement of semitransparent objects is developed, taking into account the effects of the atmosphere, the surroundings, transmissivity, and many other factors. The effects of emissivity, transmissivity, and measurement conditions on the temperature measurement error of the infrared thermography are analysed. The measurement error for semitransparent objects is compared with that for opaque objects, and countermeasures to reduce the measurement error are also discussed.
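The structure of such a correction can be sketched with a greybody radiance balance: the camera's total signal mixes object emission, surroundings reflected off the object, and atmospheric emission, and the object term is solved for. This Stefan-Boltzmann sketch is a simplification, not the paper's spectral model for semitransparent objects; all parameter values are hypothetical.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def object_temperature(T_app, emissivity, tau, T_amb, T_atm):
    """Recover the surface temperature of the target from the apparent
    (blackbody-equivalent) temperature read by the camera, by removing
    reflected surroundings and atmospheric emission from the total
    radiance. A greybody Stefan-Boltzmann sketch of the correction
    idea, not the paper's full model for semitransparent objects."""
    M_total = SIGMA * T_app ** 4
    M_obj = (M_total
             - tau * (1.0 - emissivity) * SIGMA * T_amb ** 4
             - (1.0 - tau) * SIGMA * T_atm ** 4) / (tau * emissivity)
    return (M_obj / SIGMA) ** 0.25
```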
Yan, Ying; Yi, Grace Y
2016-07-01
Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods. PMID:26328545
Reducing Errors by Use of Redundancy in Gravity Measurements
NASA Technical Reports Server (NTRS)
Kulikov, Igor; Zak, Michail
2004-01-01
A methodology for improving gravity-gradient measurement data exploits the constraints imposed upon the components of the gravity-gradient tensor by the conditions of integrability needed for reconstruction of the gravitational potential. These constraints are derived from the basic equation for the gravitational potential and from mathematical identities that apply to the gravitational potential and its partial derivatives with respect to spatial coordinates. Consider the gravitational potential in a Cartesian coordinate system {x1,x2,x3}. If one measures all the components of the gravity-gradient tensor at all points of interest within a region of space in which one seeks to characterize the gravitational field, one obtains redundant information. One could utilize the constraints to select a minimum (that is, nonredundant) set of measurements from which the gravitational potential could be reconstructed. Alternatively, one could exploit the redundancy to reduce errors from noisy measurements. A convenient example is that of the selection of a minimum set of measurements to characterize the gravitational field at n³ points (where n is an integer) in a cube. Without the benefit of such a selection, it would be necessary to make 9n³ measurements because the gravity-gradient tensor has 9 components at each point. The problem of utilizing the redundancy to reduce errors in noisy measurements is an optimization problem: Given a set of noisy values of the components of the gravity-gradient tensor at the measurement points, one seeks a set of corrected values - a set that is optimum in that it minimizes some measure of error (e.g., the sum of squares of the differences between the corrected and noisy measurement values) while taking account of the fact that the constraints must apply to the exact values. The problem as thus posed leads to a vector equation that can be solved to obtain the corrected values.
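At a single point, two of the constraints mentioned above are that the gradient tensor is symmetric (mixed partial derivatives commute) and trace-free (Laplace's equation in free space), and the least-squares correction is the projection onto that subspace. This single-point sketch omits the cross-point integrability conditions the methodology also uses; the noisy tensor below is made up.

```python
def correct_gradient_tensor(T):
    """Least-squares correction of a noisy 3x3 gravity-gradient tensor:
    project onto the closest symmetric (mixed partials commute) and
    trace-free (Laplace's equation in free space) matrix. A minimal
    sketch of the constraint idea at a single point; the full method
    also exploits integrability across measurement points."""
    # symmetrize: closest symmetric matrix in the Frobenius norm
    S = [[0.5 * (T[i][j] + T[j][i]) for j in range(3)] for i in range(3)]
    # remove the trace: closest trace-free matrix
    tr = sum(S[i][i] for i in range(3)) / 3.0
    for i in range(3):
        S[i][i] -= tr
    return S

noisy = [[1.0, 2.0, 3.0],
         [4.0, 5.0, 6.0],
         [7.0, 8.0, 10.0]]
corrected = correct_gradient_tensor(noisy)
```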
NASA Astrophysics Data System (ADS)
Behmanesh, Iman; Moaveni, Babak
2016-07-01
This paper presents a Hierarchical Bayesian model updating framework to account for the effects of ambient temperature and excitation amplitude. The proposed approach is applied for model calibration, response prediction and damage identification of a footbridge under changing environmental/ambient conditions. The concrete Young's modulus of the footbridge deck is the considered updating structural parameter with its mean and variance modeled as functions of temperature and excitation amplitude. The identified modal parameters over 27 months of continuous monitoring of the footbridge are used to calibrate the updating parameters. One of the objectives of this study is to show that by increasing the levels of information in the updating process, the posterior variation of the updating structural parameter (concrete Young's modulus) is reduced. To this end, the calibration is performed at three information levels using (1) the identified modal parameters, (2) modal parameters and ambient temperatures, and (3) modal parameters, ambient temperatures, and excitation amplitudes. The calibrated model is then validated by comparing the model-predicted natural frequencies and those identified from measured data after deliberate change to the structural mass. It is shown that accounting for modeling error uncertainties is crucial for reliable response prediction, and accounting only the estimated variability of the updating structural parameter is not sufficient for accurate response predictions. Finally, the calibrated model is used for damage identification of the footbridge.
Improving Accountability through Expanded Measures of Performance
ERIC Educational Resources Information Center
Hamilton, Laura S.; Schwartz, Heather L.; Stecher, Brian M.; Steele, Jennifer L.
2013-01-01
Purpose: The purpose of this paper is to examine how test-based accountability has influenced school and district practices and explore how states and districts might consider creating expanded systems of measures to address the shortcomings of traditional accountability. It provides research-based guidance for entities that are developing or…
Measurement control administration for nuclear materials accountability
Rudy, C.R.
1991-01-31
In 1986 a measurement control program was instituted at Mound to ensure that measurement performance used for nuclear material accountability was properly monitored and documented. The organization and management of various aspects of the program are discussed. Accurate measurements are the basis of nuclear material accountability. The validity of the accountability values depends on the measurement results that are used to determine inventories, receipts, and shipments. With this measurement information, material balances are calculated to determine losses and gains of materials during a specific time period. Calculations of inventory differences (IDs) are based on chemical or physical measurements of many items, and the validity of each term is dependent on the component measurements. Thus, in Figure 1, the measured element weight of 17 g is dependent on the performance of the particular measurement system that was used. In this case, the measurement is performed using a passive gamma ray method with a calibration curve determined by measuring representative standards containing a range of special nuclear materials (Figure 2). One objective of a measurement control program is to monitor and verify the validity of the calibration curve (Figure 3). In 1986 Mound's Nuclear Materials Accountability (NMA) group instituted a formal measurement control program to ensure the validity of the numbers that comprise the inventory difference equation and to provide a measure of how well bulk materials can be controlled. Most measurements used for accountability are production measurements with their own quality assurance programs. In many cases a measurement control system is planned and maintained by the developers and operators of the particular measurement system, with oversight by the management responsible for the results. 4 refs., 7 figs.
Measuring Systematic Error with Curve Fits
ERIC Educational Resources Information Center
Rupright, Mark E.
2011-01-01
Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…
The impact of response measurement error on the analysis of designed experiments
Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee
2015-12-21
This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.
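The role of repeat measurements described above can be illustrated by simulation: with additive response measurement error, the observed variance is the process variance plus the measurement-error variance, and averaging r repeats shrinks the latter by 1/r. All parameter values below are hypothetical, and this is a toy version of the study's simulation, not its actual design.

```python
import random
import statistics

def simulate_responses(true_mean, process_sd, meas_sd, n, repeats=1):
    """Simulate experimental responses with additive response
    measurement error; averaging `repeats` readings per run shrinks
    the measurement-error variance by 1/repeats, illustrating the
    point about repeat measurements."""
    out = []
    for _ in range(n):
        y = random.gauss(true_mean, process_sd)                 # true response
        reads = [random.gauss(y, meas_sd) for _ in range(repeats)]
        out.append(statistics.fmean(reads))
    return out

random.seed(0)
single = simulate_responses(10.0, 1.0, 2.0, 4000, repeats=1)  # var ~ 1 + 4
avg4 = simulate_responses(10.0, 1.0, 2.0, 4000, repeats=4)    # var ~ 1 + 1
```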
Measurement Validity and Accountability for Student Learning
ERIC Educational Resources Information Center
Borden, Victor M. H.; Young, John W.
2008-01-01
In this chapter, the authors focus on issues of validity in measuring student learning as a prospective indicator of institutional effectiveness. Other chapters in this volume include reference to specific approaches to measuring student learning for accountability purposes, such as through standardized tests, authentic samples of student work,…
Error analysis and data reduction for interferometric surface measurements
NASA Astrophysics Data System (ADS)
Zhou, Ping
High-precision optical systems are generally tested using interferometry, since it often is the only way to achieve the desired measurement precision and accuracy. Interferometers can generally measure a surface to an accuracy of one hundredth of a wave. In order to achieve an accuracy to the next order of magnitude, one thousandth of a wave, each error source in the measurement must be characterized and calibrated. Errors in interferometric measurements are classified into random errors and systematic errors. An approach to estimate random errors in the measurement is provided, based on the variation in the data. Systematic errors, such as retrace error, imaging distortion, and error due to diffraction effects, are also studied in this dissertation. Methods to estimate the first order geometric error and errors due to diffraction effects are presented. Interferometer phase modulation transfer function (MTF) is another intrinsic error source. The phase MTF of an infrared interferometer is measured with a phase Siemens star, and a Wiener filter is designed to recover the middle spatial frequency information. Map registration is required when there are two maps tested in different systems and one of these two maps needs to be subtracted from the other. Incorrect mapping causes wavefront errors. A smoothing filter method is presented which can reduce the sensitivity to registration error and improve the overall measurement accuracy. Interferometric optical testing with computer-generated holograms (CGH) is widely used for measuring aspheric surfaces. The accuracy of the drawn pattern on a hologram determines the accuracy of the measurement. Uncertainties in the CGH manufacturing process introduce errors in holograms and then in the generated wavefront. An optimal design of the CGH is provided which can reduce the sensitivity to fabrication errors and give good diffraction efficiency for both chrome-on-glass and phase etched CGHs.
Shim, Jongmyeong; Kim, Joongeok; Lee, Jinhyung; Park, Changsu; Cho, Eikhyun; Kang, Shinill
2015-07-27
The increasing demand for lightweight, miniaturized electronic devices has prompted the development of small, high-performance optical components for light-emitting diode (LED) illumination. As such, the Fresnel lens is widely used in applications due to its compact configuration. However, the vertical groove angle between the optical axis and the groove inner facets in a conventional Fresnel lens creates an inherent Fresnel loss, which degrades optical performance. Modified Fresnel lenses (MFLs) have been proposed in which the groove angles along the optical paths are carefully controlled; however, in practice, the optical performance of MFLs is inferior to the theoretical performance due to fabrication errors, as conventional design methods do not account for fabrication errors as part of the design process. In this study, the Fresnel loss and the loss area due to microscopic fabrication errors in the MFL were theoretically derived to determine optical performance. Based on this analysis, a design method for the MFL accounting for the fabrication errors was proposed. MFLs were fabricated using an ultraviolet imprinting process and an injection molding process, two representative processes with differing fabrication errors. The MFL fabrication error associated with each process was examined analytically and experimentally to investigate our methodology. PMID:26367631
The Relative Error Magnitude in Three Measures of Change.
ERIC Educational Resources Information Center
Zimmerman, Donald W.; Williams, Richard H.
1982-01-01
Formulas for the standard error of measurement of three measures of change (simple differences; residualized difference scores; and a measure introduced by Tucker, Damarin, and Messick) are derived. A practical guide for determining the relative error of the three measures is developed. (Author/JKS)
ERIC Educational Resources Information Center
Televantou, Ioulia; Marsh, Herbert W.; Kyriakides, Leonidas; Nagengast, Benjamin; Fletcher, John; Malmberg, Lars-Erik
2015-01-01
The main objective of this study was to quantify the impact of failing to account for measurement error on school compositional effects. Multilevel structural equation models were incorporated to control for measurement error and/or sampling error. Study 1, a large sample of English primary students in Years 1 and 4, revealed a significantly…
Chromosomal locus tracking with proper accounting of static and dynamic errors.
Backlund, Mikael P; Joyner, Ryan; Moerner, W E
2015-06-01
The mean-squared displacement (MSD) and velocity autocorrelation (VAC) of tracked single particles or molecules are ubiquitous metrics for extracting parameters that describe the object's motion, but they are both corrupted by experimental errors that hinder the quantitative extraction of underlying parameters. For the simple case of pure Brownian motion, the effects of localization error due to photon statistics ("static error") and motion blur due to finite exposure time ("dynamic error") on the MSD and VAC are already routinely treated. However, particles moving through complex environments such as cells, nuclei, or polymers often exhibit anomalous diffusion, for which the effects of these errors are less often sufficiently treated. We present data from tracked chromosomal loci in yeast that demonstrate the necessity of properly accounting for both static and dynamic error in the context of an anomalous diffusion that is consistent with a fractional Brownian motion (FBM). We compare these data to analytical forms of the expected values of the MSD and VAC for a general FBM in the presence of these errors. PMID:26172745
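For the pure-Brownian baseline case the abstract mentions as "routinely treated," the combined effect of static and dynamic error on the MSD has a closed form. The sketch below (illustrative parameters, not from the paper) simulates blurred, noisy 1D tracks and checks the empirical MSD against the corrected expression:

```python
import numpy as np

rng = np.random.default_rng(0)
D, dt, sigma = 0.5, 1.0, 0.3        # diffusion coeff, frame interval, localization error
n_tracks, n_frames, n_sub = 400, 500, 20

# Fine-grained Brownian paths; averaging within each frame models motion blur
# (dynamic error), and added noise models photon-limited localization (static error).
steps = rng.normal(0.0, np.sqrt(2 * D * dt / n_sub), (n_tracks, n_frames * n_sub))
fine = steps.cumsum(axis=1)
blurred = fine.reshape(n_tracks, n_frames, n_sub).mean(axis=2)
obs = blurred + rng.normal(0.0, sigma, blurred.shape)

def msd(tracks, max_lag):
    return np.array([np.mean((tracks[:, k:] - tracks[:, :-k]) ** 2)
                     for k in range(1, max_lag + 1)])

lags = np.arange(1, 21)
emp = msd(obs, 20)
# 1D analytic MSD for pure BM with full-frame exposure and static error sigma:
#   MSD(k*dt) = 2*D*k*dt + 2*sigma**2 - (2/3)*D*dt
theory = 2 * D * lags * dt + 2 * sigma**2 - (2.0 / 3.0) * D * dt
print(np.max(np.abs(emp - theory) / theory))   # small relative deviation
```

The static term inflates the MSD at all lags while the blur term deflates it, which is why naive fits of the short-lag MSD misestimate both D and any anomalous exponent.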
Measuring local gradient and skew quadrupole errors in RHIC IRs.
Cardona, J.; Peggs, S.; Pilat, R.; Ptitsyn, V.
2004-07-05
The measurement of local linear errors at RHIC interaction regions using an "action and phase" analysis of difference orbits has already been presented. This paper evaluates the accuracy of this technique using difference orbits that were taken when known gradient errors and skew quadrupole errors were intentionally introduced. It also presents action and phase analysis of simulated orbits when controlled errors are intentionally placed in a RHIC simulation model.
Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?
NASA Technical Reports Server (NTRS)
Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan
2013-01-01
The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into random errors, and a lack of prediction capability. Therefore, the multiplicative error model is a better choice.
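The contrast between the two error models can be illustrated with a toy simulation (all parameters hypothetical): multiplicative errors produce residuals whose spread grows with rain rate in the additive view, but become homoscedastic after a log transform:

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.gamma(shape=0.8, scale=10.0, size=5000) + 0.1   # toy daily rain (mm)

# Multiplicative error model: obs = alpha * truth**beta * eps, ln(eps) ~ N(0, s**2)
alpha, beta, s = 1.3, 0.9, 0.4
obs = alpha * truth**beta * np.exp(rng.normal(0.0, s, truth.size))

# Additive view: residual spread grows with rain rate (heteroscedastic)
add_res = obs - truth
# Multiplicative view: regress ln(obs) on ln(truth); residuals are homoscedastic
A = np.column_stack([np.ones_like(truth), np.log(truth)])
coef, *_ = np.linalg.lstsq(A, np.log(obs), rcond=None)
mult_res = np.log(obs) - A @ coef

lo, hi = truth < np.median(truth), truth >= np.median(truth)
print(add_res[hi].std() / add_res[lo].std())    # >> 1: variance grows with rain
print(mult_res[hi].std() / mult_res[lo].std())  # ~ 1: constant variance
print(coef)                                     # ~ [ln(1.3), 0.9]
```

The log-space fit also recovers the systematic component (alpha, beta) cleanly, mirroring the letter's first criterion.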
Measurement of errors in clinical laboratories.
Agarwal, Rachna
2013-07-01
Laboratories have a major impact on patient safety, as 80-90 % of all diagnoses are made on the basis of laboratory tests. Laboratory errors have a reported frequency of 0.012-0.6 % of all test results. Patient safety is a managerial issue that can be enhanced by implementing an active system to identify and monitor quality failures. One approach is a reactive method comprising incident reporting followed by root cause analysis, which leads to the identification and correction of weaknesses in the system's policies and procedures. Another is a proactive method such as Failure Mode and Effect Analysis, which focuses on the entire examination process, anticipating major adverse events and pre-emptively preventing them from occurring; it is used for prospective risk analysis of high-risk processes to reduce the chance of errors in the laboratory and other patient care areas. PMID:24426216
Statistical approaches to account for false-positive errors in environmental DNA samples.
Lahoz-Monfort, José J; Guillera-Arroita, Gurutzeta; Tingley, Reid
2016-05-01
Environmental DNA (eDNA) sampling is prone to both false-positive and false-negative errors. We review statistical methods to account for such errors in the analysis of eDNA data and use simulations to compare the performance of different modelling approaches. Our simulations illustrate that even low false-positive rates can produce biased estimates of occupancy and detectability. We further show that removing or classifying single PCR detections in an ad hoc manner under the suspicion that such records represent false positives, as sometimes advocated in the eDNA literature, also results in biased estimation of occupancy, detectability and false-positive rates. We advocate alternative approaches to account for false-positive errors that rely on prior information, or the collection of ancillary detection data at a subset of sites using a sampling method that is not prone to false-positive errors. We illustrate the advantages of these approaches over ad hoc classifications of detections and provide practical advice and code for fitting these models in maximum likelihood and Bayesian frameworks. Given the severe bias induced by false-negative and false-positive errors, the methods presented here should be more routinely adopted in eDNA studies. PMID:26558345
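A minimal sketch of the site-occupancy likelihood with false positives that underlies such models (simulated data and a simple grid-search MLE; the paper's own models and code are richer): each site's detection count is a mixture of a true-detection binomial and a false-positive binomial:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(2)
S, K = 1000, 8                        # sites, PCR replicates per site
psi, p11, p10 = 0.6, 0.7, 0.05        # occupancy, detection, false-positive prob
z = rng.random(S) < psi               # latent occupancy state
y = rng.binomial(K, np.where(z, p11, p10))   # detections per site

ks = np.arange(K + 1)
C = np.array([comb(K, k) for k in ks])
counts = np.bincount(y, minlength=K + 1)     # sufficient statistics

def loglik(ps, d, f):
    pm_occ = C * d**ks * (1 - d)**(K - ks)   # binomial pmf if occupied
    pm_emp = C * f**ks * (1 - f)**(K - ks)   # binomial pmf if empty (false +)
    return (counts * np.log(ps * pm_occ + (1 - ps) * pm_emp)).sum()

grid = [(ps, d, f)
        for ps in np.linspace(0.3, 0.9, 25)
        for d in np.linspace(0.5, 0.9, 21)
        for f in np.linspace(0.01, 0.15, 15)]
best = max(grid, key=lambda t: loglik(*t))
print(best)   # ~ (0.6, 0.7, 0.05)
```

Setting p10 = 0 in the same likelihood reproduces the standard occupancy model and shows how even a small false-positive rate shifts the occupancy estimate.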
NASA Astrophysics Data System (ADS)
Evin, Guillaume; Thyer, Mark; Kavetski, Dmitri; McInerney, David; Kuczera, George
2014-03-01
The paper appraises two approaches for the treatment of heteroscedasticity and autocorrelation in residual errors of hydrological models. Both approaches use weighted least squares (WLS), with heteroscedasticity modeled as a linear function of predicted flows and autocorrelation represented using an AR(1) process. In the first approach, heteroscedasticity and autocorrelation parameters are inferred jointly with hydrological model parameters. The second approach is a two-stage "postprocessor" scheme, where Stage 1 infers the hydrological parameters ignoring autocorrelation and Stage 2 conditionally infers the heteroscedasticity and autocorrelation parameters. These approaches are compared to a WLS scheme that ignores autocorrelation. Empirical analysis is carried out using daily data from 12 US catchments from the MOPEX set using two conceptual rainfall-runoff models, GR4J and HBV. Under synthetic conditions, the postprocessor and joint approaches provide similar predictive performance, though the postprocessor approach tends to underestimate parameter uncertainty. However, the MOPEX results indicate that the joint approach can be nonrobust. In particular, when applied to GR4J, it often produces poor predictions due to strong multiway interactions between a hydrological water balance parameter and the error model parameters. The postprocessor approach is more robust precisely because it ignores these interactions. Practical benefits of accounting for error autocorrelation are demonstrated by analyzing streamflow predictions aggregated to a monthly scale (where ignoring daily-scale error autocorrelation leads to significantly underestimated predictive uncertainty), and by analyzing one-day-ahead predictions (where accounting for the error autocorrelation produces clearly higher precision and better tracking of observed data). Including autocorrelation in the residual error model also significantly affects calibrated parameter values and uncertainty estimates.
Detecting errors and anomalies in computerized materials control and accountability databases
Whiteson, R.; Hench, K.; Yarbro, T.; Baumgart, C.
1998-12-31
The Automated MC&A Database Assessment project is aimed at improving anomaly and error detection in materials control and accountability (MC&A) databases and increasing confidence in the data that they contain. Anomalous data resulting in poor categorization of nuclear material inventories greatly reduce the value of the database information to users, so it is essential that MC&A data be assessed periodically for anomalies or errors. Anomaly detection can identify errors in databases and thus provide assurance of the integrity of the data. An expert system has been developed at Los Alamos National Laboratory that examines these large databases for anomalous or erroneous data. For several years, MC&A subject matter experts at Los Alamos have been using this automated system to examine the large amounts of accountability data that the Los Alamos Plutonium Facility generates. These data are collected and managed by the Material Accountability and Safeguards System, a near-real-time computerized nuclear material accountability and safeguards system. This year they have expanded the user base, customizing the anomaly detector for the varying requirements of different groups of users. This paper describes the progress in customizing the expert system to the needs of the users of the data and reports on the results.
NASA Astrophysics Data System (ADS)
Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Tang, Yangchao
2016-02-01
The paper designs a roundness measurement model that takes multiple systematic errors into account (eccentricity, probe offset, probe tip radius, and tilt error) for the roundness measurement of cylindrical components. The effects of these systematic errors and of the component radius are analysed in the roundness measurement. The proposed method is built on an instrument with a high-precision rotating spindle, and its effectiveness is verified by experiment with a standard cylindrical component measured on a roundness measuring machine. Compared to the traditional limacon measurement model, the accuracy of roundness measurement can be increased by about 2.2 μm using the proposed model for an object with a large radius of around 37 mm. The proposed method can improve the accuracy of roundness measurement and can be used for error separation, calibration, and comparison, especially for cylindrical components with a large radius.
PMID:26931894
On modeling animal movements using Brownian motion with measurement error.
Pozdnyakov, Vladimir; Meyer, Thomas; Wang, Yu-Bo; Yan, Jun
2014-02-01
Modeling animal movements with Brownian motion (or more generally by a Gaussian process) has a long tradition in ecological studies. The recent Brownian bridge movement model (BBMM), which incorporates measurement errors, has been quickly adopted by ecologists because of its simplicity and tractability. We discuss some nontrivial properties of the discrete-time stochastic process that results from observing a Brownian motion with added normal noise at discrete times. In particular, we demonstrate that the observed sequence of random variables is not Markov. Consequently the expected occupation time between two successively observed locations does not depend on just those two observations; the whole path must be taken into account. Nonetheless, the exact likelihood function of the observed time series remains tractable; it requires only sparse matrix computations. The likelihood-based estimation procedure is described in detail and compared to the BBMM estimation. PMID:24669719
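The observed series (Brownian motion plus independent normal noise) has MA(1) increments, so the exact Gaussian likelihood involves only a tridiagonal covariance, consistent with the abstract's sparse-matrix remark. A sketch with simulated data and a grid-search MLE (illustrative parameters, not the BBMM itself):

```python
import numpy as np

rng = np.random.default_rng(3)
n, dt = 1000, 1.0
sig2_bm, sig2_err = 2.0, 0.5            # BM variance rate, measurement-noise variance
path = rng.normal(0.0, np.sqrt(sig2_bm * dt), n).cumsum()
obs = path + rng.normal(0.0, np.sqrt(sig2_err), n)

def loglik(sb, se):
    d = np.diff(obs)
    m = d.size
    # increments are MA(1): Var = sb*dt + 2*se, lag-1 cov = -se (tridiagonal)
    a, b = sb * dt + 2.0 * se, -se
    # O(m) LDL' factorization of the tridiagonal covariance (the "sparse" part)
    c = np.empty(m)
    c[0] = a
    for i in range(1, m):
        c[i] = a - b * b / c[i - 1]
    yf = np.empty(m)                    # forward substitution L yf = d
    yf[0] = d[0]
    for i in range(1, m):
        yf[i] = d[i] - (b / c[i - 1]) * yf[i - 1]
    return -0.5 * (np.log(c).sum() + (yf * yf / c).sum() + m * np.log(2 * np.pi))

grid = [(sb, se) for sb in np.linspace(1.0, 3.0, 21)
                 for se in np.linspace(0.05, 1.0, 20)]
best = max(grid, key=lambda t: loglik(*t))
print(best)   # near (2.0, 0.5)
```

The negative lag-1 covariance of the increments is exactly the non-Markov signature the abstract describes: consecutive observed displacements are anticorrelated through the shared noise term.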
Mode error analysis of impedance measurement using twin wires
NASA Astrophysics Data System (ADS)
Huang, Liang-Sheng; Yoshiro, Irie; Liu, Yu-Dong; Wang, Sheng
2015-03-01
Both the longitudinal and transverse coupling impedances of some critical components need to be measured for accelerator design, and the twin wires method is widely used to measure them on the bench. A mode error is induced when the twin wires method is used with a two-port network analyzer. Here, the mode error is analyzed theoretically and an example analysis is given. The mode error in the measurement is a few percent when a hybrid with no less than 25 dB isolation and a splitter with no less than 20 dB magnitude error are used. Supported by the Natural Science Foundation of China (11175193, 11275221)
Accounting for data errors discovered from an audit in multiple linear regression.
Shepherd, Bryan E; Yu, Chang
2011-09-01
A data coordinating team performed onsite audits and discovered discrepancies between the data sent to the coordinating center and that recorded at sites. We present statistical methods for incorporating audit results into analyses. This can be thought of as a measurement error problem, where the distribution of errors is a mixture with a point mass at 0. If the error rate is nonzero, then even if the mean of the discrepancy between the reported and correct values of a predictor is 0, naive estimates of the association between two continuous variables will be biased. We consider scenarios where there are (1) errors in the predictor, (2) errors in the outcome, and (3) possibly correlated errors in the predictor and outcome. We show how to incorporate the error rate and magnitude, estimated from a random subset (the audited records), to compute unbiased estimates of association and proper confidence intervals. We then extend these results to multiple linear regression where multiple covariates may be incorrect in the database and the rate and magnitude of the errors may depend on study site. We study the finite sample properties of our estimators using simulations, discuss some practical considerations, and illustrate our methods with data from 2815 HIV-infected patients in Latin America, of whom 234 had their data audited using a sequential auditing plan. PMID:21281274
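Under a mean-zero mixture error in the predictor, the naive slope is attenuated by the reliability of the database values, which an audited subset can estimate. A toy sketch of this idea (hypothetical error rates; a simplification of the paper's estimators):

```python
import numpy as np

rng = np.random.default_rng(4)
n, beta = 20000, 2.0
x = rng.normal(0.0, 1.0, n)                    # correct predictor values
yv = beta * x + rng.normal(0.0, 1.0, n)        # outcome, assumed error-free here
# database errors: point mass at 0 mixed with mean-zero discrepancies (10% rate)
bad = rng.random(n) < 0.10
x_db = x + np.where(bad, rng.normal(0.0, 2.0, n), 0.0)

naive = np.cov(x_db, yv)[0, 1] / np.var(x_db)  # attenuated toward 0

# audit a random subset, estimate the marginal error variance, and correct
audit = rng.choice(n, 2000, replace=False)
u = x_db[audit] - x[audit]                     # discrepancies found by the audit
lam = 1.0 - np.var(u) / np.var(x_db)           # reliability of database values
corrected = naive / lam
print(round(naive, 2), round(corrected, 2))    # attenuated vs ~ 2.0
```

Even though the discrepancies have mean zero, the naive estimate is biased, which is the abstract's central point; the audit supplies the error rate and magnitude needed for the correction.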
Gaye, Amadou; Burton, Thomas W. Y.; Burton, Paul R.
2015-01-01
Motivation: Very large studies are required to provide sufficiently big sample sizes for adequately powered association analyses. This can be an expensive undertaking, and it is important that an accurate sample size is identified. For more realistic sample size calculation and power analysis, the impact of unmeasured aetiological determinants and the quality of measurement of both outcome and explanatory variables should be taken into account. Conventional methods to analyse power use closed-form solutions that are not flexible enough to cater for all of these elements easily, and they often result in a potentially substantial overestimation of the actual power. Results: In this article, we describe the Estimating Sample-size and Power in R by Exploring Simulated Study Outcomes (ESPRESSO) tool, which allows assessment errors to be incorporated in power calculations under various biomedical scenarios. We also report a real-world analysis where we used this tool to answer an important strategic question for an existing cohort. Availability and implementation: The software is available for online calculation and downloads at http://espresso-research.org. The code is freely available at https://github.com/ESPRESSO-research. Contact: louqman@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25908791
Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint
Stynes, J. K.; Ihas, B.
2012-04-01
The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work derived the reflector slope errors from the reflected image of the absorber together with an independent measurement of the absorber location, so the accuracy of the slope error measurement depended on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors, so measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.
Pressure Change Measurement Leak Testing Errors
Pryor, Jeff M; Walker, William C
2014-01-01
A pressure change test is a common leak testing method used in construction and Non-Destructive Examination (NDE). The test is known as a fast, simple, and easy-to-apply evaluation method. While it may be fairly quick to conduct and require simple instrumentation, the engineering behind this type of test is more complex than is apparent on the surface. This paper discusses some of the more common errors made during the application of a pressure change test and gives the test engineer insight into how to correctly compensate for these factors. The principles discussed here apply to ideal gases such as air and other monatomic or diatomic gases; the same principles can be applied to polyatomic gases or to liquid flow rate with formulas altered for those types of tests, using the same methodology.
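One such compensation, temperature drift between pressure readings, can be sketched with the ideal gas law (illustrative numbers, not from the paper): an apparent pressure loss may be almost entirely thermal rather than leakage:

```python
# Ideal-gas mass balance for a pressure-change leak test: compensating for
# temperature drift between readings avoids a classic false leak indication.
R_air = 287.05               # J/(kg*K), specific gas constant for air
V = 2.5                      # vessel volume, m^3
P1, T1 = 501_325.0, 293.15   # initial absolute pressure (Pa) and temperature (K)
P2, T2 = 498_000.0, 291.15   # final readings after 24 h (2 K cooler)

uncompensated = V * (P2 - P1) / (R_air * T1)   # assumes constant temperature
compensated = V * (P2 / T2 - P1 / T1) / R_air  # true change in contained mass
print(uncompensated, compensated)
```

Here the uncompensated figure suggests a substantial mass loss, while the temperature-compensated mass change is near zero: the pressure drop is almost entirely the 2 K cooldown.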
Temperature error in radiation thermometry caused by emissivity and reflectance measurement error.
Corwin, R R; Rodenburghii, A
1994-04-01
A general expression for the temperature error caused by emissivity uncertainty is developed, and it is concluded that shorter-wavelength systems provide significantly less temperature error. A technique to measure the normal emissivity is proposed that uses a normally incident light beam and an aperture to collect a portion of the energy reflected from the surface, measuring essentially both the specular component and the biangular reflectance at the edge of the aperture. The theoretical results show that the aperture size need not be substantial to provide reasonably low temperature errors for a broad class of materials and surface reflectance conditions. PMID:20885529
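Under the Wien approximation, the temperature error from a fractional emissivity error scales linearly with wavelength, consistent with the abstract's conclusion favoring shorter-wavelength systems. A numeric sketch (illustrative values, not the paper's general expression):

```python
# Wien-approximation sensitivity of radiance temperature to emissivity error:
#   dT ≈ (lambda * T**2 / c2) * (d_eps / eps)
# so, for a fixed fractional emissivity error, shorter wavelengths give a
# proportionally smaller temperature error.
c2 = 1.4388e-2                      # second radiation constant, m*K
T, rel_emis_err = 1200.0, 0.05      # target temperature (K), 5% emissivity error
for lam_um in (0.9, 1.6, 3.9, 10.0):
    dT = (lam_um * 1e-6) * T**2 / c2 * rel_emis_err
    print(f"{lam_um:5.1f} um -> {dT:5.1f} K")
```

At 0.9 um the same 5% emissivity error costs roughly a tenth of the temperature error it costs at 10 um.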
Using neural nets to measure ocular refractive errors: a proposal
NASA Astrophysics Data System (ADS)
Netto, Antonio V.; Ferreira de Oliveira, Maria C.
2002-12-01
We propose the development of a functional system for diagnosing and measuring ocular refractive errors of the human eye (astigmatism, hypermetropia and myopia) by automatically analyzing images of the ocular globe acquired with the Hartmann-Shack (HS) technique. HS images are to be input into a system capable of recognizing the presence of a refractive error and outputting a measure of that error. The system should pre-process an image supplied by the acquisition technique and then use artificial neural networks combined with fuzzy logic to extract the necessary information and output an automated diagnosis of the refractive errors that may be present in the ocular globe under exam.
Phase error compensation methods for high-accuracy profile measurement
NASA Astrophysics Data System (ADS)
Cai, Zewei; Liu, Xiaoli; Peng, Xiang; Zhang, Zonghua; Jiang, Hao; Yin, Yongkai; Huang, Shujun
2016-04-01
In phase-shifting fringe projection profilometry, the nonlinear intensity response of the projector-camera setup, called the gamma effect, is a major source of error in phase retrieval. This paper proposes two novel, accurate approaches that realize both active and passive phase error compensation, based on a universal phase error model suitable for an arbitrary phase-shifting step. Experimental results on phase error compensation and on profile measurement of standard components verify the validity and accuracy of the two proposed approaches, which are robust under changing measurement conditions.
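The gamma effect itself can be reproduced in a few lines (a generic 4-step phase-shifting simulation, not the paper's compensation methods): a nonlinear intensity response leaves the ideal phase retrieval exact at gamma = 1 but introduces periodic phase ripple otherwise:

```python
import numpy as np

N = 4                                  # phase-shifting steps
phi = np.linspace(0.0, 2 * np.pi, 1000, endpoint=False)   # true phase values
delta = 2 * np.pi * np.arange(N) / N   # nominal phase shifts

def retrieve(gamma):
    # fringe intensity through a power-law (gamma) camera/projector response
    I = (0.5 + 0.4 * np.cos(phi[:, None] + delta)) ** gamma
    est = np.arctan2(-(I * np.sin(delta)).sum(axis=1),
                     (I * np.cos(delta)).sum(axis=1))
    return np.abs(np.angle(np.exp(1j * (est - phi))))      # wrapped phase error

print(retrieve(1.0).max())   # linear response: retrieval is exact
print(retrieve(2.2).max())   # gamma distortion: periodic ripple appears
```

The ripple is the phase error that active (projector pre-correction) or passive (post-hoc model-based) compensation schemes aim to remove.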
Measurement error in biomarkers: sources, assessment, and impact on studies.
White, Emily
2011-01-01
Measurement error in a biomarker refers to the error of a biomarker measure applied in a specific way to a specific population, versus the true (etiologic) exposure. In epidemiologic studies, this error includes not only laboratory error, but also errors (variations) introduced during specimen collection and storage, and due to day-to-day, month-to-month, and year-to-year within-subject variability of the biomarker. Validity and reliability studies that aim to assess the degree of biomarker error for use of a specific biomarker in epidemiologic studies must be properly designed to measure all of these sources of error. Validity studies compare the biomarker to be used in an epidemiologic study to a perfect measure in a group of subjects. The parameters used to quantify the error in a binary marker are sensitivity and specificity. For continuous biomarkers, the parameters used are bias (the mean difference between the biomarker and the true exposure) and the validity coefficient (correlation of the biomarker with the true exposure). Often a perfect measure of the exposure is not available, so reliability (repeatability) studies are conducted. These are analysed using kappa for binary biomarkers and the intraclass correlation coefficient for continuous biomarkers. Equations are given which use these parameters from validity or reliability studies to estimate the impact of nondifferential biomarker measurement error on the risk ratio in an epidemiologic study that will use the biomarker. Under nondifferential error, the attenuation of the risk ratio is towards the null and is often quite substantial, even for reasonably accurate biomarker measures. Differential biomarker error between cases and controls can bias the risk ratio in any direction and completely invalidate an epidemiologic study. PMID:22997860
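For a continuous exposure in a log-linear model, a standard approximation expresses the attenuation directly in terms of the validity coefficient rho (a simplification of the equations the abstract refers to, shown here per standard deviation of exposure):

```python
# Approximate attenuation of a log-linear risk ratio (per 1 SD of exposure)
# by nondifferential measurement error with validity coefficient rho:
#   RR_observed ≈ RR_true ** (rho ** 2)
RR_true = 2.0
for rho in (1.0, 0.9, 0.7, 0.5):
    print(rho, round(RR_true ** rho**2, 2))
```

Even a biomarker correlated 0.7 with true exposure shrinks a true risk ratio of 2.0 to roughly 1.4, illustrating how "reasonably accurate" measures still attenuate substantially.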
The error analysis and online measurement of linear slide motion error in machine tools
NASA Astrophysics Data System (ADS)
Su, H.; Hong, M. S.; Li, Z. J.; Wei, Y. L.; Xiong, S. B.
2002-06-01
A new accurate two-probe time domain method is put forward to measure the straight-going component of motion error in machine tools. The non-periodic, non-closing character of the straightness profile error tends to introduce higher-order harmonic distortion into the measurement results. This distortion can be avoided in the new method through a symmetry continuation algorithm together with uniformity and least squares methods. The harmonic suppression is analysed in detail using modern control theory. Both the straight-going component of motion error in the machine tool and the profile error of a workpiece manufactured on it can be measured at the same time, and all of this information is available to diagnose the origin of faults in machine tools. The analysis is verified by experiment.
Error analysis in stereo vision for location measurement of 3D point
NASA Astrophysics Data System (ADS)
Li, Yunting; Zhang, Jun; Tian, Jinwen
2015-12-01
Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that calculates the uncertainty region of the point location by intersecting the two pixel fields of view, which may produce loose bounds; moreover, only a few sources of error, such as pixel error or camera position, are usually taken into account. In this paper we present a straightforward, practical method to estimate the location error that takes most sources of error into account. We sum up and simplify all the input errors into five parameters by a rotation transformation, then use the fast midpoint-method algorithm to derive the mathematical relationships between the target point and these parameters. The expectation and covariance matrix of the 3D point location are thus obtained, which constitute the uncertainty region of the point location. We then trace the propagation of the primitive input errors through the stereo system and throughout the whole analysis process, from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify its performance.
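The qualitative behavior of stereo localization error can be seen from the simplest first-order propagation in rectified geometry (a reduced, one-parameter version of the multi-parameter analysis described above; all values illustrative):

```python
# First-order propagation of disparity error to depth error in rectified stereo:
#   Z = f * B / d   =>   sigma_Z ≈ (Z**2 / (f * B)) * sigma_d
# Depth uncertainty grows quadratically with range.
f_px, B, sigma_d = 1200.0, 0.12, 0.25   # focal length (px), baseline (m), disparity error (px)
for Z in (2.0, 5.0, 10.0):
    print(Z, (Z**2 / (f_px * B)) * sigma_d)
```

Doubling the range quadruples the depth uncertainty, which is why the full covariance analysis matters most for distant points.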
System Measures Errors Between Time-Code Signals
NASA Technical Reports Server (NTRS)
Cree, David; Venkatesh, C. N.
1993-01-01
System measures timing errors between signals produced by three asynchronous time-code generators. Errors between 1-second clock pulses resolved to 2 microseconds. Basic principle of computation of timing errors as follows: central processing unit in microcontroller constantly monitors time data received from time-code generators for changes in 1-second time-code intervals. In response to any such change, microprocessor buffers count of 16-bit internal timer.
Contouring error compensation on a micro coordinate measuring machine
NASA Astrophysics Data System (ADS)
Fan, Kuang-Chao; Wang, Hung-Yu; Ye, Jyun-Kuan
2011-12-01
In recent years, three-dimensional measurement in nanotechnology research has received great attention. Given the demand for high accuracy, error compensation of the measuring machine is very important. In this study, a high-precision Micro-CMM (coordinate measuring machine) has been developed, composed of a coplanar stage that reduces the Abbé error in the vertical direction, a linear diffraction grating interferometer (LDGI) as the position feedback sensor with nanometer resolution, and ultrasonic motors for position control. This paper presents the error compensation strategy for both "home accuracy" and "position accuracy" in both axes. For home error compensation, we utilize a commercial DVD pick-up head and its S-curve principle to accurately locate the origin of each axis. For positioning error compensation, the absolute positions relative to home are calibrated by laser interferometer and stored in an error budget table for feed-forward compensation. Contouring error can thus be compensated if the compensation of both X and Y positioning errors is applied. Experiments show the contouring accuracy can be controlled to within 50 nm after compensation.
Conditional Standard Errors of Measurement for Composite Scores Using IRT
ERIC Educational Resources Information Center
Kolen, Michael J.; Wang, Tianyou; Lee, Won-Chan
2012-01-01
Composite scores are often formed from test scores on educational achievement test batteries to provide a single index of achievement over two or more content areas or two or more item types on that test. Composite scores are subject to measurement error, and as with scores on individual tests, the amount of error variability typically depends on…
Investigation of Measurement Errors in Doppler Global Velocimetry
NASA Technical Reports Server (NTRS)
Meyers, James F.; Lee, Joseph W.
1999-01-01
While the initial development phase of Doppler Global Velocimetry (DGV) has been successfully completed, there remains a critical next phase to be conducted, namely the determination of an error budget to provide quantitative bounds for measurements obtained by this technology. This paper describes a laboratory investigation that consisted of a detailed interrogation of potential error sources to determine their contribution to the overall DGV error budget. A few sources of error were obvious; e.g., iodine vapor adsorption lines, optical systems, and camera characteristics. However, additional non-obvious sources were also discovered; e.g., laser frequency and single-frequency stability, media scattering characteristics, and interference fringes. This paper describes each identified error source, its effect on the overall error budget, and where possible, corrective procedures to reduce or eliminate its effect.
Non-Gaussian Error Distributions of LMC Distance Moduli Measurements
NASA Astrophysics Data System (ADS)
Crandall, Sara; Ratra, Bharat
2015-12-01
We construct error distributions for a compilation of 232 Large Magellanic Cloud (LMC) distance moduli values from de Grijs et al. that give an LMC distance modulus of (m - M)0 = 18.49 ± 0.13 mag (median and 1σ symmetrized error). Central estimates found from weighted mean and median statistics are used to construct the error distributions. The weighted mean error distribution is non-Gaussian—flatter and broader than Gaussian—with more (less) probability in the tails (center) than is predicted by a Gaussian distribution; this could be the consequence of unaccounted-for systematic uncertainties. The median statistics error distribution, which does not make use of the individual measurement errors, is also non-Gaussian—more peaked than Gaussian—with less (more) probability in the tails (center) than is predicted by a Gaussian distribution; this could be the consequence of publication bias and/or the non-independence of the measurements. We also construct the error distributions of 247 SMC distance moduli values from de Grijs & Bono. We find a central estimate of (m - M)0 = 18.94 ± 0.14 mag (median and 1σ symmetrized error), and similar probabilities for the error distributions.
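The weighted mean, median, and number-of-sigma deviations used to build such error distributions can be sketched on synthetic data with honest Gaussian errors (illustrative only; the real compilation is non-Gaussian):

```python
import numpy as np

rng = np.random.default_rng(6)
n_meas, true_mu = 232, 18.49
sig = rng.uniform(0.05, 0.25, n_meas)     # quoted 1-sigma errors (toy values)
mu = true_mu + rng.normal(0.0, sig)       # synthetic compilation, errors honest

w = 1.0 / sig**2
wm = np.sum(w * mu) / np.sum(w)           # weighted mean central estimate
wm_err = 1.0 / np.sqrt(np.sum(w))
med = np.median(mu)                       # median statistics central estimate

# number-of-sigma deviations that make up the empirical error distribution
n_sigma = (mu - wm) / np.sqrt(sig**2 + wm_err**2)
# honest Gaussian errors put ~31.7% of |N_sigma| beyond 1
print(wm, med, np.mean(np.abs(n_sigma) > 1.0))
```

Comparing the empirical tail fractions of |N_sigma| against the Gaussian expectation is how flatter-than-Gaussian (hidden systematics) or more-peaked-than-Gaussian (correlated or publication-biased data) behavior is diagnosed.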
NASA Astrophysics Data System (ADS)
Konings, A. G.; Gruber, A.; Mccoll, K. A.; Alemohammad, S. H.; Entekhabi, D.
2015-12-01
Validating large-scale estimates of geophysical variables by comparing them to in situ measurements neglects the fact that these in situ measurements are not generally representative of the larger area. That is, in situ measurements contain some `representativeness error'. They also have their own sensor errors. The naïve approach of characterizing the errors of a remote sensing or modeling dataset by comparison to in situ measurements thus leads to error estimates that are spuriously inflated by the representativeness and other errors in the in situ measurements. Nevertheless, this naïve approach is still very common in the literature. In this work, we introduce an alternative estimator of the large-scale dataset error that explicitly takes into account the fact that the in situ measurements have some unknown error. The performance of the two estimators is then compared in the context of soil moisture datasets under different conditions for the true soil moisture climatology and dataset biases. The new estimator is shown to lead to a more accurate characterization of the dataset errors under the most common conditions. If a third dataset is available, the principles of the triple collocation method can be used to determine the errors of both the large-scale estimates and in situ measurements. However, triple collocation requires that the errors in all datasets are uncorrelated with each other and with the truth. We show that even when the assumptions of triple collocation are violated, a triple collocation-based validation approach may still be more accurate than a naïve comparison to in situ measurements that neglects representativeness errors.
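A covariance-based triple collocation sketch on synthetic data (assuming, as the method requires, errors mutually uncorrelated and uncorrelated with the truth; all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200000
truth = rng.normal(0.3, 0.1, n)                    # "true" soil moisture
x = truth + rng.normal(0.0, 0.02, n)               # in situ (incl. representativeness error)
y = 0.8 * truth + 0.05 + rng.normal(0.0, 0.04, n)  # satellite retrieval, biased
z = 1.1 * truth - 0.02 + rng.normal(0.0, 0.03, n)  # model estimate, biased

def tc_error_var(a, b, c):
    # covariance-notation triple collocation: error variance of dataset a
    return np.var(a) - (np.cov(a, b)[0, 1] * np.cov(a, c)[0, 1]
                        / np.cov(b, c)[0, 1])

for name, (a, b, c), true_var in [("in situ", (x, y, z), 0.02**2),
                                  ("satellite", (y, x, z), 0.04**2),
                                  ("model", (z, x, y), 0.03**2)]:
    print(name, tc_error_var(a, b, c), true_var)
```

Note that the in situ error variance is recovered along with the others, which is the abstract's point: the in situ record is just another noisy dataset, not the truth.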
Aliasing errors in measurements of beam position and ellipticity
NASA Astrophysics Data System (ADS)
Ekdahl, Carl
2005-09-01
Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.
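The aliasing mechanism can be illustrated with a small simulation of the image-current distribution on the beam-tube wall (a hypothetical four- versus eight-detector comparison, not the DARHT-II geometry):

```python
import numpy as np

def wall_signal(theta, r, phi, R=1.0):
    # Image-current density on the beam-tube wall for a pencil beam
    # (filament) at polar position (r, phi) inside a pipe of radius R
    return (R**2 - r**2) / (R**2 + r**2 - 2 * R * r * np.cos(theta - phi))

def bpm_x(n_det, r, phi, R=1.0):
    # First-harmonic (difference-over-sum style) estimate of the
    # horizontal beam position from n_det equally spaced detectors
    theta = 2 * np.pi * np.arange(n_det) / n_det
    s = wall_signal(theta, r, phi, R)
    return R * np.sum(s * np.cos(theta)) / np.sum(s)

r, phi = 0.4, 0.3                        # beam well off-centre: r/R = 0.4
x_true = r * np.cos(phi)
err_4 = abs(bpm_x(4, r, phi) - x_true)   # standard 4-detector BPM
err_8 = abs(bpm_x(8, r, phi) - x_true)   # extra detectors suppress aliasing
```

Because the aliased azimuthal harmonics enter at order (r/R)^(N-1), doubling the number of detectors shrinks the systematic error dramatically for off-centre beams, consistent with the abstract's conclusion that more than four detectors significantly reduce aliasing errors.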
Error tolerance of topological codes with independent bit-flip and measurement errors
NASA Astrophysics Data System (ADS)
Andrist, Ruben S.; Katzgraber, Helmut G.; Bombin, H.; Martin-Delgado, M. A.
2016-07-01
Topological quantum error correction codes are currently among the most promising candidates for efficiently dealing with the decoherence effects inherently present in quantum devices. Numerically, their theoretical error threshold can be calculated by mapping the underlying quantum problem to a related classical statistical-mechanical spin system with quenched disorder. Here, we present results for the general fault-tolerant regime, where we consider both qubit and measurement errors. However, unlike in previous studies, here we vary the strength of the different error sources independently. Our results highlight peculiar differences between toric and color codes. This study complements previous results published in New J. Phys. 13, 083006 (2011), 10.1088/1367-2630/13/8/083006.
Temperature measurement error simulation of the pure rotational Raman lidar
NASA Astrophysics Data System (ADS)
Jia, Jingyu; Huang, Yong; Wang, Zhirui; Yi, Fan; Shen, Jianglin; Jia, Xiaoxing; Chen, Huabin; Yang, Chuan; Zhang, Mingyang
2015-11-01
Temperature represents the atmospheric thermodynamic state, and measuring the atmospheric temperature accurately and precisely is very important for understanding the physics of atmospheric processes. Lidar has some advantages for atmospheric temperature measurement. Based on the lidar equation and the theory of pure rotational Raman (PRR) scattering, we have simulated the temperature measurement errors of a double-grating-polychromator (DGP) based PRR lidar. First, without considering the atmospheric transmittance and range terms of the lidar equation, we simulated the temperature measurement errors as influenced by the beam-splitting system parameters, such as the center wavelength, the receiving bandwidth and the atmospheric temperature, and analyzed three types of temperature measurement error in theory. We propose several design methods for the beam-splitting system to reduce the temperature measurement errors. Second, we simulated the temperature measurement error profiles using the full lidar equation. Once the lidar power-aperture product is fixed, the main design target of our lidar system is to reduce the statistical and leakage errors.
Measuring worst-case errors in a robot workcell
Simon, R.W.; Brost, R.C.; Kholwadwala, D.K.
1997-10-01
Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors.
Aerial measurement error with a dot planimeter: Some experimental estimates
NASA Technical Reports Server (NTRS)
Yuill, R. S.
1971-01-01
A shape analysis is presented which utilizes a computer to simulate a multiplicity of dot grids mathematically. Results indicate that the number of dots placed over the area to be measured accounts almost entirely for the accuracy of measurement, the indices of shape being of little significance. Equations and graphs are provided from which the average expected error, and the maximum range of error, for various numbers of dot points can be read.
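The dot-grid simulation idea can be reproduced in a few lines (using a hypothetical unit-circle test shape, not the shapes of the original study):

```python
import numpy as np

rng = np.random.default_rng(3)

def dot_count_area(h, rng, half_width=2.0):
    # One dot-planimeter reading: overlay a square dot grid of spacing h,
    # randomly offset, on a unit circle and count the dots falling inside
    off = rng.uniform(0, h, size=2)
    coords = np.arange(-half_width, half_width, h)
    xx, yy = np.meshgrid(coords + off[0], coords + off[1])
    inside = xx**2 + yy**2 < 1.0
    return inside.sum() * h**2       # each dot represents an h x h cell

def grid_statistics(h, trials=200):
    est = np.array([dot_count_area(h, rng) for _ in range(trials)])
    return est.mean(), est.std()

mean_fine, spread_fine = grid_statistics(0.1)     # dense grid
mean_coarse, spread_coarse = grid_statistics(0.3)  # sparse grid
```

With a random grid offset the estimator is unbiased (both means land close to π, the true area), while the spread of individual readings — the measurement error — is governed by the dot density, echoing the abstract's finding that the number of dots, not the shape indices, controls accuracy.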
Space acceleration measurement system triaxial sensor head error budget
NASA Astrophysics Data System (ADS)
Thomas, John E.; Peters, Rex B.; Finley, Brian D.
1992-01-01
The objective of the Space Acceleration Measurement System (SAMS) is to measure and record the microgravity environment for a given experiment aboard the Space Shuttle. To accomplish this, SAMS uses remote triaxial sensor heads (TSH) that can be mounted directly on or near an experiment. The errors of the TSH are reduced by calibrating it before and after each flight. The associated error budget for the calibration procedure is discussed here.
Identification and Minimization of Errors in Doppler Global Velocimetry Measurements
NASA Technical Reports Server (NTRS)
Meyers, James F.; Lee, Joseph W.
2000-01-01
A systematic laboratory investigation was conducted to identify potential measurement error sources in Doppler Global Velocimetry technology. Once identified, methods were developed to eliminate or at least minimize the effects of these errors. The areas considered included the Iodine vapor cell, optical alignment, scattered light characteristics, noise sources, and the laser. Upon completion the demonstrated measurement uncertainty was reduced to 0.5 m/sec.
Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes
ERIC Educational Resources Information Center
Zavorsky, Gerald S.
2010-01-01
Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…
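The repeatability coefficient mentioned in the abstract is straightforward to compute; a minimal sketch with made-up data:

```python
import numpy as np

# Hypothetical data: two repeated measurements on each of five subjects
reps = np.array([
    [12.1, 12.5],
    [10.8, 11.2],
    [13.0, 12.6],
    [11.5, 11.5],
    [12.2, 12.8],
])

# Within-subject SD: root of the mean within-subject variance
# (for two replicates this equals sqrt(mean(d**2) / 2), d = difference)
sw = np.sqrt(np.mean(np.var(reps, axis=1, ddof=1)))

# Repeatability = 2.77 * Sw: the value below which the absolute difference
# between two measurements on the same subject is expected to fall about
# 95% of the time (2.77 = sqrt(2) * 1.96)
repeatability = 2.77 * sw
```
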
Measurement error caused by spatial misalignment in environmental epidemiology
Gryparis, Alexandros; Paciorek, Christopher J.; Zeka, Ariana; Schwartz, Joel; Coull, Brent A.
2009-01-01
In many environmental epidemiology studies, the locations and/or times of exposure measurements and health assessments do not match. In such settings, health effects analyses often use the predictions from an exposure model as a covariate in a regression model. Such exposure predictions contain some measurement error as the predicted values do not equal the true exposures. We provide a framework for spatial measurement error modeling, showing that smoothing induces a Berkson-type measurement error with nondiagonal error structure. From this viewpoint, we review the existing approaches to estimation in a linear regression health model, including direct use of the spatial predictions and exposure simulation, and explore some modified approaches, including Bayesian models and out-of-sample regression calibration, motivated by measurement error principles. We then extend this work to the generalized linear model framework for health outcomes. Based on analytical considerations and simulation results, we compare the performance of all these approaches under several spatial models for exposure. Our comparisons underscore several important points. First, exposure simulation can perform very poorly under certain realistic scenarios. Second, the relative performance of the different methods depends on the nature of the underlying exposure surface. Third, traditional measurement error concepts can help to explain the relative practical performance of the different methods. We apply the methods to data on the association between levels of particulate matter and birth weight in the greater Boston area. PMID:18927119
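As a generic illustration of the measurement-error principles invoked here — classical error attenuating a health-effect estimate, and regression calibration correcting it (this is a simplified non-spatial sketch, not the spatial model of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
beta = 2.0                              # true exposure effect

x = rng.normal(0.0, 1.0, n)             # true exposure
w = x + rng.normal(0.0, 1.0, n)         # exposure with classical error (var 1)
y = beta * x + rng.normal(0.0, 1.0, n)  # health outcome

# Naive regression of y on the error-prone w: slope attenuated by
# lambda = var(x) / (var(x) + var(u)) = 0.5 here
b_naive = np.cov(w, y)[0, 1] / np.var(w)

# Regression calibration: replace w by E[x | w] (error variance assumed
# known, e.g. from a validation substudy), then refit
lam = (np.var(w) - 1.0) / np.var(w)
x_hat = w.mean() + lam * (w - w.mean())
b_rc = np.cov(x_hat, y)[0, 1] / np.var(x_hat)
```

The naive slope is roughly halved, while the calibrated slope recovers the true effect — the kind of behavior the traditional measurement-error concepts in the abstract help explain.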
Methods to Assess Measurement Error in Questionnaires of Sedentary Behavior
Sampson, Joshua N; Matthews, Charles E; Freedman, Laurence; Carroll, Raymond J.; Kipnis, Victor
2015-01-01
Sedentary behavior has already been associated with mortality, cardiovascular disease, and cancer. Questionnaires are an affordable tool for measuring sedentary behavior in large epidemiological studies. Here, we introduce and evaluate two statistical methods for quantifying measurement error in questionnaires; accurate estimates are needed for assessing questionnaire quality. The two methods are designed for validation studies that measure a sedentary behavior by both questionnaire and accelerometer on multiple days. The first method fits a reduced model by assuming the accelerometer is without error, while the second fits a more complete model that allows both measures to have error. Because accelerometers tend to be highly accurate, we show that ignoring the accelerometer's measurement error can result in more accurate estimates of measurement error in some scenarios. In this manuscript, we derive asymptotic approximations for the mean-squared error of the estimated parameters from both methods, evaluate their dependence on study design and behavior characteristics, and offer an R package so investigators can make an informed choice between the two methods. We demonstrate the difference between the two methods in a recent validation study comparing Previous Day Recalls (PDR) to an accelerometer-based ActivPal. PMID:27340315
Error-tradeoff and error-disturbance relations for incompatible quantum measurements.
Branciard, Cyril
2013-04-23
Heisenberg's uncertainty principle is one of the main tenets of quantum theory. Nevertheless, and despite its fundamental importance for our understanding of quantum foundations, there has been some confusion in its interpretation: Although Heisenberg's first argument was that the measurement of one observable on a quantum state necessarily disturbs another incompatible observable, standard uncertainty relations typically bound the indeterminacy of the outcomes when either one or the other observable is measured. In this paper, we quantify precisely Heisenberg's intuition. Even if two incompatible observables cannot be measured together, one can still approximate their joint measurement, at the price of introducing some errors with respect to the ideal measurement of each of them. We present a tight relation characterizing the optimal tradeoff between the error on one observable vs. the error on the other. As a particular case, our approach allows us to characterize the disturbance of an observable induced by the approximate measurement of another one; we also derive a stronger error-disturbance relation for this scenario. PMID:23564344
Errors Associated with the Direct Measurement of Radionuclides in Wounds
Hickman, D P
2006-03-02
Work in radiation areas can occasionally result in accidental wounds containing radioactive materials. When a wound is incurred within a radiological area, the presence of radioactivity in the wound needs to be confirmed to determine if additional remedial action needs to be taken. Commonly used radiation area monitoring equipment is poorly suited for measurement of radioactive material buried within the tissue of the wound. The Lawrence Livermore National Laboratory (LLNL) In Vivo Measurement Facility has constructed a portable wound counter that provides sufficient detection of radioactivity in wounds as shown in Fig. 1. The LLNL wound measurement system is specifically designed to measure low energy photons that are emitted from uranium and transuranium radionuclides. The portable wound counting system uses a 2.5 cm diameter by 1 mm thick NaI(Tl) detector. The detector is connected to a Canberra NaI InSpector™. The InSpector interfaces with an IBM ThinkPad laptop computer, which operates under Genie 2000 software. The wound counting system is maintained and used at the LLNL In Vivo Measurement Facility. The hardware is designed to be portable and is occasionally deployed to respond to the LLNL Health Services facility or local hospitals for examination of personnel that may have radioactive materials within a wound. The typical detection level using the LLNL portable wound counter in a low background area is 0.4 nCi to 0.6 nCi, assuming a near zero mass source. This paper documents the systematic errors associated with in vivo measurement of radioactive materials buried within wounds using the LLNL portable wound measurement system. These errors are divided into two basic categories: calibration errors and in vivo wound measurement errors. Within these categories, there are errors associated with particle self-absorption of photons, overlying tissue thickness, source distribution within the wound, and count errors. These errors have been examined and
Detection and Classification of Measurement Errors in Bioimpedance Spectroscopy.
Ayllón, David; Gil-Pita, Roberto; Seoane, Fernando
2016-01-01
Bioimpedance spectroscopy (BIS) measurement errors may be caused by parasitic stray capacitance, impedance mismatch, cross-talking or their very likely combination. An accurate detection and identification is of extreme importance for further analysis because in some cases and for some applications, certain measurement artifacts can be corrected, minimized or even avoided. In this paper we present a robust method to detect the presence of measurement artifacts and identify what kind of measurement error is present in BIS measurements. The method is based on supervised machine learning and uses a novel set of generalist features for measurement characterization in different immittance planes. Experimental validation has been carried out using a database of complex spectra BIS measurements obtained from different BIS applications and containing six different types of errors, as well as error-free measurements. The method obtained a low classification error (0.33%) and has shown good generalization. Since both the features and the classification schema are relatively simple, the implementation of this pre-processing task in the current hardware of bioimpedance spectrometers is possible. PMID:27362862
NASA Astrophysics Data System (ADS)
Fratini, G.; McDermitt, D. K.; Papale, D.
2013-08-01
Errors in gas concentration measurements by infrared gas analysers can occur during eddy-covariance campaigns, associated with actual or apparent instrumental drifts or with biases due to thermal expansion, dirt contamination, aging of components or errors in field operations. If occurring on long time scales (hours to days), these errors are normally ignored during flux computation, under the assumption that errors in mean gas concentrations do not affect the estimation of turbulent fluctuations and, hence, of covariances. By analysing instrument theory of operation, and using numerical simulations and field data, we show that this is not the case for instruments with curvilinear calibrations; we further show that, if not appropriately accounted for, concentration biases can lead to roughly proportional systematic flux errors, where the fractional errors in fluxes are about 30-40% of the fractional errors in concentrations. We quantify these errors and characterize their dependency on the main determinants. We then propose a correction procedure that largely - potentially completely - eliminates these errors. The correction, to be applied during flux computation, is based on knowledge of instrument calibration curves and on field or laboratory calibration data. Finally, we demonstrate the occurrence of such errors and validate the correction procedure by means of a field experiment, and accordingly provide recommendations for in situ operations. The correction described in this paper will soon be available in the EddyPro software (www.licor.com/eddypro).
Measurement uncertainty evaluation of conicity error inspected on CMM
NASA Astrophysics Data System (ADS)
Wang, Dongxia; Song, Aiguo; Wen, Xiulan; Xu, Youxiong; Qiao, Guifang
2016-01-01
The cone is widely used in mechanical design for rotation, centering and fixing. Whether the conicity error can be measured and evaluated accurately will directly influence assembly accuracy and working performance. According to the new-generation geometrical product specification (GPS), the error and its measurement uncertainty should be evaluated together. The mathematical model of the minimum zone conicity error is established and an improved immune evolutionary algorithm (IIEA) is proposed to search for the conicity error. In the IIEA, initial antibodies are first generated by using quasi-random sequences and two kinds of affinities are calculated. Then, each antibody clone is generated and self-adaptively mutated so as to maintain diversity. Similar antibodies are suppressed and new random antibodies are generated. Because the mathematical model of the conicity error is strongly nonlinear and the input quantities are not independent, it is difficult to use the Guide to the expression of uncertainty in measurement (GUM) method to evaluate measurement uncertainty. An adaptive Monte Carlo method (AMCM) is proposed to estimate measurement uncertainty, in which the number of Monte Carlo trials is selected adaptively and the quality of the numerical results is directly controlled. A cone part was machined on a CK6140 lathe and measured on a Miracle NC 454 coordinate measuring machine (CMM). The experimental results confirm that the proposed method not only can search for the approximate solution of the minimum zone conicity error (MZCE) rapidly and precisely, but also can evaluate measurement uncertainty and give control variables with an expected numerical tolerance. The conicity errors computed by the proposed method are 20%-40% smaller than those computed by the NC454 CMM software, and the evaluation accuracy improves significantly.
Laser tracker error determination using a network measurement
NASA Astrophysics Data System (ADS)
Hughes, Ben; Forbes, Alistair; Lewis, Andrew; Sun, Wenjuan; Veal, Dan; Nasr, Karim
2011-04-01
We report on a fast, easily implemented method to determine all the geometrical alignment errors of a laser tracker, to high precision. The technique requires no specialist equipment and can be performed in less than an hour. The technique is based on the determination of parameters of a geometric model of the laser tracker, using measurements of a set of fixed target locations, from multiple locations of the tracker. After fitting of the model parameters to the observed data, the model can be used to perform error correction of the raw laser tracker data or to derive correction parameters in the format of the tracker manufacturer's internal error map. In addition to determination of the model parameters, the method also determines the uncertainties and correlations associated with the parameters. We have tested the technique on a commercial laser tracker in the following way. We disabled the tracker's internal error compensation, and used a five-position, fifteen-target network to estimate all the geometric errors of the instrument. Using the error map generated from this network test, the tracker was able to pass a full performance validation test, conducted according to a recognized specification standard (ASME B89.4.19-2006). We conclude that the error correction determined from the network test is as effective as the manufacturer's own error correction methodologies.
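The parameter-fitting step can be sketched in miniature (a hypothetical 2-D instrument with only two error parameters and known reference targets — far simpler than the full tracker model and unknown-target network of the paper):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)

# Known reference-target coordinates (metres) and true instrument errors:
# an azimuth zero offset (rad) and a range scale error (dimensionless)
targets = np.array([[3.0, 1.0], [-2.0, 4.0], [1.0, -5.0], [4.0, 3.0]])
delta_true, scale_true = 2e-3, 1.0 + 1e-4

r_true = np.hypot(targets[:, 0], targets[:, 1])
az_true = np.arctan2(targets[:, 1], targets[:, 0])

# Simulated observations, corrupted by the instrument errors plus noise
r_obs = scale_true * r_true + rng.normal(0, 1e-5, len(targets))
az_obs = az_true + delta_true + rng.normal(0, 1e-5, len(targets))

def residuals(p):
    # Misfit between the observations and the geometric error model
    delta, scale = p
    return np.concatenate([r_obs - scale * r_true,
                           az_obs - (az_true + delta)])

fit = least_squares(residuals, x0=[0.0, 1.0])
delta_hat, scale_hat = fit.x
```

The real method solves simultaneously for tracker poses, target positions, and the full set of geometric error parameters, and additionally propagates parameter uncertainties and correlations, but the least-squares structure is the same.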
Errors and correction of precipitation measurements in China
NASA Astrophysics Data System (ADS)
Ren, Zhihua; Li, Mingqin
2007-05-01
In order to discover the range of various errors in Chinese precipitation measurements and seek a correction method, 30 precipitation evaluation stations were set up countrywide before 1993. All the stations are reference stations in China. To seek a correction method for wind-induced error, a precipitation correction instrument called the “horizontal precipitation gauge” was devised beforehand. Field intercomparison observations regarding 29,000 precipitation events have been conducted using one pit gauge, two elevated operational gauges and one horizontal gauge at the above 30 stations. The range of precipitation measurement errors in China is obtained by analysis of intercomparison measurement results. The distribution of random errors and systematic errors in precipitation measurements are studied in this paper. A correction method, especially for wind-induced errors, is developed. The results prove that a correlation of power function exists between the precipitation amount caught by the horizontal gauge and the absolute difference of observations implemented by the operational gauge and pit gauge. The correlation coefficient is 0.99. For operational observations, precipitation correction can be carried out only by parallel observation with a horizontal precipitation gauge. The precipitation accuracy after correction approaches that of the pit gauge. The correction method developed is simple and feasible.
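The power-function relationship described above can be fitted by linear regression in log space; a sketch with hypothetical event data (invented numbers consistent with a power law, not the 29,000-event dataset):

```python
import numpy as np

# Hypothetical events: catch of the horizontal gauge (g_h) and the absolute
# difference between the elevated operational gauge and the pit gauge (d),
# both in mm
g_h = np.array([0.5, 1.2, 2.0, 3.5, 5.0, 8.0])
d   = np.array([0.21, 0.45, 0.70, 1.15, 1.55, 2.35])

# Fit the power function d = a * g_h**b by least squares on the logs
b, log_a = np.polyfit(np.log(g_h), np.log(d), 1)
a = np.exp(log_a)

def corrected(p_operational, g_h_event):
    # Add back the estimated wind-induced undercatch (hypothetical usage)
    return p_operational + a * g_h_event**b
```

In the study the corresponding correlation coefficient reached 0.99, so a parallel horizontal-gauge observation suffices to bring operational measurements close to pit-gauge accuracy.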
Angular bias errors in three-component laser velocimeter measurements
Chen, C.Y.; Kim, P.J.; Walker, D.T.
1996-09-01
For three-component laser velocimeter systems, the change in projected area of the coincident measurement volume for different flow directions will introduce an angular bias in naturally sampled data. In this study, the effect of turbulence level and orientation of the measurement volumes on angular bias errors was examined. The operation of a typical three-component laser velocimeter was simulated using a Monte Carlo technique. Results for the specific configuration examined show that for turbulence levels less than 10% no significant bias errors in the mean velocities will occur and errors in the root-mean-square (r.m.s.) velocities will be less than 3% for all orientations. For turbulence levels less than 30%, component mean velocity bias errors less than 5% of the mean velocity vector magnitude can be attained with proper orientation of the measurement volume; however, the r.m.s. velocities may be in error as much as 10%. For turbulence levels above 50%, there is no orientation which will yield accurate estimates of all three mean velocities; component mean velocity errors as large as 15% of the mean velocity vector magnitude may be encountered.
Correcting a fundamental error in greenhouse gas accounting related to bioenergy
Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc; Cocco, Pierluigi; Desaubies, Yves; Henze, Mogens; Hertel, Ole; Johnson, Richard K.; Kastrup, Ulrike; Laconte, Pierre; Lange, Eckart; Novak, Peter; Paavola, Jouni; Reenberg, Anette; van den Hove, Sybille; Vermeire, Theo; Wadhams, Peter; Searchinger, Timothy
2012-01-01
Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of ‘additional biomass’ – biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy – can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy. PMID:23576835
Electrochemically modulated separations for material accountability measurements
Hazelton, Sandra G.; Liezers, Martin; Naes, Benjamin E.; Arrigo, Leah M.; Duckworth, Douglas C.
2012-07-08
A method for the accurate and timely analysis of accountable materials is critical for safeguards measurements in nuclear fuel reprocessing plants. Non-destructive analysis (NDA) methods, such as gamma spectroscopy, are desirable for their ability to produce near real-time data. However, the high gamma background of the actinides and fission products in spent nuclear fuel limits the use of NDA for real-time online measurements. A simple approach for at-line separation of materials would facilitate the use of at-line detection methods. A promising at-line separation method for plutonium and uranium is electrochemically modulated separations (EMS). Using an electrochemical cell with an anodized glassy carbon electrode, Pu and U oxidation states can be altered by applying an appropriate voltage. Because the affinity of the actinides for the electrode depends on their oxidation states, selective deposition can be turned “on” and “off” with changes in the applied target electrode voltage. A high surface-area cell was designed in-house for the separation of Pu from spent nuclear fuel. The cell is shown to capture over 1 µg of material, increasing the likelihood of gamma spectroscopic detection of Pu extracted from dissolver solutions. The large surface area of the electrode also reduces the impact of competitive interferences from some fission products. Flow rates of up to 1 mL min⁻¹ with >50% analyte deposition efficiency are possible, allowing for rapid separations to be effected. Results from the increased surface-area EMS cell are presented, including dilute dissolver solution simulant data.
A new indirect measure of diffusion model error
Kumar, A.; Morel, J. E.; Adams, M. L.
2013-07-01
We define a new indirect measure of the diffusion model error called the diffusion model error source. When this model error source is added to the diffusion equation, the transport solution for the angular-integrated intensity is obtained. This source represents a means by which a transport code can be used to generate information relating to the adequacy of diffusion theory for any given problem without actually solving the diffusion equation. The generation of this source does not relate in any way to acceleration of the iterative convergence of transport solutions. Perhaps the most well-known indirect measure of the diffusion model error is the variable-Eddington tensor. This tensor provides a great deal of information about the angular dependence of the angular intensity solution, but it is not always simple to interpret. In contrast, our diffusion model error source is a scalar that is conceptually easy to understand. In addition to defining the diffusion model error source analytically, we show how to generate this source numerically relative to the S_n radiative transfer equations with linear-discontinuous spatial discretization. This numerical source is computationally tested and shown to reproduce the S_n solution for a Marshak-wave problem.
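In symbols (using assumed notation, since the abstract does not fix one): if \(\phi_T\) denotes the angle-integrated transport solution, one way to write the diffusion model error source is as the residual of the diffusion operator applied to \(\phi_T\):

```latex
% Diffusion equation for the scalar intensity \phi with physical source Q_0:
-\nabla\cdot\bigl(D\,\nabla\phi\bigr) + \sigma_a\,\phi = Q_0 .
% Diffusion model error source: the residual left when the transport
% scalar solution \phi_T is inserted into the diffusion operator,
q_e \;\equiv\; -\nabla\cdot\bigl(D\,\nabla\phi_T\bigr) + \sigma_a\,\phi_T - Q_0 ,
% so that the diffusion equation with source Q_0 + q_e is satisfied exactly
% by \phi_T, and q_e \approx 0 wherever diffusion theory is adequate.
```

Being a scalar field computable directly from transport output, \(q_e\) can be inspected without ever solving the diffusion equation, which is the point of the indirect measure.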
Error Evaluation of Methyl Bromide Aerodynamic Flux Measurements
Majewski, M.S.
1997-01-01
Methyl bromide volatilization fluxes were calculated for a tarped and a nontarped field using 2 and 4 hour sampling periods. These field measurements were averaged in 8, 12, and 24 hour increments to simulate longer sampling periods. The daily flux profiles were progressively smoothed and the cumulative volatility losses increased by 20 to 30% with each longer sampling period. Error associated with the original flux measurements was determined from linear regressions of measured wind speed and air concentration as a function of height, and averaged approximately 50%. The high errors resulted from long application times, which produced a nonuniform source strength, and from variable tarp permeability, which is influenced by temperature, moisture, and thickness. The increases in cumulative volatilization losses that resulted from longer sampling periods were within the experimental error of the flux determination method.
50 CFR 648.103 - Summer flounder accountability measures.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Summer flounder accountability measures... Management Measures for the Summer Flounder Fisheries § 648.103 Summer flounder accountability measures. (a... subsequent single fishing year recreational sector ACT. (d) Non-landing accountability measures, by...
NASA Astrophysics Data System (ADS)
Noble, Jack H.; Warren, Frank M.; Labadie, Robert F.; Dawant, Benoit; Fitzpatrick, J. Michael
2007-03-01
In cochlear implant surgery an electrode array is permanently implanted to stimulate the auditory nerve and allow deaf people to hear. Current surgical techniques require wide excavation of the mastoid region of the temporal bone and one to three hours of time to avoid damage to vital structures. Recently a far less invasive approach has been proposed: percutaneous cochlear access, in which a single hole is drilled from the skull surface to the cochlea. The drill path is determined by attaching a fiducial system to the patient's skull and then choosing, on a pre-operative CT, an entry point and a target point. The drill is advanced to the target, the electrodes placed through the hole, and a stimulator implanted at the surface of the skull. The major challenge is the determination of a safe and effective drill path, which with high probability avoids specific vital structures (the facial nerve, the ossicles, and the external ear canal) and arrives at the basal turn of the cochlea. These four features lie within a few millimeters of each other, the drill is one millimeter in diameter, and errors in the determination of the target position are on the order of 0.5 mm root-mean-square. Thus, path selection is both difficult and critical to the success of the surgery. This paper presents a method for finding optimally safe and effective paths while accounting for target positioning error.
Objective and Subjective Refractive Error Measurements in Monkeys
Hung, Li-Fang; Ramamirtham, Ramkumar; Wensveen, Janice M.; Harwerth, Ronald S.; Smith, Earl L.
2011-01-01
Purpose To better understand the functional significance of refractive-error measures obtained using common objective methods in laboratory animals, we compared objective and subjective measures of refractive error in adolescent rhesus monkeys. Methods The subjects were 20 adolescent monkeys. Spherical-equivalent spectacle-plane refractive corrections were measured by retinoscopy and autorefraction while the animals were cyclopleged and anesthetized. The eye’s axial dimensions were measured by A-Scan ultrasonography. Subjective measures of the eye’s refractive state, with and without cycloplegia, were obtained using psychophysical methods. Specifically, we measured spatial contrast sensitivity as a function of spectacle lens power for relatively high spatial frequency gratings. The lens power that produced the highest contrast sensitivity was taken as the subjective refraction. Results Retinoscopy and autorefraction consistently yielded higher amounts of hyperopia relative to subjective measurements obtained with or without cycloplegia. The subjective refractions were not affected by cycloplegia and on average were 1.42 ± 0.61 D and 1.24 ± 0.62 D less hyperopic than the retinoscopy and autorefraction measurements, respectively. Repeating the retinoscopy and subjective measurements through 3 mm artificial pupils produced similar differences. Conclusions The results show that commonly used objective methods for assessing refractive errors in monkeys significantly overestimate the degree of hyperopia. It is likely that multiple factors contributed to the hyperopic bias associated with these objective measurements. However, the magnitude of the hyperopic bias was in general agreement with the “small-eye artifact” of retinoscopy. PMID:22198796
Cumulative Measurement Errors for Dynamic Testing of Space Flight Hardware
NASA Technical Reports Server (NTRS)
Winnitoy, Susan
2012-01-01
measurements during hardware motion and contact. While performing dynamic testing of an active docking system, researchers found that the data from the motion platform, test hardware and two external measurement systems exhibited frame offsets and rotational errors. While the errors were relatively small when considering the motion scale overall, they substantially exceeded the individual accuracies for each component. After evaluating both the static and dynamic measurements, researchers found that the static measurements introduced significantly more error into the system than the dynamic measurements even though, in theory, the static measurement errors should be smaller than the dynamic. In several cases, the magnitude of the errors varied widely for the static measurements. Upon further investigation, researchers found the larger errors to be a consequence of hardware alignment issues, frame location and measurement technique whereas the smaller errors were dependent on the number of measurement points. This paper details and quantifies the individual and cumulative errors of the docking system and describes methods for reducing the overall measurement error. The overall quality of the dynamic docking tests for flight hardware verification was improved by implementing these error reductions.
Wave-front measurement errors from restricted concentric subdomains.
Goldberg, K A; Geary, K
2001-09-01
In interferometry and optical testing, system wave-front measurements that are analyzed on a restricted subdomain of the full pupil can include predictable systematic errors. In nearly all cases, the measured rms wave-front error and the magnitudes of the individual aberration polynomial coefficients underestimate the wave-front error magnitudes present in the full-pupil domain. We present an analytic method to determine the relationships between the coefficients of aberration polynomials defined on the full-pupil domain and those defined on a restricted concentric subdomain. In this way, systematic wave-front measurement errors introduced by subregion selection are investigated. Using vector and matrix representations for the wave-front aberration coefficients, we generalize the method to the study of arbitrary input wave fronts and subdomain sizes. While wave-front measurements on a restricted subdomain are insufficient for predicting the wave front of the full-pupil domain, studying the relationship between known full-pupil wave fronts and subdomain wave fronts allows us to set subdomain size limits for arbitrary measurement fidelity. PMID:11551047
Optimal measurement strategies for effective suppression of drift errors.
Yashchuk, Valeriy V
2009-11-01
Drifting of experimental setups with change in temperature or other environmental conditions is the limiting factor of many, if not all, precision measurements. The measurement error due to a drift is, in some sense, in-between random noise and systematic error. In the general case, the error contribution of a drift cannot be averaged out using a number of measurements identically carried out over a reasonable time. In contrast to systematic errors, drifts are usually not stable enough for a precise calibration. Here a rather general method for effective suppression of the spurious effects caused by slow drifts in a large variety of instruments and experimental setups is described. An analytical derivation of an identity, describing the optimal measurement strategies suitable for suppressing the contribution of a slow drift described with a certain order polynomial function, is presented. A recursion rule as well as a general mathematical proof of the identity is given. The effectiveness of the discussed method is illustrated with an application of the derived optimal scanning strategies to precise surface slope measurements with a surface profiler. PMID:19947751
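The kind of measurement-sequence optimization described can be illustrated with the simplest case: the classic ABBA ordering, which cancels a linear drift exactly when comparing two quantities. This is a sketch with invented numbers, not the paper's general polynomial-drift result:

```python
# Sketch: comparing two quantities A and B in the presence of a linear drift
# d(t) = drift_rate * t. The naive AB ordering leaves a drift term in the
# difference; the symmetric ABBA ordering cancels a linear drift exactly.

def measure(true_value, t, drift_rate=0.05):
    """Simulated reading: true value plus a linear instrumental drift."""
    return true_value + drift_rate * t

A, B = 10.0, 10.3        # true values; the quantity of interest is B - A = 0.3

# AB ordering: A at t=0, B at t=1 -> difference contaminated by drift
naive = measure(B, 1) - measure(A, 0)

# ABBA ordering: A(t=0), B(t=1), B(t=2), A(t=3); average the two differences
abba = 0.5 * ((measure(B, 1) - measure(A, 0)) + (measure(B, 2) - measure(A, 3)))

print(round(naive, 3))   # 0.35 (true difference 0.3 plus drift bias 0.05)
print(round(abba, 3))    # 0.3  (linear drift cancels)
```

Higher-order polynomial drifts require longer symmetric sequences, which is what the derived identity and recursion rule in the paper generalize.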
Estimation of discretization errors in contact pressure measurements.
Fregly, Benjamin J; Sawyer, W Gregory
2003-04-01
Contact pressure measurements in total knee replacements are often made using a discrete sensor such as the Tekscan K-Scan sensor. However, no method currently exists for predicting the magnitude of sensor discretization errors in contact force, peak pressure, average pressure, and contact area, making it difficult to evaluate the accuracy of such measurements. This study identifies a non-dimensional area variable, defined as the ratio of the number of perimeter elements to the total number of elements with pressure, which can be used to predict these errors. The variable was evaluated by simulating discrete pressure sensors subjected to Hertzian and uniform pressure distributions with two different calibration procedures. The simulations systematically varied the size of the sensor elements, the contact ellipse aspect ratio, and the ellipse's location on the sensor grid. In addition, contact pressure measurements made with a K-Scan sensor on four different total knee designs were used to evaluate the magnitude of discretization errors under practical conditions. The simulations predicted a strong power law relationship (r² > 0.89) between worst-case discretization errors and the proposed non-dimensional area variable. In the total knee experiments, predicted discretization errors were on the order of 1-4% for contact force and peak pressure and 3-9% for average pressure and contact area. These errors are comparable to those arising from inserting a sensor into the joint space or truncating pressures with pressure sensitive film. The reported power law regression coefficients provide a simple way to estimate the accuracy of experimental measurements made with discrete pressure sensors when the contact patch is approximately elliptical. PMID:12600352
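A sketch of the proposed non-dimensional area variable, the ratio of perimeter elements to all elements registering pressure, evaluated for an elliptical contact patch on a hypothetical sensor grid. The ellipse size and grid dimensions are invented, and the power-law error coefficients themselves come from the paper's simulations and are not reproduced here:

```python
# Count perimeter elements (pressured cells with at least one unpressured
# 4-neighbour) versus all pressured cells for an elliptical contact patch.

def pressured(i, j, a=9.5, b=5.5, ci=15, cj=15):
    """True if grid element (i, j) lies inside the (hypothetical) contact ellipse."""
    return ((i - ci) / a) ** 2 + ((j - cj) / b) ** 2 <= 1.0

n = 31  # grid is n x n sensor elements
cells = {(i, j) for i in range(n) for j in range(n) if pressured(i, j)}
perimeter = {
    (i, j) for (i, j) in cells
    if not all((i + di, j + dj) in cells
               for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1)])
}

# A larger ratio (coarser grid relative to the patch) predicts larger
# worst-case discretization errors in the paper's power-law relationship.
ratio = len(perimeter) / len(cells)
print(f"{len(perimeter)} perimeter / {len(cells)} total = {ratio:.2f}")
```

Refining the grid (larger `a`, `b`, `n` in proportion) drives the ratio, and hence the predicted error, toward zero.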
The effect of measurement error on surveillance metrics
Weaver, Brian Phillip; Hamada, Michael S.
2012-04-24
The purpose of this manuscript is to describe different simulation studies that CCS-6 has performed to understand the effects of measurement error on the surveillance metrics. We assume that the measured items come from a larger population of items. We denote the random variable associated with an item's value of an attribute of interest as X, and assume X ~ N(μ, σ²). This distribution represents the variability in the population of interest, and we wish to make inference on the parameters μ and σ or on some function of these parameters. When an item X is selected from the larger population, a measurement is made on some attribute of it. This measurement is made with error, and the true value of X is not observed. The rest of this section presents simulation results for the different measurement cases encountered.
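A minimal sketch of the setting described: items X ~ N(μ, σ²) are measured with additive Gaussian error, so the observed values have variance inflated by the error variance, which is what biases inference on σ. All parameter values here are invented:

```python
# Simulate a population X ~ N(mu, sigma^2) measured with error e ~ N(0, tau^2).
# The observed variance is close to sigma^2 + tau^2, not sigma^2.
import random

random.seed(1)
mu, sigma, tau = 100.0, 4.0, 3.0
n = 50_000

true_x = [random.gauss(mu, sigma) for _ in range(n)]
observed = [x + random.gauss(0.0, tau) for x in true_x]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

print(f"{var(true_x):.1f}")    # close to sigma^2 = 16
print(f"{var(observed):.1f}")  # close to sigma^2 + tau^2 = 25
```

Inference on σ that ignores the measurement error therefore overstates the population spread, which is the effect such simulation studies quantify.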
Three Approximations of Standard Error of Measurement: An Empirical Approach.
ERIC Educational Resources Information Center
Garvin, Alfred D.
Three successively simpler formulas for approximating the standard error of measurement were derived by applying successively more simplifying assumptions to the standard formula based on the standard deviation and the Kuder-Richardson formula 20 estimate of reliability. The accuracy of each of these three formulas, with respect to the standard…
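The standard formula that the three approximations simplify is SEM = SD·√(1 − reliability), with reliability estimated by KR-20. A sketch with invented values; the three derived approximations themselves are not reproduced here:

```python
import math

def sem(sd, reliability):
    """Standard error of measurement from the score SD and a reliability
    estimate (e.g. KR-20)."""
    return sd * math.sqrt(1.0 - reliability)

# Hypothetical test: SD of 10 score points, KR-20 reliability of 0.91.
print(round(sem(10.0, 0.91), 2))  # 3.0
```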
GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS
Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...
Nonparametric Item Response Curve Estimation with Correction for Measurement Error
ERIC Educational Resources Information Center
Guo, Hongwen; Sinharay, Sandip
2011-01-01
Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally.…
Correcting for Measurement Error in Time-Varying Covariates in Marginal Structural Models.
Kyle, Ryan P; Moodie, Erica E M; Klein, Marina B; Abrahamowicz, Michał
2016-08-01
Unbiased estimation of causal parameters from marginal structural models (MSMs) requires a fundamental assumption of no unmeasured confounding. Unfortunately, the time-varying covariates used to obtain inverse probability weights are often error-prone. Although substantial measurement error in important confounders is known to undermine control of confounders in conventional unweighted regression models, this issue has received comparatively limited attention in the MSM literature. Here we propose a novel application of the simulation-extrapolation (SIMEX) procedure to address measurement error in time-varying covariates, and we compare 2 approaches. The direct approach to SIMEX-based correction targets outcome model parameters, while the indirect approach corrects the weights estimated using the exposure model. We assess the performance of the proposed methods in simulations under different clinically plausible assumptions. The simulations demonstrate that measurement errors in time-dependent covariates may induce substantial bias in MSM estimators of causal effects of time-varying exposures, and that both proposed SIMEX approaches yield practically unbiased estimates in scenarios featuring low-to-moderate degrees of error. We illustrate the proposed approach in a simple analysis of the relationship between sustained virological response and liver fibrosis progression among persons infected with hepatitis C virus, while accounting for measurement error in γ-glutamyltransferase, using data collected in the Canadian Co-infection Cohort Study from 2003 to 2014. PMID:27416840
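The SIMEX idea can be sketched on a plain errors-in-covariate regression rather than the marginal-structural-model setting of the paper: add extra simulated error scaled by λ, track how the slope attenuates, and extrapolate back to λ = −1, which corresponds to no measurement error. All parameter values are invented:

```python
# SIMEX sketch: simulation step over a lambda grid, then quadratic
# extrapolation of the attenuated slope back to lambda = -1.
import numpy as np

rng = np.random.default_rng(2)
beta, tau = 2.0, 0.8            # true slope; measurement-error SD (assumed known)
n, B = 5000, 50                 # sample size; SIMEX replications per lambda

x = rng.normal(0, 1, n)                 # true covariate (unobserved)
w = x + rng.normal(0, tau, n)           # error-prone observed covariate
y = beta * x + rng.normal(0, 1, n)

def slope(u, v):
    return np.cov(u, v)[0, 1] / np.var(u, ddof=1)

naive = slope(w, y)                     # attenuated toward zero

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
avg = [np.mean([slope(w + rng.normal(0, np.sqrt(lam) * tau, n), y)
                for _ in range(B)])
       for lam in lambdas]

simex = np.polyval(np.polyfit(lambdas, avg, 2), -1.0)
print(f"naive {naive:.2f}, SIMEX {simex:.2f}, true {beta}")
```

With these settings the naive slope sits near βσ²/(σ² + τ²) ≈ 1.2, and the quadratic extrapolation recovers most, though not all, of the attenuation.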
Comparing measurement errors for formants in synthetic and natural vowels.
Shadle, Christine H; Nam, Hosung; Whalen, D H
2016-02-01
The measurement of formant frequencies of vowels is among the most common measurements in speech studies, but measurements are known to be biased by the particular fundamental frequency (F0) exciting the formants. Approaches to reducing the errors were assessed in two experiments. In the first, synthetic vowels were constructed with five different first formant (F1) values and nine different F0 values; formant bandwidths, and higher formant frequencies, were constant. Input formant values were compared to manual measurements and automatic measures using the linear prediction coding-Burg algorithm, linear prediction closed-phase covariance, the weighted linear prediction-attenuated main excitation (WLP-AME) algorithm [Alku, Pohjalainen, Vainio, Laukkanen, and Story (2013). J. Acoust. Soc. Am. 134(2), 1295-1313], spectra smoothed cepstrally and by averaging repeated discrete Fourier transforms. Formants were also measured manually from pruned reassigned spectrograms (RSs) [Fulop (2011). Speech Spectrum Analysis (Springer, Berlin)]. All but WLP-AME and RS had large errors in the direction of the strongest harmonic; the smallest errors occur with WLP-AME and RS. In the second experiment, these methods were used on vowels in isolated words spoken by four speakers. Results for the natural speech show that F0 bias affects all automatic methods, including WLP-AME; only the formants measured manually from RS appeared to be accurate. In addition, RS coped better with weaker formants and glottal fry. PMID:26936555
Error Correction for Foot Clearance in Real-Time Measurement
NASA Astrophysics Data System (ADS)
Wahab, Y.; Bakar, N. A.; Mazalan, M.
2014-04-01
Mobility performance level, fall-related injuries, undetected disease and aging stage can be detected through examination of the gait pattern. The gait pattern is normally directly related to the performance condition of the lower limb, in addition to other significant factors. For that reason, the foot is the most important part of an in-situ gait analysis measurement system, as it directly determines the gait pattern. This paper reviews the development of an ultrasonic system with error correction using an inertial measurement unit for real-life measurement of foot clearance in gait analysis. The paper begins with the related literature, where the necessity of the measurement is introduced, followed by the methodology and the problem and its solution. Next, the paper explains the experimental setup for the error correction using the proposed instrumentation, together with results and discussion. Finally, planned future work is outlined.
Errors in ellipsometry measurements made with a photoelastic modulator
Modine, F.A.; Jellison, G.E. Jr; Gruzalski, G.R.
1983-07-01
The equations governing ellipsometry measurements made with a photoelastic modulator are presented in a simple but general form. These equations are used to study the propagation of both systematic and random errors, and an assessment of the accuracy of the ellipsometer is made. A basis is provided for choosing among various ellipsometer configurations, measurement procedures, and methods of data analysis. Several new insights into the performance of this type of ellipsometer are supplied.
Effects of measurement errors on microwave antenna holography
NASA Technical Reports Server (NTRS)
Rochblatt, David J.; Rahmat-Samii, Yahya
1991-01-01
The effects of measurement errors appearing during the implementation of the microwave holographic technique are investigated in detail, and many representative results are presented based on computer simulations. The numerical results are tailored for cases applicable to the utilization of the holographic technique for the NASA's Deep Space Network antennas, although the methodology of analysis is applicable to any antenna. Many system measurement topics are presented and summarized.
Error reduction in gamma-spectrometric measurements of nuclear materials enrichment
NASA Astrophysics Data System (ADS)
Zaplatkina, D.; Semenov, A.; Tarasova, E.; Zakusilov, V.; Kuznetsov, M.
2016-06-01
The paper provides an analysis of the uncertainty in determining the enrichment of uranium samples using non-destructive methods, to support the functioning of the nuclear materials accounting and control system. The measurements were performed with a scintillation detector based on a sodium iodide crystal and with a semiconductor germanium detector. Samples containing uranium oxide of different masses were used for the measurements. Statistical analysis of the results showed that the maximum enrichment error of a scintillation detector measurement can reach 82%. The bias correction, calculated from the data obtained by the semiconductor detector, reduces the error in the determination of uranium enrichment by 47.2% on average. Thus, the use of a statistically calculated bias correction allows scintillation detectors to be used for nuclear materials accounting and control.
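The bias-correction step can be sketched as follows: paired measurements from the more accurate semiconductor (HPGe) detector estimate the scintillator's systematic offset, which is then subtracted from the scintillator readings. The enrichment values below are invented for illustration:

```python
# Hypothetical paired enrichment measurements (% U-235) on the same samples.
nai  = [4.9, 3.6, 2.1, 5.3, 1.4]   # scintillation (NaI) results
hpge = [4.4, 3.1, 1.9, 4.6, 1.2]   # semiconductor (HPGe) reference results

# Estimate the NaI detector's mean systematic bias from the paired differences.
bias = sum(n - h for n, h in zip(nai, hpge)) / len(nai)

# Corrected NaI readings: subtract the estimated bias.
corrected = [n - bias for n in nai]

print(round(bias, 2))                      # 0.42
print([round(c, 2) for c in corrected])    # [4.48, 3.18, 1.68, 4.88, 0.98]
```

A constant additive bias is the simplest possible model; a regression of the paired differences on enrichment level would be the natural refinement if the bias varies with enrichment.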
Estimation of coherent error sources from stabilizer measurements
NASA Astrophysics Data System (ADS)
Orsucci, Davide; Tiersch, Markus; Briegel, Hans J.
2016-04-01
In the context of measurement-based quantum computation a way of maintaining the coherence of a graph state is to measure its stabilizer operators. Aside from performing quantum error correction, it is possible to exploit the information gained from these measurements to characterize and then counteract a coherent source of errors; that is, to determine all the parameters of an error channel that applies a fixed—but unknown—unitary operation to the physical qubits. Such a channel is generated, e.g., by local stray fields that act on the qubits. We study the case in which each qubit of a given graph state may see a different error channel and we focus on channels given by a rotation on the Bloch sphere around either the x̂, ŷ, or ẑ axis, for which analytical results can be given in a compact form. The possibility of reconstructing the channels at all qubits depends nontrivially on the topology of the graph state. We prove via perturbation methods that the reconstruction process is robust and supplement the analytic results with numerical evidence.
NASA Astrophysics Data System (ADS)
Hou, Zhendong; Wang, Zhaokui; Zhang, Yulin
2016-09-01
To meet the very demanding requirements of space gravity detection, the gravitational reference sensor (GRS), as the key payload, needs to provide the relative position of the proof mass with extraordinarily high precision and low disturbance. The position determination and error analysis for a GRS with a spherical proof mass are addressed. First, the concept of measuring the freely falling proof mass with optical shadow sensors is presented. Then, based on the optical signal model, the general formula for position determination is derived. Two types of measurement system are proposed, for which an analytical solution for the three-dimensional position can be obtained. Third, under the assumption of Gaussian beams, error propagation models are given for the variation of spot size and optical power, the effect of beam divergence, the chattering of the beam center, and the deviation of the beam direction. Finally, numerical simulations taking into account the model uncertainty of beam divergence, the spherical edge and beam diffraction are carried out to validate the performance of the error propagation models. The results show that these models can be used to estimate the effect of each error source with an acceptable accuracy, better than 20%. Moreover, the simulation of three-dimensional position determination with one of the proposed measurement systems shows that the position error is comparable to the error of the output of each individual sensor.
Moroni, Rossana; Blomstedt, Paul; Wilhelm, Lars; Reinikainen, Tapani; Sippola, Erkki; Corander, Jukka
2010-10-10
Headspace gas chromatographic measurements of ethanol content in blood specimens from suspected drunk drivers are routinely carried out in forensic laboratories. In the widely established standard statistical framework, measurement errors in such data are represented by Gaussian distributions for the population of blood specimens at any given level of ethanol content. It is known that the variance of measurement errors increases as a function of the level of ethanol content, and the standard statistical approach addresses this issue by replacing the unknown population variances with estimates derived from a large sample using a linear regression model. Appropriate statistical analysis of the systematic and random components of the measurement errors is necessary in order to guarantee legally sound security corrections reported to the police authority. Here we address this issue by developing a novel statistical approach that takes into account any potential non-linearity in the relationship between the level of ethanol content and the variability of measurement errors. Our method is based on standard non-parametric kernel techniques for density estimation, using a large database of laboratory measurements for blood specimens. Furthermore, we also address the issue of systematic errors in the measurement process with a statistical model that incorporates the sign of the error term in the security correction calculations. Analysis of a set of certified reference material (CRM) blood samples demonstrates the importance of explicitly handling the direction of the systematic errors in establishing the statistical uncertainty about the true level of ethanol content. Use of our statistical framework to aid quality control in the laboratory is also discussed. PMID:20494532
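A sketch of the non-parametric idea: kernel-smooth squared measurement errors to estimate how the error spread varies with ethanol level, without assuming the standard linear variance model. The data are simulated and the variance function is made up; the paper works from a large database of real laboratory measurements:

```python
# Estimate a level-dependent measurement-error SD by Nadaraya-Watson kernel
# smoothing of squared errors.
import numpy as np

rng = np.random.default_rng(3)
level = rng.uniform(0.2, 3.0, 4000)      # "true" ethanol level (g/kg, invented)
sd_true = 0.02 + 0.01 * level**2         # nonlinear error SD (invented)
errors = rng.normal(0.0, sd_true)        # one measurement error per specimen

def kernel_sd(x0, h=0.2):
    """Kernel-weighted local SD of the errors around level x0 (bandwidth h)."""
    w = np.exp(-0.5 * ((level - x0) / h) ** 2)   # Gaussian kernel weights
    return np.sqrt(np.sum(w * errors**2) / np.sum(w))

for x0 in (0.5, 1.5, 2.5):
    print(f"level {x0}: estimated error SD {kernel_sd(x0):.3f}")
```

The estimated SD grows with the level, and no linear form is imposed; bandwidth choice plays the usual role of trading variance against bias.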
Surface measurement errors using commercial scanning white light interferometers
NASA Astrophysics Data System (ADS)
Gao, F.; Leach, R. K.; Petzing, J.; Coupland, J. M.
2008-01-01
This paper examines the performance of commercial scanning white light interferometers in a range of measurement tasks. A step height artefact is used to investigate the response of the instruments at a discontinuity, while gratings with sinusoidal and rectangular profiles are used to investigate the effects of surface gradient and spatial frequency. Results are compared with measurements made with tapping mode atomic force microscopy and discrepancies are discussed with reference to error mechanisms put forward in the published literature. As expected, it is found that most instruments report errors when used in regions close to a discontinuity or those with a surface gradient that is large compared to the acceptance angle of the objective lens. Amongst other findings, however, we report systematic errors that are observed when the surface gradient is considerably smaller. Although these errors are typically less than the mean wavelength, they are significant compared to the vertical resolution of the instrument and indicate that current scanning white light interferometers should be used with some caution if sub-wavelength accuracy is required.
Error and uncertainty in Raman thermal conductivity measurements
Thomas Edwin Beechem; Yates, Luke; Graham, Samuel
2015-04-22
We investigated error and uncertainty in Raman thermal conductivity measurements via finite element based numerical simulation of two geometries often employed -- Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter -- termed the Raman stress factor -- is derived to identify when stress effects will induce large levels of error. Together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.
Putting reward in art: A tentative prediction error account of visual art
Van de Cruys, Sander; Wagemans, Johan
2011-01-01
The predictive coding model is increasingly and fruitfully used to explain a wide range of findings in perception. Here we discuss the potential of this model in explaining the mechanisms underlying aesthetic experiences. Traditionally art appreciation has been associated with concepts such as harmony, perceptual fluency, and the so-called good Gestalt. We observe that more often than not great artworks blatantly violate these characteristics. Using the concept of prediction error from the predictive coding approach, we attempt to resolve this contradiction. We argue that artists often destroy predictions that they have first carefully built up in their viewers, and thus highlight the importance of negative affect in aesthetic experience. However, the viewer often succeeds in recovering the predictable pattern, sometimes on a different level. The ensuing rewarding effect is derived from this transition from a state of uncertainty to a state of increased predictability. We illustrate our account with several example paintings and with a discussion of art movements and individual differences in preference. On a more fundamental level, our theorizing leads us to consider the affective implications of prediction confirmation and violation. We compare our proposal to other influential theories on aesthetics and explore its advantages and limitations. PMID:23145260
Systematic errors in precipitation measurements with different rain gauge sensors
NASA Astrophysics Data System (ADS)
Sungmin, O.; Foelsche, Ulrich
2015-04-01
Ground-level rain gauges provide the most direct measurement of precipitation, and such precipitation measurement datasets are therefore often utilized for the evaluation of precipitation estimates via remote sensing and in climate model simulations. However, precipitation measured by national standard gauge networks is constrained by their spatial density. For this reason, in order to measure precipitation accurately it is essential to understand the performance and reliability of rain gauges. This study aims to assess the systematic errors between measurements taken with different rain gauge sensors. We will mainly address extreme precipitation events, as these are connected with high uncertainties in the measurements. Precipitation datasets for the study are available from WegenerNet, a dense network of 151 meteorological stations within an area of about 20 km × 15 km centred near the city of Feldbach in the southeast of Austria. The WegenerNet has a horizontal resolution of about 1.4 km and employs 'tipping bucket' rain gauges for precipitation measurements with three different types of sensors; a reference station provides measurements from all types of sensors. The results will illustrate systematic errors via the comparison of the precipitation datasets gained with the different types of sensors. The analyses will be carried out by direct comparison between the datasets from the reference station. In addition, the dependence of the systematic errors on meteorological conditions, e.g. precipitation intensity and wind speed, will be investigated to assess the feasibility of applying the WegenerNet datasets to the study of extreme precipitation events. The study can be regarded as pre-processing research for further studies in hydro-meteorological applications that require high-resolution precipitation datasets, such as satellite/radar-derived precipitation validation and hydrodynamic modelling.
Minimax Mean-Squared Error Location Estimation Using TOA Measurements
NASA Astrophysics Data System (ADS)
Shen, Chih-Chang; Chang, Ann-Chen
This letter deals with mobile location estimation based on a minimax mean-squared error (MSE) algorithm using time-of-arrival (TOA) measurements for mitigating non-line-of-sight (NLOS) effects in cellular systems. Simulation results illustrate that the minimax MSE estimator yields better performance than the least squares and weighted least squares estimators under relatively low signal-to-noise ratio and moderately NLOS conditions.
Detecting correlated errors in state-preparation-and-measurement tomography
NASA Astrophysics Data System (ADS)
Jackson, Christopher; van Enk, S. J.
2015-10-01
Whereas in standard quantum-state tomography one estimates an unknown state by performing various measurements with known devices, and whereas in detector tomography one estimates the positive-operator-valued-measurement elements of a measurement device by subjecting to it various known states, we consider here the case of SPAM (state preparation and measurement) tomography, where neither the states nor the measurement device are assumed known. For d-dimensional systems measured by d-outcome detectors, we find there are at most d²(d²−1) "gauge" parameters that can never be determined by any such experiment, irrespective of the number of unknown states and unknown devices. For the case d = 2 we find gauge-invariant quantities that can be accessed directly experimentally and that can be used to detect and describe SPAM errors. In particular, we identify conditions whose violations detect the presence of correlations between SPAM errors. From the perspective of SPAM tomography, standard quantum-state tomography and detector tomography are protocols that fix the gauge parameters through the assumption that some set of fiducial measurements is known or that some set of fiducial states is known, respectively.
PROCESSING AND ANALYSIS OF THE MEASURED ALIGNMENT ERRORS FOR RHIC.
PILAT,F.; HEMMER,M.; PTITSIN,V.; TEPIKIAN,S.; TRBOJEVIC,D.
1999-03-29
All elements of the Relativistic Heavy Ion Collider (RHIC) have been installed in ideal survey locations, which are defined as the optimum locations of the fiducials with respect to the positions generated by the design. The alignment process included a presurvey of all elements which could affect the beams. During this procedure, special attention was paid to the precise determination of the quadrupole centers as well as the roll angles of the quadrupoles and dipoles. After installation the machine was surveyed, and the resulting as-built positions of the fiducials were stored and structured in the survey database. We describe how the alignment errors, inferred by comparison of ideal and as-built data, have been processed and analyzed by including them in the RHIC modeling software. The RHIC model, which also includes individual measured errors for all magnets in the machine and is automatically generated from databases, allows the study of the impact of the measured alignment errors on the machine.
ERIC Educational Resources Information Center
Battauz, Michela; Bellio, Ruggero
2011-01-01
This paper proposes a structural analysis for generalized linear models when some explanatory variables are measured with error and the measurement error variance is a function of the true variables. The focus is on latent variables investigated on the basis of questionnaires and estimated using item response theory models. Latent variable…
ERIC Educational Resources Information Center
Steinhauser, Marco; Maier, Martin; Hubner, Ronald
2008-01-01
The present study investigated the mechanisms underlying error detection in the error signaling response. The authors tested between a response monitoring account and a conflict monitoring account. By implementing each account within the neural network model of N. Yeung, M. M. Botvinick, and J. D. Cohen (2004), they demonstrated that both accounts…
Lyles, Robert H; Van Domelen, Dane; Mitchell, Emily M; Schisterman, Enrique F
2015-11-01
Pooling biological specimens prior to performing expensive laboratory assays has been shown to be a cost-effective approach for estimating parameters of interest. In addition to requiring specialized statistical techniques, however, the pooling of samples can introduce assay errors due to processing, possibly in addition to measurement error that may be present when the assay is applied to individual samples. Failure to account for these sources of error can result in biased parameter estimates and, ultimately, faulty inference. Prior research addressing biomarker mean and variance estimation advocates hybrid designs consisting of individual as well as pooled samples to account for measurement and processing (or pooling) error. We consider adapting this approach to the problem of estimating a covariate-adjusted odds ratio (OR) relating a binary outcome to a continuous exposure or biomarker level assessed in pools. In particular, we explore the applicability of a discriminant function-based analysis that assumes normal residual, processing, and measurement errors. A potential advantage of this method is that maximum likelihood estimation of the desired adjusted log OR is straightforward and computationally convenient. Moreover, in the absence of measurement and processing error, the method yields an efficient unbiased estimator for the parameter of interest assuming normal residual errors. We illustrate the approach using real data from an ancillary study of the Collaborative Perinatal Project, and we use simulations to demonstrate the ability of the proposed estimators to alleviate bias due to measurement and processing error. PMID:26593934
Advancing the science of measurement of diagnostic errors in healthcare: the Safer Dx framework
Singh, Hardeep; Sittig, Dean F
2015-01-01
Diagnostic errors are major contributors to harmful patient outcomes, yet they remain a relatively understudied and unmeasured area of patient safety. Although they are estimated to affect about 12 million Americans each year in ambulatory care settings alone, both the conceptual and pragmatic scientific foundation for their measurement is under-developed. Health care organizations do not have the tools and strategies to measure diagnostic safety and most have not integrated diagnostic error into their existing patient safety programs. Further progress toward reducing diagnostic errors will hinge on our ability to overcome measurement-related challenges. In order to lay a robust groundwork for measurement and monitoring techniques to ensure diagnostic safety, we recently developed a multifaceted framework to advance the science of measuring diagnostic errors (The Safer Dx framework). In this paper, we describe how the framework serves as a conceptual foundation for system-wide safety measurement, monitoring and improvement of diagnostic error. The framework accounts for the complex adaptive sociotechnical system in which diagnosis takes place (the structure), the distributed process dimensions in which diagnoses evolve beyond the doctor's visit (the process) and the outcomes of a correct and timely “safe diagnosis” as well as patient and health care outcomes (the outcomes). We posit that the Safer Dx framework can be used by a variety of stakeholders including researchers, clinicians, health care organizations and policymakers, to stimulate both retrospective and more proactive measurement of diagnostic errors. The feedback and learning that would result will help develop subsequent interventions that lead to safer diagnosis, improved value of health care delivery and improved patient outcomes. PMID:25589094
Uncertainty in measurement and total error - are they so incompatible?
Farrance, Ian; Badrick, Tony; Sikaris, Kenneth A
2016-08-01
There appears to be a growing debate with regard to the use of "Westgard style" total error and "GUM style" uncertainty in measurement. Some may argue that the two approaches are irreconcilable. The recent appearance of an article "Quality goals at the crossroads: growing, going, or gone" on the well-regarded Westgard Internet site requires some comment. In particular, a number of assertions which relate to ISO 15189 and uncertainty in measurement appear misleading. An alternate view of the key issues raised by Westgard may serve to guide and enlighten others who may accept such statements at face value. PMID:27227711
Considering Measurement Model Parameter Errors in Static and Dynamic Systems
NASA Astrophysics Data System (ADS)
Woodbury, Drew P.; Majji, Manoranjan; Junkins, John L.
2011-07-01
In static systems, state values are estimated using traditional least squares techniques based on a redundant set of measurements. Inaccuracies in measurement model parameter estimates can lead to significant errors in the state estimates. This paper describes a technique that considers these parameters in a modified least squares framework. It is also shown that this framework leads to the minimum variance solution. Both batch and sequential (recursive) least squares methods are described. One static system and one dynamic system are used as examples to show the benefits of the consider least squares methodology.
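A minimal numeric sketch of the "consider" idea under the usual linear model y = Hx·x + Hc·c + v, where the uncertain model parameter c is deliberately not estimated but its a-priori covariance is carried into the state covariance. All matrices below are toy values, not taken from the paper.

```python
import numpy as np

# Consider least squares: estimate the state x, carry (do not estimate)
# the model parameter c. Toy 20-measurement, 2-state, 1-parameter setup.
rng = np.random.default_rng(0)
Hx = rng.standard_normal((20, 2))   # state measurement partials
Hc = rng.standard_normal((20, 1))   # consider-parameter partials
R = 0.1 * np.eye(20)                # measurement noise covariance
Pcc = np.array([[0.5]])             # a-priori covariance of c

W = np.linalg.inv(R)
N = np.linalg.inv(Hx.T @ W @ Hx)    # naive LS covariance (c ignored)
S = N @ Hx.T @ W @ Hc               # sensitivity of the estimate to c
P_consider = N + S @ Pcc @ S.T      # consider covariance (inflated)

# Ignoring the consider parameter understates the state uncertainty:
print(np.diag(N), np.diag(P_consider))
```

The added term S·Pcc·Sᵀ is positive semidefinite, so the consider covariance is never optimistic relative to the naive result, which is the point of the methodology.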
50 CFR 648.293 - Tilefish accountability measures.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Tilefish accountability measures. 648.293 Section 648.293 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND... Measures for the Tilefish Fishery § 648.293 Tilefish accountability measures. (a) If the ACL is...
50 CFR 648.293 - Tilefish accountability measures.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Tilefish accountability measures. 648.293 Section 648.293 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND... Measures for the Tilefish Fishery § 648.293 Tilefish accountability measures. (a) If the ACL is...
50 CFR 648.143 - Black sea bass Accountability Measures.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Black sea bass Accountability Measures... Management Measures for the Black Sea Bass Fishery § 648.143 Black sea bass Accountability Measures. (a... based on dealer reports, state data, and other available information. All black sea bass landed for...
50 CFR 648.143 - Black sea bass Accountability Measures.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Black sea bass Accountability Measures... Management Measures for the Black Sea Bass Fishery § 648.143 Black sea bass Accountability Measures. (a... based on dealer reports, state data, and other available information. All black sea bass landed for...
50 CFR 648.143 - Black sea bass Accountability Measures.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Black sea bass Accountability Measures... Management Measures for the Black Sea Bass Fishery § 648.143 Black sea bass Accountability Measures. (a... based on dealer reports, state data, and other available information. All black sea bass landed for...
50 CFR 648.233 - Spiny dogfish Accountability Measures (AMs).
Code of Federal Regulations, 2011 CFR
2011-10-01
... 50 Wildlife and Fisheries 10 2011-10-01 2011-10-01 false Spiny dogfish Accountability Measures... Management Measures for the Spiny Dogfish Fishery § 648.233 Spiny dogfish Accountability Measures (AMs). (a... dogfish on that date for the remainder of that semi-annual period by publishing notification in...
50 CFR 648.233 - Spiny dogfish Accountability Measures (AMs).
Code of Federal Regulations, 2014 CFR
2014-10-01
... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Spiny dogfish Accountability Measures... Management Measures for the Spiny Dogfish Fishery § 648.233 Spiny dogfish Accountability Measures (AMs). (a... quota described in § 648.232 will be harvested and shall close the EEZ to fishing for spiny dogfish...
50 CFR 648.233 - Spiny dogfish Accountability Measures (AMs).
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Spiny dogfish Accountability Measures... Management Measures for the Spiny Dogfish Fishery § 648.233 Spiny dogfish Accountability Measures (AMs). (a... dogfish on that date for the remainder of that semi-annual period by publishing notification in...
50 CFR 648.123 - Scup accountability measures.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Scup accountability measures. 648.123 Section 648.123 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND... Measures for the Scup Fishery § 648.123 Scup accountability measures. (a) Commercial sector period...
50 CFR 648.123 - Scup accountability measures.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Scup accountability measures. 648.123 Section 648.123 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND... Measures for the Scup Fishery § 648.123 Scup accountability measures. (a) Commercial sector period...
50 CFR 648.123 - Scup accountability measures.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Scup accountability measures. 648.123 Section 648.123 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND... Measures for the Scup Fishery § 648.123 Scup accountability measures. (a) Commercial sector period...
Error reduction techniques for measuring long synchrotron mirrors
Irick, S.
1998-07-01
Many instruments and techniques are used for measuring long mirror surfaces. A Fizeau interferometer may be used to measure mirrors much longer than the interferometer aperture size by using grazing incidence at the mirror surface and analyzing the light reflected from a flat end mirror. Advantages of this technique are data acquisition speed and use of a common instrument. Disadvantages are reduced sampling interval, uncertainty of tangential position, and sagittal/tangential aspect ratio other than unity. Also, deep aspheric surfaces cannot be measured on a Fizeau interferometer without a specially made fringe nulling holographic plate. Other scanning instruments have been developed for measuring height, slope, or curvature profiles of the surface, but lack accuracy for very long scans required for X-ray synchrotron mirrors. The Long Trace Profiler (LTP) was developed specifically for long x-ray mirror measurement, and still outperforms other instruments, especially for aspheres. Thus, this paper focuses on error reduction techniques for the LTP.
Factors Affecting Blood Glucose Monitoring: Sources of Errors in Measurement
Ginsberg, Barry H.
2009-01-01
Glucose monitoring has become an integral part of diabetes care but has some limitations in accuracy. Accuracy may be limited by strip manufacturing variances, strip storage, and aging. It may also be limited by environmental factors such as temperature or altitude, or by patient factors such as improper coding, incorrect hand washing, altered hematocrit, or naturally occurring interfering substances. Finally, exogenous interfering substances may contribute errors to the system evaluation of blood glucose. In this review, I discuss the measurement of error in blood glucose, the sources of error, their mechanisms, and potential solutions to improve accuracy in the hands of the patient. I also discuss the clinical measurement of system accuracy, methods of judging the suitability of clinical trials, and finally some methods of overcoming the inaccuracies. I have included comments about additional information or education that could be provided today by manufacturers in the appropriate sections. Areas that require additional work are discussed in the final section. PMID:20144340
Error analysis and modeling for the time grating length measurement system
NASA Astrophysics Data System (ADS)
Gao, Zhonghua; Fen, Jiqin; Zheng, Fangyan; Chen, Ziran; Peng, Donglin; Liu, Xiaokang
2013-10-01
By analyzing the errors of a length measurement system whose principal measuring component is a linear time grating, we found that studying the error law is very important for reducing system errors and optimizing the system structure. The main error sources in the length measurement system, including the time grating sensor, the slideway, and the cantilever, were studied, and the total errors were obtained. We then established a mathematical model of the errors of the length measurement system and used it to calibrate the system errors. We also developed a set of experimental devices in which a laser interferometer was used to calibrate the length measurement system errors. After error calibration, the accuracy of the measurement system was improved from the original 36 µm/m to 14 µm/m. The consistency between the experimental and simulation results shows that the mathematical error model is suitable for the length measurement system.
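The paper's actual error model is not given in the abstract, so the sketch below only illustrates the generic calibration step it describes: fit a smooth empirical model to the deviation between the system and a laser interferometer, then subtract it from subsequent readings. All numbers are invented.

```python
import numpy as np

# Synthetic calibration run: the length system shows a smooth systematic
# deviation from the interferometer "truth" along the slideway.
position = np.linspace(0.0, 1000.0, 11)                   # mm
system = position + 0.02 * np.sin(position / 200.0) + 0.01  # system reading
laser = position                                           # interferometer

error = system - laser
s = position / position.max()            # scale x to [0, 1] for conditioning
coeffs = np.polyfit(s, error, deg=5)     # empirical error model
corrected = system - np.polyval(coeffs, s)

raw_max = np.max(np.abs(error))
res_max = np.max(np.abs(corrected - laser))
print(raw_max, res_max)  # residual error shrinks after correction
```

In practice the model would be fit on a dense calibration grid and validated on independent interferometer runs, mirroring the paper's comparison of experimental and simulated errors.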
50 CFR 622.49 - Accountability measures.
Code of Federal Regulations, 2011 CFR
2011-10-01
... ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE CARIBBEAN, GULF, AND SOUTH ATLANTIC Management Measures.... (5) Black sea bass—(i) Commercial fishery. If commercial landings, as estimated by the SRD, reach or... the recreational ACL of 409,000 lb (185,519 kg), gutted weight, and black sea bass are...
Improving optical bench radius measurements using stage error motion data
Schmitz, Tony L.; Gardner, Neil; Vaughn, Matthew; Medicus, Kate; Davies, Angela
2008-12-20
We describe the application of a vector-based radius approach to optical bench radius measurements in the presence of imperfect stage motions. In this approach, the radius is defined using a vector equation and homogeneous transformation matrix formalism. This is in contrast to the typical technique, where the displacement between the confocal and cat's eye null positions alone is used to determine the test optic radius. An important aspect of the vector-based radius definition is the intrinsic correction for measurement biases, such as straightness errors in the stage motion and cosine misalignment between the stage and displacement gauge axis, which lead to an artificially small radius value if the traditional approach is employed. Measurement techniques and results are provided for the stage error motions, which are then combined with the setup geometry through the analysis to determine the radius of curvature for a spherical artifact. Comparisons are shown between the new vector-based radius calculation, traditional radius computation, and a low uncertainty mechanical measurement. Additionally, the measurement uncertainty for the vector-based approach is determined using Monte Carlo simulation and compared to experimental results.
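A toy version of the geometric point, with invented numbers: the axial displacement reading alone understates the radius when the stage moves laterally between the two null positions, whereas a vector definition folds the measured straightness offsets into the result.

```python
import numpy as np

# Illustrative values only: axial gauge reading between the cat's-eye and
# confocal nulls, plus lateral stage straightness offsets between them.
d = 199.98          # displacement-gauge reading between nulls (mm)
dx, dy = 0.8, 0.5   # measured lateral stage offsets between nulls (mm)

r_traditional = d                         # axial reading alone (biased low)
r_vector = np.sqrt(d**2 + dx**2 + dy**2)  # length of the full vector
print(r_traditional, round(r_vector, 4))
```

The full paper builds the vector from homogeneous transformation matrices so that angular stage error motions enter as well; this two-offset version only shows why the traditional value comes out artificially small.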
Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements
NASA Technical Reports Server (NTRS)
Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.
2014-01-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde-ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI-Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are greater than ±0.6 hPa in the free troposphere, with nearly a third greater than ±1.0 hPa at 26 km, where the 1.0 hPa error represents 5 percent of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96 percent of launches lie within ±5 percent O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (about 30 km) can approach greater than ±10 percent (more than 25 percent of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when
Propagation of radiosonde pressure sensor errors to ozonesonde measurements
NASA Astrophysics Data System (ADS)
Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.
2014-01-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.6 hPa in the free troposphere, with nearly a third > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~ 5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~ 30 km), can approach greater than ±10% (> 25% of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when the O3 profile is
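The scaling claimed above (a roughly fixed 1 hPa offset matters near 26 km but is negligible lower down) follows directly from the mixing ratio being the measured O3 partial pressure divided by total pressure, so the fractional O3MR error scales like -dP/P. A two-line check, with pressures chosen to match the abstract's examples:

```python
# Fractional ozone-mixing-ratio error from a radiosonde pressure offset:
# O3MR = pO3 / P, so a reported pressure P_true + dp gives an error of
# P_true/(P_true + dp) - 1 ≈ -dp/P.
def o3mr_fractional_error(p_true_hpa, dp_hpa):
    """Fractional mixing-ratio error caused by a pressure offset dp."""
    return p_true_hpa / (p_true_hpa + dp_hpa) - 1.0

# 1.0 hPa offset at ~26 km (~20 hPa total pressure): ~5% O3MR error.
print(abs(o3mr_fractional_error(20.0, 1.0)))
# The same offset in the mid-troposphere (~500 hPa): ~0.2%, negligible.
print(abs(o3mr_fractional_error(500.0, 1.0)))
```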
Taking the Error Term of the Factor Model into Account: The Factor Score Predictor Interval
ERIC Educational Resources Information Center
Beauducel, Andre
2013-01-01
The problem of factor score indeterminacy implies that the factor and the error scores cannot be completely disentangled in the factor model. It is therefore proposed to compute Harman's factor score predictor that contains an additive combination of factor and error variance. This additive combination is discussed in the framework of classical…
Data Reconciliation and Gross Error Detection: A Filtered Measurement Test
Himour, Y.
2008-06-12
Measured process data commonly contain inaccuracies because the measurements are obtained using imperfect instruments. In addition to random errors, one can expect systematic bias caused by miscalibrated instruments, as well as outliers caused by process peaks such as sudden power fluctuations. Data reconciliation is the adjustment of a set of process data, based on a model of the process, so that the derived estimates conform to natural laws. In this paper, we explore a predictor-corrector filter based on data reconciliation; a modified version of the measurement test is then combined with the studied filter to detect probable outliers that can affect process measurements. The strategy presented is tested using a dynamic simulation of an inverted pendulum.
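A compact sketch of the two ingredients named above, linear data reconciliation and a gross-error "measurement test", on an invented two-node flowsheet (not the paper's inverted-pendulum example): measurements are projected onto the constraint space, and each standardized adjustment is inspected for an outlier.

```python
import numpy as np

# Invented flowsheet: node balances f1 = f2 + f3 and f3 = f4 + f5.
# The f3 measurement carries a +5 gross error.
A = np.array([[1.0, -1.0, -1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, -1.0, -1.0]])
V = np.eye(5)                                  # measurement error covariance
x = np.array([100.0, 60.0, 45.0, 25.0, 15.0])  # raw measurements

r = A @ x                                      # constraint residuals
G = A @ V @ A.T
xhat = x - V @ A.T @ np.linalg.solve(G, r)     # reconciled estimates

# Measurement test: standardized adjustment per measurement; the largest
# value points at the suspect instrument (here, f3).
W = V @ A.T @ np.linalg.inv(G) @ A @ V
z = np.abs(x - xhat) / np.sqrt(np.diag(W))
print(np.round(xhat, 2), np.round(z, 2))
```

The reconciled values satisfy both balances exactly, and the largest test statistic correctly singles out the biased stream.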
Analysis of Spherical Form Errors to Coordinate Measuring Machine Data
NASA Astrophysics Data System (ADS)
Chen, Mu-Chen
Coordinate measuring machines (CMMs) are commonly utilized to take measurement data from manufactured surfaces for inspection purposes. The measurement data are then used to evaluate the geometric form errors associated with the surface. Traditionally, the evaluation of spherical form errors involves an optimization process of fitting a substitute sphere to the sampled points. This paper proposes computational strategies for sphericity with respect to the ASME Y14.5M-1994 standard. The proposed methods consider the trade-off between the accuracy of sphericity and the efficiency of inspection. Two computational metrology approaches based on genetic algorithms (GAs) are proposed to explore the optimality of sphericity measurements and the sphericity feasibility analysis, respectively. The proposed algorithms are verified using several CMM data sets. The computational results show that the proposed algorithms are practical for on-line implementation of sphericity evaluation. Using the GA-based computational techniques, satisfactory accuracy of sphericity assessment and efficiency of sphericity feasibility analysis are achieved.
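The GA machinery itself is beyond a short sketch, but the underlying fitting problem can be illustrated with the standard algebraic least-squares sphere fit, a common starting point before minimum-zone optimization, followed by a simple sphericity value (the spread of radial deviations about the fitted center). The CMM points below are synthetic.

```python
import numpy as np

# Synthetic CMM points: a 25 mm sphere with ±0.01 mm radial form error.
rng = np.random.default_rng(1)
n = 200
dirs = rng.standard_normal((n, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
true_c, true_r = np.array([1.0, -2.0, 0.5]), 25.0
pts = true_c + (true_r + rng.uniform(-0.01, 0.01, n))[:, None] * dirs

# Algebraic (linear) sphere fit: |p|^2 = 2 p.c + (r^2 - |c|^2),
# solved as a linear least-squares problem in (c, t).
A = np.hstack([2.0 * pts, np.ones((n, 1))])
b = np.sum(pts**2, axis=1)
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
center = sol[:3]
radius = np.sqrt(sol[3] + center @ center)

radial = np.linalg.norm(pts - center, axis=1)
sphericity = radial.max() - radial.min()  # compare to the form tolerance
print(np.round(center, 3), round(radius, 3), round(sphericity, 4))
```

A minimum-zone (ASME-style) evaluation would then search over candidate centers, e.g. with a GA as in the paper, to shrink this max-minus-min band further.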
Performance-Based Measurement: Action for Organizations and HPT Accountability
ERIC Educational Resources Information Center
Larbi-Apau, Josephine A.; Moseley, James L.
2010-01-01
Basic measurements and applications of six selected general but critical operational performance-based indicators--effectiveness, efficiency, productivity, profitability, return on investment, and benefit-cost ratio--are presented. With each measurement, goals and potential impact are explored. Errors, risks, limitations to measurements, and a…
Patient motion tracking in the presence of measurement errors.
Haidegger, Tamás; Benyó, Zoltán; Kazanzides, Peter
2009-01-01
The primary aim of computer-integrated surgical systems is to provide physicians with superior surgical tools for better patient outcomes. Robotic technology is capable of both minimally invasive surgery and microsurgery, offering remarkable advantages for the surgeon and the patient. Current systems allow for sub-millimeter intraoperative spatial positioning; however, certain limitations remain. Measurement noise and unintended changes in the operating room environment can result in major errors, and positioning errors are a significant danger to patients in procedures involving robots and other automated devices. We have developed a new robotic system at the Johns Hopkins University to support cranial drilling in neurosurgery procedures. The robot provides advanced visualization and safety features. The generic algorithm described in this paper allows for automated compensation of patient motion through optical tracking and Kalman filtering. When applied to the neurosurgery setup, preliminary results show that it is possible to identify patient motion within 700 ms and apply the appropriate compensation with an average of 1.24 mm positioning error after 2 s of setup time. PMID:19964394
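A minimal constant-velocity Kalman filter of the general kind used to smooth optical-tracker readings before motion compensation. The 20 Hz rate, noise levels, and the drift model are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

dt = 0.05                                  # assumed 20 Hz tracker updates
F = np.array([[1.0, dt], [0.0, 1.0]])      # state: [position, velocity]
H = np.array([[1.0, 0.0]])                 # tracker measures position only
Q = np.diag([1e-4, 1e-3])                  # process noise (assumed)
R = np.array([[0.25]])                     # tracker noise variance (mm^2)

rng = np.random.default_rng(2)
t = np.arange(0.0, 10.0, dt)
truth = 2.0 * t                            # patient drifts at 2 mm/s
meas = truth + rng.normal(0.0, 0.5, t.size)

x = np.array([meas[0], 0.0])               # initialize at first reading
P = np.diag([0.25, 4.0])
est = []
for z in meas:
    x = F @ x                              # predict
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # update
    x = x + K[:, 0] * (z - (H @ x)[0])
    P = (np.eye(2) - K @ H) @ P
    est.append(x[0])

err_raw = np.abs(meas - truth).mean()
err_kf = np.abs(np.array(est) - truth).mean()
print(round(err_raw, 3), round(err_kf, 3))
```

The filtered position is what a compensation scheme would feed to the robot; the filter's residual error is markedly below the raw tracker noise once the velocity estimate converges.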
Accounting for systematic errors in bioluminescence imaging to improve quantitative accuracy
NASA Astrophysics Data System (ADS)
Taylor, Shelley L.; Perry, Tracey A.; Styles, Iain B.; Cobbold, Mark; Dehghani, Hamid
2015-07-01
Bioluminescence imaging (BLI) is a widely used pre-clinical imaging technique, but there are a number of limitations to its quantitative accuracy. This work uses an animal model to demonstrate some significant limitations of BLI and presents processing methods and algorithms which overcome these limitations, increasing the quantitative accuracy of the technique. The position of the imaging subject and source depth are both shown to affect the measured luminescence intensity. Free Space Modelling is used to eliminate the systematic error due to the camera/subject geometry, removing the dependence of luminescence intensity on animal position. Bioluminescence tomography (BLT) is then used to provide additional information about the depth and intensity of the source. A substantial limitation in the number of sources identified using BLI is also presented. It is shown that when a given source is at a significant depth, it can appear as multiple sources when imaged using BLI, while the use of BLT recovers the true number of sources present.
Lidar Uncertainty Measurement Experiment (LUMEX) - Understanding Sampling Errors
NASA Astrophysics Data System (ADS)
Choukulkar, A.; Brewer, W. A.; Banta, R. M.; Hardesty, M.; Pichugina, Y.; Senff, Christoph; Sandberg, S.; Weickmann, A.; Carroll, B.; Delgado, R.; Muschinski, A.
2016-06-01
Coherent Doppler LIDAR (Light Detection and Ranging) has been widely used to provide measurements of several boundary layer parameters such as profiles of wind speed, wind direction, vertical velocity statistics, mixing layer heights and turbulent kinetic energy (TKE). An important aspect of providing this wide range of meteorological data is to properly characterize the uncertainty associated with these measurements. With the above intent in mind, the Lidar Uncertainty Measurement Experiment (LUMEX) was conducted at Erie, Colorado during the period June 23rd to July 13th, 2014. The major goals of this experiment were the following:
This experiment brought together 5 Doppler lidars, both commercial and research grade, for a period of three weeks for a comprehensive intercomparison study. The Doppler lidars were deployed at the Boulder Atmospheric Observatory (BAO) site in Erie, site of a 300 m meteorological tower. This tower was instrumented with six sonic anemometers at levels from 50 m to 300 m with 50 m vertical spacing. A brief overview of the experiment outline and deployment will be presented. Results from the sampling error analysis and its implications on scanning strategy will be discussed.
Propagation of radiosonde pressure sensor errors to ozonesonde measurements
NASA Astrophysics Data System (ADS)
Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.
2013-08-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this, a total of 501 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2006-2013 from both historical and campaign-based intensive stations. Three types of electrochemical concentration cell (ECC) ozonesonde manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) and five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80 and RS92) are analyzed to determine the magnitude of the pressure offset and the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.7 hPa in the free troposphere, with nearly a quarter > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (98% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors in the 7-15 hPa layer (29-32 km), a region critical for detection of long-term O3 trends, can approach greater than ±10% (>25% of launches that reach 30 km exceed this threshold). Comparisons of total column O3 yield average differences of +1.6 DU (-1.1 to +4.9 DU 10th to 90th percentiles) when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of +0.1 DU (-1.1 to +2.2 DU) when the O3 profile is integrated to 10 hPa with subsequent addition of the O3 climatology above 10 hPa. The RS92 radiosondes are clearly distinguishable
50 CFR 660.509 - Accountability measures (season closures).
Code of Federal Regulations, 2014 CFR
2014-10-01
... 50 Wildlife and Fisheries 13 2014-10-01 2014-10-01 false Accountability measures (season closures... Coastal Pelagics Fisheries § 660.509 Accountability measures (season closures). (a) General rule. When the... until the beginning of the next fishing period or season. Regional Administrator shall announce in...
50 CFR 660.509 - Accountability measures (season closures).
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 13 2013-10-01 2013-10-01 false Accountability measures (season closures... Coastal Pelagics Fisheries § 660.509 Accountability measures (season closures). (a) General rule. When the... until the beginning of the next fishing period or season. Regional Administrator shall announce in...
50 CFR 660.509 - Accountability measures (season closures).
Code of Federal Regulations, 2012 CFR
2012-10-01
... 50 Wildlife and Fisheries 13 2012-10-01 2012-10-01 false Accountability measures (season closures... Coastal Pelagics Fisheries § 660.509 Accountability measures (season closures). (a) General rule. When the... until the beginning of the next fishing period or season. Regional Administrator shall announce in...
A Bayesian Measurement Error Model for Misaligned Radiographic Data
Lennox, Kristin P.; Glascoe, Lee G.
2013-09-06
An understanding of the inherent variability in micro-computed tomography (micro-CT) data is essential to tasks such as statistical process control and the validation of radiographic simulation tools. The data present unique challenges to variability analysis due to the relatively low resolution of radiographs, and also due to minor variations from run to run which can result in misalignment or magnification changes between repeated measurements of a sample. Positioning changes artificially inflate the variability of the data in ways that mask true physical phenomena. We present a novel Bayesian nonparametric regression model that incorporates both additive and multiplicative measurement error in addition to heteroscedasticity to address this problem. We also use this model to assess the effects of sample thickness and sample position on measurement variability for an aluminum specimen. Supplementary materials for this article are available online.
Measurements of Aperture Averaging on Bit-Error-Rate
NASA Technical Reports Server (NTRS)
Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert
2005-01-01
We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.
Disrupted prediction-error signal in psychosis: evidence for an associative account of delusions
Corlett, P. R.; Murray, G. K.; Honey, G. D.; Aitken, M. R. F.; Shanks, D. R.; Robbins, T.W.; Bullmore, E.T.; Dickinson, A.; Fletcher, P. C.
2012-01-01
Delusions are maladaptive beliefs about the world. Based upon experimental evidence that prediction error—a mismatch between expectancy and outcome—drives belief formation, this study examined the possibility that delusions form because of disrupted prediction-error processing. We used fMRI to determine prediction-error-related brain responses in 12 healthy subjects and 12 individuals (7 males) with delusional beliefs. Frontal cortex responses in the patient group were suggestive of disrupted prediction-error processing. Furthermore, across subjects, the extent of disruption was significantly related to an individual’s propensity to delusion formation. Our results support a neurobiological theory of delusion formation that implicates aberrant prediction-error signalling, disrupted attentional allocation and associative learning in the formation of delusional beliefs. PMID:17690132
NASA Astrophysics Data System (ADS)
Song, Qing; Zhang, Chunsong; Huang, Jiayong; Wu, Di; Liu, Jing
2009-11-01
The error sources of the external diameter measurement system based on the double optical path parallel light projection method are the non-parallelism of the double optical path, aberration distortion of the projection lens, the edge of the projection profile of the cylinder (which is affected by the aperture size of the illuminating beam), light intensity variation, and the counting error in the circuit. A screw pair drive is used to achieve the up-and-down movement in the system. The precision of the up-and-down movement depends mainly on the Abbe error caused by the offset between the centerline and the travel line of the capacitive-gate ruler, the tilt error of the guide mechanism, and the error caused by thermal expansion of parts due to temperature change. The rotary mechanism is driven by a stepper motor and gear transmission. Its precision is determined by the stepping angle error of the stepper motor, the gear transmission error, and the tilt error of the piston relative to the rotation axis. Errors are corrected by placing a component in the optical path to obtain an error curve, which is then applied point by point as a software compensation.
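The point-by-point software compensation described in this record can be sketched as linear interpolation over a previously measured error curve. This is a minimal illustration with hypothetical calibration values, not the authors' implementation:

```python
def compensate(measured, error_curve):
    """Subtract the systematic error obtained by linearly interpolating
    a calibrated error curve.
    error_curve: (reading, error) pairs sorted by reading."""
    xs = [r for r, _ in error_curve]
    es = [e for _, e in error_curve]
    if measured <= xs[0]:
        return measured - es[0]          # clamp below the curve
    if measured >= xs[-1]:
        return measured - es[-1]         # clamp above the curve
    for i in range(1, len(xs)):
        if measured <= xs[i]:
            # linear interpolation between the two bracketing points
            t = (measured - xs[i - 1]) / (xs[i] - xs[i - 1])
            return measured - (es[i - 1] + t * (es[i] - es[i - 1]))

# Hypothetical error curve obtained by measuring reference gauges:
curve = [(10.0, 0.02), (20.0, 0.05), (30.0, 0.04)]
```

The error curve would be populated once, by measuring reference parts of known diameter, and reused for every subsequent measurement.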
Effects of measurement error on estimating biological half-life
Caudill, S.P.; Pirkle, J.L.; Michalek, J.E. )
1992-10-01
Direct computation of the observed biological half-life of a toxic compound in a person can lead to an undefined estimate when subsequent concentration measurements are greater than or equal to previous measurements. The likelihood of such an occurrence depends upon the length of time between measurements and the variance (intra-subject biological and inter-sample analytical) associated with the measurements. If the compound is lipophilic the subject's percentage of body fat at the times of measurement can also affect this likelihood. We present formulas for computing a model-predicted half-life estimate and its variance; and we derive expressions for the effect of sample size, measurement error, time between measurements, and any relevant covariates on the variability in model-predicted half-life estimates. We also use statistical modeling to estimate the probability of obtaining an undefined half-life estimate and to compute the expected number of undefined half-life estimates for a sample from a study population. Finally, we illustrate our methods using data from a study of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) exposure among 36 members of Operation Ranch Hand, the Air Force unit responsible for the aerial spraying of Agent Orange in Vietnam.
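The undefined-estimate problem can be seen in the naive two-point half-life formula, which has no solution when the later concentration is not lower than the earlier one. This is a sketch for illustration only; the study's model-predicted estimator is more elaborate:

```python
import math

def naive_half_life(t1, c1, t2, c2):
    """Two-point biological half-life estimate.  Returns None
    (undefined) when measurement error makes the later
    concentration greater than or equal to the earlier one."""
    if c2 >= c1:
        return None
    return (t2 - t1) * math.log(2) / math.log(c1 / c2)
```

With a longer interval between measurements, the true concentration decline dominates the measurement noise, which is why the probability of an undefined estimate shrinks as the time between samples grows.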
Sampling errors in the measurement of rain and hail parameters
NASA Technical Reports Server (NTRS)
Gertzman, H. S.; Atlas, D.
1977-01-01
Attention is given to a general derivation of the fractional standard deviation (FSD) of any integrated property X such that X(D) = cD^n. This work extends that of Joss and Waldvogel (1969). The equation applies to measuring integrated properties of cloud, rain, or hail populations (such as water content, precipitation rate, kinetic energy, or radar reflectivity), which are subject to statistical sampling errors arising from the Poisson-distributed fluctuations of the number of particles sampled in each particle size interval, with the associated variances weighted in proportion to their contribution to the integral parameter being measured. Universal curves are presented for the exponential size distribution, permitting FSD estimation for any parameter from n = 0 to n = 6. The equations and curves also permit corrections for finite upper limits in the size spectrum and a realistic fall speed law.
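Under this Poisson sampling model, the FSD of an integrated property X(D) = cD^n over discrete size bins can be sketched as follows. This is a simplified per-bin illustration under the stated Poisson assumption; the function name and bin values are hypothetical:

```python
import math

def fractional_std_dev(expected_counts, diameters, c=1.0, n=6):
    """FSD of X = sum over particles of c*D^n, where the count in each
    size bin is Poisson with the given expectation: each bin contributes
    m_k * c * D_k^n to the mean and m_k * (c * D_k^n)**2 to the variance."""
    mean = sum(m * c * d**n for m, d in zip(expected_counts, diameters))
    var = sum(m * (c * d**n) ** 2 for m, d in zip(expected_counts, diameters))
    return math.sqrt(var) / mean

# A monodisperse population of m particles gives FSD = 1/sqrt(m),
# e.g. an expected 100 sampled particles -> 10% fractional error.
```

The 1/sqrt(m) behavior for a single bin is the familiar Poisson counting error; the weighted sum generalizes it to broad size spectra where large particles dominate high-order moments such as radar reflectivity (n = 6).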
Errors in Potassium Measurement: A Laboratory Perspective for the Clinician
Asirvatham, Jaya R; Moses, Viju; Bjornson, Loring
2013-01-01
Errors in potassium measurement can cause pseudohyperkalemia, where serum potassium is falsely elevated. Usually, these are recognized either by the laboratory or the clinician. However, the same factors that cause pseudohyperkalemia can mask hypokalemia by pushing measured values into the reference interval. These cases require a high index of suspicion by the clinician, as they cannot be easily identified in the laboratory. This article discusses the causes and mechanisms of spuriously elevated potassium, and current recommendations to minimize those factors. “Reverse” pseudohyperkalemia and the role of correction factors are also discussed. Relevant articles were identified by a literature search performed on PubMed using the terms “pseudohyperkalemia,” “reverse pseudohyperkalemia,” “factitious hyperkalemia,” “spurious hyperkalemia,” and “masked hypokalemia.” PMID:23724399
Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements
NASA Technical Reports Server (NTRS)
Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.
2014-01-01
This presentation discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 2x4 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from CubeSats, unmanned air vehicles (UAV), and commercial aircraft.
Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements
NASA Technical Reports Server (NTRS)
Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.
2014-01-01
This paper discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 4x2 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from cubesats, unmanned air vehicles (UAV), and commercial aircraft.
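The EVM figure of merit used in these studies is conventionally defined as the RMS magnitude of the error vectors relative to the RMS magnitude of the ideal constellation. A minimal sketch of that standard definition follows; this is not the test instrument's internal algorithm:

```python
import math

def error_vector_magnitude(measured, reference):
    """EVM (%) = RMS error-vector magnitude divided by RMS reference
    magnitude, times 100.  Symbols are complex I/Q samples; the two
    lists must be symbol-aligned."""
    err = sum(abs(m - r) ** 2 for m, r in zip(measured, reference))
    ref = sum(abs(r) ** 2 for r in reference)
    return 100.0 * math.sqrt(err / ref)

# Ideal QPSK constellation points (unit-power per axis):
ideal = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
```

A low EVM over the 8.5 m LOS link indicates that the received constellation stays close to the ideal points, which is what allows the QPSK and pi/4 DQPSK links to close.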