Accounting for baseline differences and measurement error in the analysis of change over time.
Braun, Julia; Held, Leonhard; Ledergerber, Bruno
2014-01-15
If change over time is compared across several groups, baseline values must be taken into account so that the comparison is carried out under the same preconditions. Because the observed baseline measurements are distorted by measurement error, it may not be sufficient to include them as a covariate. A solution to this problem was provided recently: by fitting a longitudinal mixed-effects model to all data, including the baseline observations, and then calculating the expected change conditional on the underlying baseline value, groups with the same baseline characteristics can be compared. In this article, we present an extended approach in which a broader set of models can be used. Specifically, any desired set of interactions between the time variable and the other covariates can be included, as can time-dependent covariates. Additionally, we extend the method to adjust for baseline measurement error in other time-varying covariates. We apply the methodology to data from the Swiss HIV Cohort Study to address the question of whether joint infection with HIV-1 and hepatitis C virus leads to a slower increase of CD4 lymphocyte counts over time after the start of antiretroviral therapy. PMID:23900718
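The attenuation problem the authors address can be seen in a few lines of simulation (illustrative only, not the paper's method): regressing change on an error-prone observed baseline shrinks the baseline coefficient by the classical reliability factor var(true)/(var(true)+var(error)), which is why a plain baseline covariate is not enough.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_baseline = rng.normal(0.0, 1.0, n)          # underlying baseline value
sigma_e = 0.8                                    # hypothetical baseline measurement-error SD
observed = true_baseline + rng.normal(0.0, sigma_e, n)

# change over time depends on the *true* baseline (hypothetical slope -0.5)
change = -0.5 * true_baseline + rng.normal(0.0, 0.3, n)

# naive analysis: regress change on the observed (error-prone) baseline
naive_slope = np.polyfit(observed, change, 1)[0]

# theory: slope attenuated by var(true) / (var(true) + var(error))
attenuated = -0.5 / (1.0 + sigma_e**2)
```

With these numbers the naive slope lands near the attenuated value rather than the true -0.5, understating the baseline dependence.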
NASA Astrophysics Data System (ADS)
Houweling, S.; Krol, M.; Bergamaschi, P.; Frankenberg, C.; Dlugokencky, E. J.; Morino, I.; Notholt, J.; Sherlock, V.; Wunch, D.; Beck, V.; Gerbig, C.; Chen, H.; Kort, E. A.; Röckmann, T.; Aben, I.
2014-04-01
This study investigates the use of total column CH4 (XCH4) retrievals from the SCIAMACHY satellite instrument for quantifying large-scale emissions of methane. A unique data set from SCIAMACHY is available spanning almost a decade of measurements, covering a period when the global CH4 growth rate showed a marked transition from stable to increasing mixing ratios. The TM5 4DVAR inverse modelling system has been used to infer CH4 emissions from a combination of satellite and surface measurements for the period 2003-2010. In contrast to earlier inverse modelling studies, the SCIAMACHY retrievals have been corrected for systematic errors using the TCCON network of ground-based Fourier transform spectrometers. The aim is to further investigate the role of bias correction of satellite data in inversions. Methods for bias correction are discussed, and the sensitivity of the optimized emissions to alternative bias correction functions is quantified. It is found that the use of SCIAMACHY retrievals in TM5 4DVAR increases the estimated inter-annual variability of large-scale fluxes by 22% compared with the use of only surface observations. The difference in global methane emissions between 2-year periods before and after July 2006 is estimated at 27-35 Tg yr-1. The use of SCIAMACHY retrievals causes a shift in the emissions from the extra-tropics to the tropics of 50 ± 25 Tg yr-1. The large uncertainty in this value arises from the uncertainty in the bias correction functions. Using measurements from the HIPPO and BARCA aircraft campaigns, we show that systematic errors in the SCIAMACHY measurements are a main factor limiting the performance of the inversions. To further constrain tropical emissions of methane using current and future satellite missions, extended validation capabilities in the tropics are of critical importance.
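A minimal sketch of the kind of bias correction described (synthetic numbers throughout; the study's actual correction functions, derived from TCCON comparisons, are more elaborate): fit the satellite-minus-reference XCH4 differences with a simple latitude-dependent model and subtract it from the retrievals.

```python
import numpy as np

rng = np.random.default_rng(1)
lat = rng.uniform(-60.0, 80.0, 500)              # latitudes of co-located soundings
reference = 1780.0 + 0.2 * lat                   # hypothetical ground-based FTS "truth" (ppb)
bias = 5.0 + 0.05 * lat                          # hypothetical latitude-dependent systematic error
satellite = reference + bias + rng.normal(0.0, 8.0, 500)   # noisy, biased retrievals

# fit satellite-minus-reference differences with a linear bias model, then subtract
coef = np.polyfit(lat, satellite - reference, 1)
corrected = satellite - np.polyval(coef, lat)

mean_bias_before = np.mean(satellite - reference)   # positive bias left in the raw data
mean_bias_after = np.mean(corrected - reference)    # ~0 by construction of the least-squares fit
```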
Pike, D.H.; Morrison, G.W.; Downing, D.J.
1982-04-01
Previous work showed that the Kalman Filter and Linear Smoother produce optimal estimates of inventory and loss for a material balance area. That approach, however, assumes that inventory measurement errors are neither cross-correlated nor serially correlated. The purpose of this report is to extend the previous results by relaxing these assumptions to allow for correlated measurement errors. The results show how to account for correlated measurement errors in the linear system model of the Kalman Filter/Linear Smoother. An algorithm for calculating the required error covariance matrices is also included.
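One standard way to relax the uncorrelated-error assumption, shown here as a sketch rather than the report's exact algorithm, is state augmentation: fold an AR(1) measurement-error term into the state vector so the augmented measurement equation carries no separate white-noise term. All numbers below are hypothetical.

```python
import numpy as np

phi, r = 0.9, 0.04                     # AR(1) coefficient and innovation variance (hypothetical)
F = np.array([[1.0, 0.0],              # inventory modelled as constant
              [0.0, phi]])             # AR(1) measurement-error state
Q = np.diag([0.0, r])                  # process noise drives only the error state
H = np.array([[1.0, 1.0]])             # observation = inventory + correlated error

def kalman_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    S = float(H @ P @ H.T)                        # innovation variance
    K = (P @ H.T).flatten() / S                   # Kalman gain
    x = x + K * (z - float(H @ x))                # update state
    P = (np.eye(2) - np.outer(K, H[0])) @ P       # update covariance
    return x, P

# simulate inventory measurements corrupted by serially correlated error
rng = np.random.default_rng(2)
true_inventory, v = 5.0, 0.0
x, P = np.zeros(2), np.eye(2)
for _ in range(400):
    v = phi * v + rng.normal(0.0, np.sqrt(r))     # AR(1) measurement error
    x, P = kalman_step(x, P, true_inventory + v)
```

Despite every observation being biased by the slowly varying error, the filter separates the constant inventory from the decaying AR(1) component.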
40 CFR 96.56 - Account error.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Tracking System § 96.56 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10...
40 CFR 96.156 - Account error.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Tracking System § 96.156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within...
40 CFR 96.256 - Account error.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Tracking System § 96.256 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR SO2 Allowance Tracking System account. Within...
40 CFR 96.256 - Account error.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Tracking System § 96.256 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR SO2 Allowance Tracking System account. Within...
40 CFR 96.256 - Account error.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Tracking System § 96.256 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR SO2 Allowance Tracking System account. Within...
40 CFR 96.256 - Account error.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Tracking System § 96.256 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR SO2 Allowance Tracking System account. Within...
40 CFR 96.156 - Account error.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Tracking System § 96.156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within...
40 CFR 96.256 - Account error.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Tracking System § 96.256 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR SO2 Allowance Tracking System account. Within...
40 CFR 96.156 - Account error.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Tracking System § 96.156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within...
40 CFR 96.156 - Account error.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Tracking System § 96.156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within...
40 CFR 96.156 - Account error.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Tracking System § 96.156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within...
40 CFR 96.56 - Account error.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Tracking System § 96.56 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10...
40 CFR 96.56 - Account error.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Tracking System § 96.56 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10...
40 CFR 96.56 - Account error.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Tracking System § 96.56 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10...
40 CFR 96.56 - Account error.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Tracking System § 96.56 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10...
40 CFR 73.37 - Account error.
Code of Federal Regulations, 2010 CFR
2010-07-01
... ALLOWANCE SYSTEM Allowance Tracking System § 73.37 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Tracking System account....
40 CFR 73.37 - Account error.
Code of Federal Regulations, 2012 CFR
2012-07-01
... ALLOWANCE SYSTEM Allowance Tracking System § 73.37 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Tracking System account....
40 CFR 73.37 - Account error.
Code of Federal Regulations, 2014 CFR
2014-07-01
... ALLOWANCE SYSTEM Allowance Tracking System § 73.37 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Tracking System account....
40 CFR 73.37 - Account error.
Code of Federal Regulations, 2013 CFR
2013-07-01
... ALLOWANCE SYSTEM Allowance Tracking System § 73.37 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Tracking System account....
40 CFR 73.37 - Account error.
Code of Federal Regulations, 2011 CFR
2011-07-01
... ALLOWANCE SYSTEM Allowance Tracking System § 73.37 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Tracking System account....
40 CFR 97.56 - Account error.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Account error. 97.56 Section 97.56... Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10 business days of making...
40 CFR 97.56 - Account error.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Account error. 97.56 Section 97.56... Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10 business days of making...
40 CFR 96.356 - Account error.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Season Allowance Tracking System § 96.356 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance...
40 CFR 60.4156 - Account error.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Generating Units Hg Allowance Tracking System § 60.4156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Hg Allowance Tracking...
40 CFR 60.4156 - Account error.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Generating Units Hg Allowance Tracking System § 60.4156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Hg Allowance Tracking...
40 CFR 96.356 - Account error.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Allowance Tracking System § 96.356 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking...
40 CFR 96.356 - Account error.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Allowance Tracking System § 96.356 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking...
40 CFR 97.356 - Account error.
Code of Federal Regulations, 2012 CFR
2012-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR NOX Ozone Season Allowance Tracking System § 97.356 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking System account. Within...
40 CFR 97.356 - Account error.
Code of Federal Regulations, 2014 CFR
2014-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR NOX Ozone Season Allowance Tracking System § 97.356 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking System account. Within...
40 CFR 97.356 - Account error.
Code of Federal Regulations, 2013 CFR
2013-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR NOX Ozone Season Allowance Tracking System § 97.356 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking System account. Within...
40 CFR 97.356 - Account error.
Code of Federal Regulations, 2010 CFR
2010-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR NOX Ozone Season Allowance Tracking System § 97.356 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking System account. Within...
40 CFR 97.356 - Account error.
Code of Federal Regulations, 2011 CFR
2011-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR NOX Ozone Season Allowance Tracking System § 97.356 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking System account. Within...
40 CFR 97.156 - Account error.
Code of Federal Regulations, 2011 CFR
2011-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR NOX Allowance Tracking System § 97.156... any error in any CAIR NOX Allowance Tracking System account. Within 10 business days of making...
40 CFR 97.256 - Account error.
Code of Federal Regulations, 2011 CFR
2011-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR SO2 Allowance Tracking System § 97.256... any error in any CAIR SO2 Allowance Tracking System account. Within 10 business days of making...
40 CFR 97.156 - Account error.
Code of Federal Regulations, 2014 CFR
2014-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR NOX Allowance Tracking System § 97.156... any error in any CAIR NOX Allowance Tracking System account. Within 10 business days of making...
40 CFR 97.256 - Account error.
Code of Federal Regulations, 2013 CFR
2013-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR SO2 Allowance Tracking System § 97.256... any error in any CAIR SO2 Allowance Tracking System account. Within 10 business days of making...
40 CFR 97.256 - Account error.
Code of Federal Regulations, 2014 CFR
2014-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR SO2 Allowance Tracking System § 97.256... any error in any CAIR SO2 Allowance Tracking System account. Within 10 business days of making...
40 CFR 97.156 - Account error.
Code of Federal Regulations, 2012 CFR
2012-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR NOX Allowance Tracking System § 97.156... any error in any CAIR NOX Allowance Tracking System account. Within 10 business days of making...
40 CFR 97.156 - Account error.
Code of Federal Regulations, 2010 CFR
2010-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR NOX Allowance Tracking System § 97.156... any error in any CAIR NOX Allowance Tracking System account. Within 10 business days of making...
40 CFR 97.256 - Account error.
Code of Federal Regulations, 2010 CFR
2010-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR SO2 Allowance Tracking System § 97.256... any error in any CAIR SO2 Allowance Tracking System account. Within 10 business days of making...
40 CFR 97.256 - Account error.
Code of Federal Regulations, 2012 CFR
2012-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR SO2 Allowance Tracking System § 97.256... any error in any CAIR SO2 Allowance Tracking System account. Within 10 business days of making...
40 CFR 97.156 - Account error.
Code of Federal Regulations, 2013 CFR
2013-07-01
... BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR NOX Allowance Tracking System § 97.156... any error in any CAIR NOX Allowance Tracking System account. Within 10 business days of making...
Tenan, Matthew S.
2016-01-01
Indirect calorimetry and oxygen consumption (VO2) are accepted tools in human physiology research. It has been shown that indirect calorimetry systems exhibit differential measurement error, where the error of a device is systematically different depending on the volume of gas flow. Moreover, systems commonly report multiple decimal places of precision, giving the clinician a false sense of device accuracy. The purpose of this manuscript is to demonstrate the use of a novel statistical tool which models the reliability of two specific indirect calorimetry systems, Douglas bag and Parvomedics 2400 TrueOne, as univariate normal distributions and implements the distribution overlapping coefficient to determine the likelihood that two VO2 measures are the same. A command line implementation of the tool is available for the R programming language as well as a web-based graphical user interface (GUI). This tool is valuable for clinicians performing a single-subject analysis as well as researchers interested in determining if their observed differences exceed the error of the device. PMID:27242546
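The published tool is an R package with a web GUI; a minimal Python re-implementation of the underlying overlapping-coefficient calculation (assuming normal measurement distributions, as the abstract describes, with hypothetical VO2 numbers) might look like:

```python
import numpy as np

def norm_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def overlap_coefficient(mu1, sd1, mu2, sd2, n=200_001):
    # numerically integrate min(f1, f2) on a grid spanning both densities
    lo = min(mu1 - 8 * sd1, mu2 - 8 * sd2)
    hi = max(mu1 + 8 * sd1, mu2 + 8 * sd2)
    x = np.linspace(lo, hi, n)
    y = np.minimum(norm_pdf(x, mu1, sd1), norm_pdf(x, mu2, sd2))
    return float(np.sum(y) * (x[1] - x[0]))

# two VO2 readings (L/min) one device-error SD apart (hypothetical numbers)
ovl = overlap_coefficient(2.50, 0.10, 2.60, 0.10)
```

An overlap near 1 means the two readings are indistinguishable given device error; an overlap near 0 means the difference exceeds what the device's error can explain.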
40 CFR 97.427 - Account error.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Account error. 97.427 Section 97.427 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) FEDERAL NOX BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS TR NOX Annual Trading Program §...
40 CFR 97.427 - Account error.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Account error. 97.427 Section 97.427 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) FEDERAL NOX BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS TR NOX Annual Trading Program §...
40 CFR 97.427 - Account error.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Account error. 97.427 Section 97.427 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) FEDERAL NOX BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS TR NOX Annual Trading Program §...
40 CFR 97.527 - Account error.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Account error. 97.527 Section 97.527 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) FEDERAL NOX BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS TR NOX Ozone Season Trading Program §...
40 CFR 97.527 - Account error.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Account error. 97.527 Section 97.527 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) FEDERAL NOX BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS TR NOX Ozone Season Trading Program §...
40 CFR 97.527 - Account error.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Account error. 97.527 Section 97.527 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) FEDERAL NOX BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS TR NOX Ozone Season Trading Program §...
NASA Astrophysics Data System (ADS)
Henderson, Robert K.
1999-12-01
It is widely accepted in the electronics industry that measurement gauge error variation should be no larger than 10% of the related specification window. In a previous paper, 'What Amount of Measurement Error is Too Much?', the author used a framework from the process industries to evaluate the impact of measurement error variation in terms of both customer and supplier risk (i.e., Non-conformance and Yield Loss). Application of this framework in its simplest form suggested that in many circumstances the 10% criterion might be more stringent than is reasonably necessary. This paper reviews the framework and results of the earlier work, then examines some of the possible extensions to this framework suggested in that paper, including variance component models and sampling plans applicable in the photomask and semiconductor businesses. The potential impact of imperfect process control practices will be examined as well.
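The customer/supplier risk framing can be sketched with a short Monte Carlo (all numbers hypothetical, including the 10%-of-spec-window gauge spread): parts whose true value lies near a spec limit can be misclassified once gauge error is added to the measurement.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500_000
spec = 3.0                                   # symmetric spec limits, in process-SD units
gauge_sd = 0.1 * (2 * spec) / 6              # gauge 6-sigma spread at 10% of the spec window
true_val = rng.normal(0.0, 1.0, n)
measured = true_val + rng.normal(0.0, gauge_sd, n)

# supplier risk: conforming part rejected (yield loss)
yield_loss = np.mean((np.abs(true_val) <= spec) & (np.abs(measured) > spec))
# customer risk: non-conforming part accepted
nonconform = np.mean((np.abs(true_val) > spec) & (np.abs(measured) <= spec))
```

Rerunning with a larger `gauge_sd` shows how both risks grow, which is the trade-off the 10% criterion is meant to bound.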
Measuring Test Measurement Error: A General Approach
ERIC Educational Resources Information Center
Boyd, Donald; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James
2013-01-01
Test-based accountability as well as value-added assessments and much experimental and quasi-experimental research in education rely on achievement tests to measure student skills and knowledge. Yet, we know little regarding fundamental properties of these tests, an important example being the extent of measurement error and its implications for…
Compact disk error measurements
NASA Technical Reports Server (NTRS)
Howe, D.; Harriman, K.; Tehranchi, B.
1993-01-01
The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high-resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
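The burst and gap statistics in the first objective reduce to run-length encoding of per-byte error flags. A toy sketch with synthetic, independent flags (a real read channel shows clustered errors; independence is assumed here only to keep the example short):

```python
from itertools import groupby
import random

random.seed(6)
# synthetic per-byte error flags from a read channel (1 = byte in error)
flags = [1 if random.random() < 0.01 else 0 for _ in range(100_000)]

# run-length encode: consecutive 1s form error bursts, consecutive 0s form good-data gaps
bursts = [sum(1 for _ in g) for v, g in groupby(flags) if v == 1]
gaps = [sum(1 for _ in g) for v, g in groupby(flags) if v == 0]

mean_gap = sum(gaps) / len(gaps)   # average good-data gap length, in bytes
```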
Measurement error in geometric morphometrics.
Fruciano, Carmelo
2016-06-01
Geometric morphometrics, a set of methods for the statistical analysis of shape once saluted as a revolutionary advancement in the analysis of morphology, is now mature and routinely used in ecology and evolution. However, a factor often disregarded in empirical studies is the presence and the extent of measurement error. This is potentially a very serious issue because random measurement error can inflate the amount of variance and, since many statistical analyses are based on the amount of "explained" relative to "residual" variance, can result in loss of statistical power. On the other hand, systematic bias can affect statistical analyses by biasing the results (i.e. variation due to bias is incorporated in the analysis and treated as biologically-meaningful variation). Here, I briefly review common sources of error in geometric morphometrics. I then review the most commonly used methods to measure and account for both random and non-random measurement error, providing a worked example using a real dataset. PMID:27038025
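The random component of measurement error described here is commonly quantified as repeatability from repeated digitizations of the same specimens. A simulated sketch (hypothetical variance components, not the paper's worked example), using the standard one-way ANOVA decomposition:

```python
import numpy as np

rng = np.random.default_rng(5)
s, k = 50, 3                                        # specimens, replicate digitizations each
true_trait = rng.normal(0.0, 1.0, s)                # among-individual variation (SD 1.0)
data = true_trait[:, None] + rng.normal(0.0, 0.4, (s, k))   # + digitizing error (SD 0.4)

# one-way ANOVA mean squares
group_means = data.mean(axis=1)
ms_among = k * np.sum((group_means - data.mean()) ** 2) / (s - 1)
ms_within = np.sum((data - group_means[:, None]) ** 2) / (s * (k - 1))

# variance components and repeatability (fraction of variance not due to error)
s2_among = (ms_among - ms_within) / k
repeatability = s2_among / (s2_among + ms_within)
```

With these inputs repeatability should be close to 1/(1 + 0.4²) ≈ 0.86; low repeatability warns that measurement error will erode statistical power.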
Measurement Errors in Organizational Surveys.
ERIC Educational Resources Information Center
Dutka, Solomon; Frankel, Lester R.
1993-01-01
Describes three classes of measurement techniques: (1) interviewing methods; (2) record retrieval procedures; and (3) observation methods. Discusses primary reasons for measurement error. Concludes that, although measurement error can be defined and controlled for, there are other design factors that also must be considered. (CFR)
40 CFR 96.356 - Account error.
Code of Federal Regulations, 2010 CFR
2010-07-01
... TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS FOR STATE IMPLEMENTATION PLANS CAIR NOX Ozone Season... on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking...
40 CFR 96.356 - Account error.
Code of Federal Regulations, 2011 CFR
2011-07-01
... TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS FOR STATE IMPLEMENTATION PLANS CAIR NOX Ozone Season... on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking...
Evaluation of accountability measurements
Cacic, C.G.
1988-01-01
The New Brunswick Laboratory (NBL) is programmatically responsible to the U.S. Department of Energy (DOE) Office of Safeguards and Security (OSS) for providing independent review and evaluation of accountability measurement technology in DOE nuclear facilities. This function is addressed in part through the NBL Safeguards Measurement Evaluation (SME) Program. The SME Program utilizes both on-site review of measurement methods and material-specific measurement evaluation studies to provide information concerning the adequacy of subject accountability measurements. This paper reviews SME Program activities for the 1986-87 time period, with emphasis on noted improvements in measurement capabilities. Continued evolution of the SME Program to respond to changing safeguards concerns is discussed.
Measurements and material accounting
Hammond, G.A.
1989-11-01
The DOE role for the NBL in safeguarding nuclear material into the 21st century is discussed. Development of measurement technology and reference materials supporting requirements of SDI, SIS, AVLIS, pyrochemical reprocessing, fusion, waste storage, plant modernization program, and improved tritium accounting are some of the suggested examples.
Pendulum Shifts, Context, Error, and Personal Accountability
Harold Blackman; Oren Hester
2011-09-01
This paper describes a series of tools that were developed to achieve a balance in understanding LOWs and the human component of events (including accountability) as the INL continues its shift to a learning culture where people report, are accountable, and are interested in making a positive difference, and want to report because information is handled correctly and the result benefits both the reporting individual and the organization. We present our model for understanding these interrelationships and the initiatives that were undertaken to improve overall performance.
Human errors and measurement uncertainty
NASA Astrophysics Data System (ADS)
Kuselman, Ilya; Pennecchi, Francesca
2015-04-01
Evaluating the residual risk of human errors in a measurement and testing laboratory, remaining after the error reduction by the laboratory quality system, and quantifying the consequences of this risk for the quality of the measurement/test results are discussed based on expert judgments and Monte Carlo simulations. A procedure for evaluation of the contribution of the residual risk to the measurement uncertainty budget is proposed. Examples are provided using earlier published sets of expert judgments on human errors in pH measurement of groundwater, elemental analysis of geological samples by inductively coupled plasma mass spectrometry, and multi-residue analysis of pesticides in fruits and vegetables. The human error contribution to the measurement uncertainty budget in the examples was not negligible, yet also not dominant. This was assessed as a good risk management result.
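A toy version of the Monte Carlo approach (all probabilities and magnitudes hypothetical, not the published expert-judgment values): simulate routine measurement scatter plus a small residual chance of a human slip, then back out the human-error contribution to the combined uncertainty.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
u_routine = 0.02     # routine standard uncertainty (e.g. pH units; hypothetical)
p_slip = 0.01        # hypothetical residual probability of a human error
slip = 0.15          # hypothetical bias magnitude a slip introduces

base = rng.normal(0.0, u_routine, n)
slipped = rng.random(n) < p_slip
results = base + np.where(slipped, rng.choice([-slip, slip], n), 0.0)

u_total = results.std()                          # combined spread of simulated results
u_human = np.sqrt(u_total**2 - u_routine**2)     # inferred human-error component
```

As in the paper's examples, the human-error term here is not negligible (it inflates the budget measurably) yet also not dominant.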
Performance testing accountability measurements
Oldham, R.D.; Mitchell, W.G.; Spaletto, M.I.
1993-12-31
The New Brunswick Laboratory (NBL) provides assessment support to the DOE Operations Offices in the area of Material Control and Accountability (MC and A). During surveys of facilities, the Operations Offices have begun to request from NBL either assistance in providing materials for performance testing of accountability measurements or both materials and personnel to do performance testing. To meet these needs, NBL has developed measurement and measurement control performance test procedures and materials. The present NBL repertoire of performance tests include the following: (1) mass measurement performance testing procedures using calibrated and traceable test weights, (2) uranium elemental concentration (assay) measurement performance tests which use ampulated solutions of normal uranyl nitrate containing approximately 7 milligrams of uranium per gram of solution, and (3) uranium isotopic measurement performance tests which use ampulated uranyl nitrate solutions with enrichments ranging from 4% to 90% U-235. The preparation, characterization, and packaging of the uranium isotopic and assay performance test materials were done in cooperation with the NBL Safeguards Measurements Evaluation Program since these materials can be used for both purposes.
Accountability Measures Report, 2007
ERIC Educational Resources Information Center
North Dakota University System, 2007
2007-01-01
This document is a tool for demonstrating that the University System is meeting the "flexibility with accountability" expectations of SB 2003 passed by the 2001 Legislative Assembly. The 2007 report reflects some of the many ways North Dakota University System (NDUS) colleges and universities are developing the human capital needed to create a…
Accountability Measures Report, 2006
ERIC Educational Resources Information Center
North Dakota University System, 2006
2006-01-01
This document is a valuable tool for demonstrating that the University System is meeting the "flexibility with accountability" expectations of SB 2003 passed by the 2001 Legislative Assembly. The 2006 report reflects some of the many ways North Dakota University System (NDUS) colleges and universities are developing the human capital needed to…
Better Stability with Measurement Errors
NASA Astrophysics Data System (ADS)
Argun, Aykut; Volpe, Giovanni
2016-06-01
Often it is desirable to stabilize a system around an optimal state. This can be effectively accomplished using feedback control, where the system deviation from the desired state is measured in order to determine the magnitude of the restoring force to be applied. Contrary to conventional wisdom, i.e. that a more precise measurement is expected to improve the system stability, here we demonstrate that a certain degree of measurement error can improve the system stability. We exemplify the implications of this finding with numerical examples drawn from various fields, such as the operation of a temperature controller, the confinement of a microscopic particle, the localization of a target by a microswimmer, and the control of a population.
Better Stability with Measurement Errors
NASA Astrophysics Data System (ADS)
Argun, Aykut; Volpe, Giovanni
2016-04-01
Often it is desirable to stabilize a system around an optimal state. This can be effectively accomplished using feedback control, where the system deviation from the desired state is measured in order to determine the magnitude of the restoring force to be applied. Contrary to conventional wisdom, i.e. that a more precise measurement is expected to improve the system stability, here we demonstrate that a certain degree of measurement error can improve the system stability. We exemplify the implications of this finding with numerical examples drawn from various fields, such as the operation of a temperature controller, the confinement of a microscopic particle, the localization of a target by a microswimmer, and the control of a population.
Accounting for Errors in Model Analysis Theory: A Numerical Approach
NASA Astrophysics Data System (ADS)
Sommer, Steven R.; Lindell, Rebecca S.
2004-09-01
By studying the patterns of a group of individuals' responses to a series of multiple-choice questions, researchers can utilize Model Analysis Theory to create a probability distribution of mental models for a student population. The eigenanalysis of this distribution yields information about what mental models the students possess, as well as how consistently they utilize said mental models. Although the theory considers the probabilistic distribution to be fundamental, there exists opportunities for random errors to occur. In this paper we will discuss a numerical approach for mathematically accounting for these random errors. As an example of this methodology, analysis of data obtained from the Lunar Phases Concept Inventory will be presented. Limitations and applicability of this numerical approach will be discussed.
Impact of Measurement Error on Synchrophasor Applications
Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.; Zhao, Jiecheng; Tan, Jin; Wu, Ling; Zhan, Lingwei
2015-07-01
Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.
Rapid mapping of volumetric machine errors using distance measurements
Krulewich, D.A.
1998-04-01
This paper describes a relatively inexpensive, fast, and easy-to-execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on distance measurements throughout the work volume; and (3) fitting the error model using the nonlinear equation for the distance. The error model is formulated from the kinematic relationship among the six degrees of freedom of error on each moving axis. Each parametric error is expressed as a function of position, and the parametric errors are combined to predict the error between the functional point and the workpiece, also as a function of position. A series of distances between several fixed base locations and various functional points in the work volume is measured using a Laser Ball Bar (LBB). Each measured distance is a nonlinear function of the commanded location of the machine, the machine error, and the base locations. Using the error model, the nonlinear equations are solved, producing a fit for the error model. Note that, given approximate distances between each pair of base locations, the exact base locations in the machine coordinate system are determined during the nonlinear fitting procedure. Furthermore, with the use of more than three base locations, bias error in the measuring instrument can be removed. The volumetric errors of a three-axis commercial machining center have been mapped using this procedure. In this study, only errors associated with the nominal position of the machine were considered. Other errors, such as thermally induced and load-induced errors, were not considered, although the mathematical model has the ability to account for these errors. Due to the proprietary nature of the projects we are
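The fitting step described above can be sketched as a nonlinear least-squares problem over distance residuals. The toy model below is hypothetical: a 2-D linear distortion matrix `A` stands in for the full kinematic error model, and the base locations are taken as known for brevity (the actual procedure also solves for them).

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Hypothetical toy error model: volumetric error at commanded position p
# is A @ p, so the true (distorted) position is p + A @ p.
A_true = np.array([[1e-4, 5e-5], [0.0, -8e-5]])

bases = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # fixed base locations
points = rng.uniform(0.2, 0.8, size=(40, 2))             # commanded positions

def distances(A_flat, bases, points):
    """Predicted base-to-point distances for a given error model A."""
    A = A_flat.reshape(2, 2)
    actual = points + points @ A.T                        # distorted positions
    return np.linalg.norm(actual[None, :, :] - bases[:, None, :], axis=2).ravel()

# Simulated LBB measurements: true distances plus a little instrument noise.
measured = distances(A_true.ravel(), bases, points)
measured += rng.normal(0.0, 1e-7, measured.shape)

# Fit the error-model parameters by minimizing distance residuals.
res = least_squares(lambda a: distances(a, bases, points) - measured,
                    x0=np.zeros(4))
A_fit = res.x.reshape(2, 2)
print(np.max(np.abs(A_fit - A_true)))   # residual model error (small)
```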
Nozari, Nazbanou; Dell, Gary S.; Schwartz, Myrna F.
2011-01-01
Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the double dissociation between comprehension and error-detection ability observed in aphasic patients. We propose a new theory of speech-error detection which is instead based on the production process itself. The theory borrows from studies of forced-choice-response tasks the notion that error detection is accomplished by monitoring response conflict via a frontal brain structure, such as the anterior cingulate cortex. We adapt this idea to the two-step model of word production, and test the model-derived predictions on a sample of aphasic patients. Our results show a strong correlation between patients' error-detection ability and the model's characterization of their production skills, and no significant correlation between error detection and comprehension measures, thus supporting a production-based monitor, generally, and the implemented conflict-based monitor in particular. The successful application of the conflict-based theory to error detection in linguistic as well as non-linguistic domains points to a domain-general monitoring system. PMID:21652015
Conditional Standard Error of Measurement in Prediction.
ERIC Educational Resources Information Center
Woodruff, David
1990-01-01
A method of estimating conditional standard error of measurement at specific score/ability levels is described that avoids theoretical problems identified for previous methods. The method focuses on variance of observed scores conditional on a fixed value of an observed parallel measurement, decomposing these variances into true and error parts.…
Minimizing noise-temperature measurement errors
NASA Technical Reports Server (NTRS)
Stelzried, C. T.
1992-01-01
An analysis of noise-temperature measurement errors of low-noise amplifiers was performed. Results of this analysis can be used to optimize measurement schemes for minimum errors. For the cases evaluated, the effective noise temperature (Te) of a Ka-band maser can be measured most accurately by switching between an ambient and a 2-K cooled load without an isolation attenuator. A measurement accuracy of 0.3 K was obtained for this example.
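The switched-load measurement described above rests on the standard Y-factor relation, Te = (Th − Y·Tc)/(Y − 1) with Y the hot/cold output-power ratio. A minimal sketch (the load temperatures are illustrative values suggested by the abstract, not the paper's exact setup):

```python
def effective_noise_temperature(p_hot, p_cold, t_hot=295.0, t_cold=2.0):
    """Y-factor estimate of amplifier noise temperature Te, in kelvin.

    p_hot, p_cold: output powers (any linear unit) with the ambient and
    cooled loads connected; t_hot, t_cold: load temperatures in kelvin.
    """
    y = p_hot / p_cold
    return (t_hot - y * t_cold) / (y - 1.0)

# Synthetic check: an amplifier with Te = 10 K gives output power
# proportional to (T_load + Te), so the formula should recover 10 K.
te = 10.0
p_hot, p_cold = 295.0 + te, 2.0 + te
te_est = effective_noise_temperature(p_hot, p_cold)
print(te_est)  # 10.0 (to floating-point precision)
```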
Performance measurement: the new accountability.
Martin, L L; Kettner, P M
1997-01-01
Over the years, "accountability" in the human services has focused upon issues such as the legal framework, organizational management, financial responsibility, political concerns, and client inputs and expectations. Within the past decade, the meaning of "accountability" has been extended to the more dynamic organizational functions of "efficiency" and "effectiveness." Efficiency and effectiveness increasingly must be put to the tests of performance measurement and outcome evaluation. Forces outside the social work profession, including, among others, federal expectations and initiatives and the increased implementation of the concept of managed care, will ensure that efficiency and effectiveness will be central and highlighted concerns far into the future. This "new accountability" is demanded by the stakeholders in the nonprofit sector and by federal requirements built into the planning, funding, and implementation processes for nonprofits and for-profits alike. PMID:10166757
Read, Randy J; McCoy, Airlie J
2016-03-01
The crystallographic diffraction experiment measures Bragg intensities; crystallographic electron-density maps and other crystallographic calculations in phasing require structure-factor amplitudes. If data were measured with no errors, the structure-factor amplitudes would be trivially proportional to the square roots of the intensities. When the experimental errors are large, and especially when random errors yield negative net intensities, the conversion of intensities and their error estimates into amplitudes and associated error estimates becomes nontrivial. Although this problem has been addressed intermittently in the history of crystallographic phasing, current approaches to accounting for experimental errors in macromolecular crystallography have numerous significant defects. These have been addressed with the formulation of LLGI, a log-likelihood-gain function in terms of the Bragg intensities and their associated experimental error estimates. LLGI has the correct asymptotic behaviour for data with large experimental error, appropriately downweighting these reflections without introducing bias. LLGI abrogates the need for the conversion of intensity data to amplitudes, which is usually performed with the French and Wilson method [French & Wilson (1978), Acta Cryst. A34, 517-525], wherever likelihood target functions are required. It has general applicability for a wide variety of algorithms in macromolecular crystallography, including scaling, characterizing anisotropy and translational noncrystallographic symmetry, detecting outliers, experimental phasing, molecular replacement and refinement. Because it is impossible to reliably recover the original intensity data from amplitudes, it is suggested that crystallographers should always deposit the intensity data in the Protein Data Bank. PMID:26960124
Read, Randy J.; McCoy, Airlie J.
2016-01-01
The crystallographic diffraction experiment measures Bragg intensities; crystallographic electron-density maps and other crystallographic calculations in phasing require structure-factor amplitudes. If data were measured with no errors, the structure-factor amplitudes would be trivially proportional to the square roots of the intensities. When the experimental errors are large, and especially when random errors yield negative net intensities, the conversion of intensities and their error estimates into amplitudes and associated error estimates becomes nontrivial. Although this problem has been addressed intermittently in the history of crystallographic phasing, current approaches to accounting for experimental errors in macromolecular crystallography have numerous significant defects. These have been addressed with the formulation of LLGI, a log-likelihood-gain function in terms of the Bragg intensities and their associated experimental error estimates. LLGI has the correct asymptotic behaviour for data with large experimental error, appropriately downweighting these reflections without introducing bias. LLGI abrogates the need for the conversion of intensity data to amplitudes, which is usually performed with the French and Wilson method [French & Wilson (1978), Acta Cryst. A34, 517–525], wherever likelihood target functions are required. It has general applicability for a wide variety of algorithms in macromolecular crystallography, including scaling, characterizing anisotropy and translational noncrystallographic symmetry, detecting outliers, experimental phasing, molecular replacement and refinement. Because it is impossible to reliably recover the original intensity data from amplitudes, it is suggested that crystallographers should always deposit the intensity data in the Protein Data Bank. PMID:26960124
Protecting weak measurements against systematic errors
NASA Astrophysics Data System (ADS)
Pang, Shengshi; Alonso, Jose Raul Gonzalez; Brun, Todd A.; Jordan, Andrew N.
2016-07-01
In this work, we consider the systematic error of quantum metrology by weak measurements under decoherence. We derive the systematic error of maximum likelihood estimation in general to the first-order approximation of a small deviation in the probability distribution and study the robustness of standard weak measurement and postselected weak measurements against systematic errors. We show that, with a large weak value, the systematic error of a postselected weak measurement when the probe undergoes decoherence can be significantly lower than that of a standard weak measurement. This indicates another advantage of weak-value amplification in improving the performance of parameter estimation. We illustrate the results by an exact numerical simulation of decoherence arising from a bosonic mode and compare it to the first-order analytical result we obtain.
Measuring Cyclic Error in Laser Heterodyne Interferometers
NASA Technical Reports Server (NTRS)
Ryan, Daniel; Abramovici, Alexander; Zhao, Feng; Dekens, Frank; An, Xin; Azizi, Alireza; Chapsky, Jacob; Halverson, Peter
2010-01-01
An improved method and apparatus have been devised for measuring cyclic errors in the readouts of laser heterodyne interferometers that are configured and operated as displacement gauges. The cyclic errors arise as a consequence of mixing of spurious optical and electrical signals in beam launchers that are subsystems of such interferometers. The conventional approach to measurement of cyclic error involves phase measurements and yields values precise to within about 10 pm over air optical paths at laser wavelengths in the visible and near infrared. The present approach, which involves amplitude measurements instead of phase measurements, yields values precise to about 0.1 pm, about 100 times the precision of the conventional approach. In a displacement gauge of the type of interest here, the laser heterodyne interferometer is used to measure any change in distance along an optical axis between two corner-cube retroreflectors. One of the corner-cube retroreflectors is mounted on a piezoelectric transducer (see figure), which is used to introduce a low-frequency periodic displacement that can be measured by the gauges. The transducer is excited at a frequency of 9 Hz by a triangular waveform to generate a 9-Hz triangular-wave displacement having an amplitude of 25 microns. The displacement gives rise to both amplitude and phase modulation of the heterodyne signals in the gauges. The modulation includes cyclic error components, and the magnitude of the cyclic-error component of the phase modulation is what one needs to measure in order to determine the magnitude of the cyclic displacement error. The precision attainable in the conventional (phase measurement) approach to measuring cyclic error is limited because the phase measurements are af-
Gear Transmission Error Measurement System Made Operational
NASA Technical Reports Server (NTRS)
Oswald, Fred B.
2002-01-01
A system directly measuring the transmission error between meshing spur or helical gears was installed at the NASA Glenn Research Center and made operational in August 2001. This system employs light beams directed by lenses and prisms through gratings mounted on the two gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. The device is capable of resolution better than 0.1 μm (one thousandth the thickness of a human hair). The measured transmission error can be displayed in a "map" that shows how the transmission error varies with the gear rotation, or it can be converted to spectra to show the components at the meshing frequencies. Accurate transmission error data will help researchers better understand the mechanisms that cause gear noise and vibration. The Design Unit at the University of Newcastle in England specifically designed the new system for NASA. It is the only device in the United States that can measure dynamic transmission error at high rotational speeds. The new system will be used to develop new techniques to reduce dynamic transmission error along with the resulting noise and vibration of aeronautical transmissions.
Reducing Measurement Error in Student Achievement Estimation
ERIC Educational Resources Information Center
Battauz, Michela; Bellio, Ruggero; Gori, Enrico
2008-01-01
The achievement level is a variable measured with error that can be estimated by means of the Rasch model. Teacher grades also measure the achievement level, but they are expressed on a different scale. This paper proposes a method for combining these two scores to obtain a synthetic measure of the achievement level based on the theory developed…
Measurement error analysis of taxi meter
NASA Astrophysics Data System (ADS)
He, Hong; Li, Dan; Li, Hang; Zhang, Da-Jian; Hou, Ming-Feng; Zhang, Shi-pu
2011-12-01
Error testing of the taximeter covers two aspects: (1) testing the time error of the taximeter and (2) testing the distance (usage) error of the machine. The paper first describes the working principle of the meter and the principle of the error verification device. Based on JJG 517-2009, "Taximeter Verification Regulation", the paper focuses on analyzing the machine error and test error of the taxi meter, and the detection methods for time error and distance error are discussed as well. Under repeatability conditions, Type A standard uncertainty components are evaluated from repeated measurements, while Type B standard uncertainty components are evaluated under differing conditions. Comparison and analysis of the results show that the meter complies with JJG 517-2009, which improves accuracy and efficiency considerably. In practice, the meter not only compensates for the lack of accuracy but also ensures that the transaction between drivers and passengers is fair, enriching the value of the taxi as a mode of transportation.
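The Type A (repeated-measurement) uncertainty evaluation mentioned above follows the standard GUM recipe: the experimental standard deviation of the mean of n repeated readings. A minimal sketch, with made-up readings for illustration:

```python
import math
import statistics

def type_a_uncertainty(readings):
    """GUM Type A standard uncertainty of the mean of repeated readings:
    the experimental standard deviation divided by sqrt(n)."""
    s = statistics.stdev(readings)        # sample standard deviation
    return s / math.sqrt(len(readings))

# Hypothetical repeated time-error readings of a taximeter, in seconds
readings = [0.12, 0.15, 0.11, 0.14, 0.13, 0.12, 0.16, 0.13]
u_a = type_a_uncertainty(readings)
print(round(u_a, 4))
```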
38 CFR 2.7 - Delegation of authority to provide relief on account of administrative error.
Code of Federal Regulations, 2010 CFR
2010-07-01
... to provide relief on account of administrative error. 2.7 Section 2.7 Pensions, Bonuses, and Veterans... relief on account of administrative error. (a) Section 503(a) of title 38 U.S.C., provides that if the... by reason of administrative error on the part of the Federal Government or any of its employees,...
ERIC Educational Resources Information Center
Nozari, Nazbanou; Dell, Gary S.; Schwartz, Myrna F.
2011-01-01
Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the…
Technical approaches for measurement of human errors
NASA Technical Reports Server (NTRS)
Clement, W. F.; Heffley, R. K.; Jewell, W. F.; Mcruer, D. T.
1980-01-01
Human error is a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents. The technical details of a variety of proven approaches for the measurement of human errors in the context of the national airspace system are presented. Unobtrusive measurements suitable for cockpit operations and procedures in part or full mission simulation are emphasized. Procedure, system performance, and human operator centered measurements are discussed as they apply to the manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations.
Neutron multiplication error in TRU waste measurements
Veilleux, John; Stanfield, Sean B; Wachter, Joe; Ceo, Bob
2009-01-01
Total Measurement Uncertainty (TMU) in neutron assays of transuranic (TRU) waste is composed of several components, including counting statistics, matrix and source distribution, calibration inaccuracy, background effects, and neutron multiplication error. While a minor component for low plutonium masses, neutron multiplication error is often the major contributor to the TMU for items containing more than 140 g of weapons-grade plutonium. Neutron multiplication arises when neutrons from spontaneous fission and other nuclear events induce fissions in other fissile isotopes in the waste, thereby multiplying the overall coincidence neutron response in passive neutron measurements. Since passive neutron counters cannot differentiate between spontaneous and induced fission neutrons, multiplication can lead to positive bias in the measurements. Although neutron multiplication can only result in a positive bias, it has, for the purpose of mathematical simplicity, generally been treated as an error that can lead to either a positive or negative result in the TMU. While the factors that contribute to neutron multiplication include the total mass of fissile nuclides, the presence of moderating material in the matrix, the concentration and geometry of the fissile sources, and other factors, measurement uncertainty is generally determined as a function of the fissile mass in most TMU software calculations because this is the only quantity determined by the passive neutron measurement. Neutron multiplication error has a particularly pernicious consequence for TRU waste analysis because the measured Fissile Gram Equivalent (FGE) plus twice the TMU error must be less than 200 for TRU waste packaged in 55-gal drums and less than 325 for boxed waste. For this reason, large errors due to neutron multiplication can lead to increased rejections of TRU waste containers. This report will attempt to better define the error term due to neutron multiplication and arrive at values that are
Elliott, Michael R; Margulies, Susan S; Maltese, Matthew R; Arbogast, Kristy B
2015-09-18
There has been recent dramatic increase in the use of sensors affixed to the heads or helmets of athletes to measure the biomechanics of head impacts that lead to concussion. The relationship between injury and linear or rotational head acceleration measured by such sensors can be quantified with an injury risk curve. The utility of the injury risk curve relies on the accuracy of both the clinical diagnosis and the biomechanical measure. The focus of our analysis was to demonstrate the influence of three sources of error on the shape and interpretation of concussion injury risk curves: sampling variability associated with a rare event, concussion under-reporting, and sensor measurement error. We utilized Bayesian statistical methods to generate synthetic data from previously published concussion injury risk curves developed using data from helmet-based sensors on collegiate football players and assessed the effect of the three sources of error on the risk relationship. Accounting for sampling variability adds uncertainty or width to the injury risk curve. Assuming a variety of rates of unreported concussions in the non-concussed group, we found that accounting for under-reporting lowers the rotational acceleration required for a given concussion risk. Lastly, after accounting for sensor error, we find strengthened relationships between rotational acceleration and injury risk, further lowering the magnitude of rotational acceleration needed for a given risk of concussion. As more accurate sensors are designed and more sensitive and specific clinical diagnostic tools are introduced, our analysis provides guidance for the future development of comprehensive concussion risk curves. PMID:26296855
Measurement System Characterization in the Presence of Measurement Errors
NASA Technical Reports Server (NTRS)
Commo, Sean A.
2012-01-01
In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.
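The "modified least squares" above is the paper's own method and is not reproduced here. The closest classical relative that also exploits a known variance ratio is Deming regression, where the ratio delta of response-error variance to measurement-error variance enters the slope estimate directly. A sketch on synthetic data (all names and numbers are illustrative):

```python
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Deming regression: slope and intercept when both x and y carry error.

    delta is the variance ratio (response error variance over measurement
    error variance), analogous to the ratio used in the abstract above.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    slope = (syy - delta * sxx +
             np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

rng = np.random.default_rng(1)
truth = rng.uniform(0, 10, 200)              # true factor values
x = truth + rng.normal(0, 0.5, 200)          # factor measured with error
y = 2.0 * truth + 1.0 + rng.normal(0, 0.5, 200)   # noisy response

slope, intercept = deming_fit(x, y, delta=1.0)
print(slope, intercept)   # close to the true slope 2 and intercept 1
```

Unlike ordinary least squares, which attenuates the slope when x is noisy, this estimator accounts for the measurement error through delta.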
Multiple Indicators, Multiple Causes Measurement Error Models
Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.; Carroll, Raymond J.
2014-01-01
Multiple Indicators, Multiple Causes Models (MIMIC) are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times however when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this paper are: (1) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model, (2) to develop likelihood based estimation methods for the MIMIC ME model, (3) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. As a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure. PMID:24962535
Multiple indicators, multiple causes measurement error models.
Tekwe, Carmen D; Carter, Randy L; Cullings, Harry M; Carroll, Raymond J
2014-11-10
Multiple indicators, multiple causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this paper are as follows: (i) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model; (ii) to develop likelihood-based estimation methods for the MIMIC ME model; and (iii) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. As a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure. PMID:24962535
New Gear Transmission Error Measurement System Designed
NASA Technical Reports Server (NTRS)
Oswald, Fred B.
2001-01-01
The prime source of vibration and noise in a gear system is the transmission error between the meshing gears. Transmission error is caused by manufacturing inaccuracy, mounting errors, and elastic deflections under load. Gear designers often attempt to compensate for transmission error by modifying gear teeth. This is done traditionally by a rough "rule of thumb" or more recently under the guidance of an analytical code. In order for a designer to have confidence in a code, the code must be validated through experiment. NASA Glenn Research Center contracted with the Design Unit of the University of Newcastle in England for a system to measure the transmission error of spur and helical test gears in the NASA Gear Noise Rig. The new system measures transmission error optically by means of light beams directed by lenses and prisms through gratings mounted on the gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. A photodetector circuit converts the light to an analog electrical signal. To increase accuracy and reduce "noise" due to transverse vibration, there are parallel light paths at the top and bottom of the gears. The two signals are subtracted via differential amplifiers in the electronics package. The output of the system is 40 mV/μm, giving a resolution in the time domain of better than 0.1 μm, and discrimination in the frequency domain of better than 0.01 μm. The new system will be used to validate gear analytical codes and to investigate mechanisms that produce vibration and noise in parallel axis gears.
Algorithmic Error Correction of Impedance Measuring Sensors
Starostenko, Oleg; Alarcon-Aquino, Vicente; Hernandez, Wilmar; Sergiyenko, Oleg; Tyrsa, Vira
2009-01-01
This paper describes novel design concepts and some advanced techniques proposed for increasing the accuracy of low-cost impedance measuring devices without reduction of operational speed. The proposed structural method for algorithmic error correction and the iterating correction method provide linearization of the transfer functions of the measuring sensor and the signal conditioning converter, which contribute the principal additive and relative measurement errors. Some measuring systems have been implemented in order to estimate in practice the performance of the proposed methods. In particular, a measuring system for analysis of C-V and G-V characteristics has been designed and constructed. It has been tested during technological process control of charge-coupled device (CCD) manufacturing. The obtained results are discussed in order to define a reasonable range of application of the methods, their utility, and their performance. PMID:22303177
Sources of Error in UV Radiation Measurements
Larason, Thomas C.; Cromer, Christopher L.
2001-01-01
Increasing commercial, scientific, and technical applications involving ultraviolet (UV) radiation have led to the demand for improved understanding of the performance of instrumentation used to measure this radiation. There has been an effort by manufacturers of UV measuring devices (meters) to produce simple, optically filtered sensor systems to accomplish the varied measurement needs. We address common sources of measurement errors using these meters. The uncertainty in the calibration of the instrument depends on the response of the UV meter to the spectrum of the sources used and its similarity to the spectrum of the quantity to be measured. In addition, large errors can occur due to out-of-band, non-linear, and non-ideal geometric or spatial response of the UV meters. Finally, in many applications, how well the response of the UV meter approximates the presumed action spectrum needs to be understood for optimal use of the meters.
Improving Localization Accuracy: Successive Measurements Error Modeling
Abu Ali, Najah; Abu-Elkheir, Mervat
2015-01-01
Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself, and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling of positioning error. We use the Yule-Walker equations to determine the degree of correlation between a vehicle's future position and its past positions, and then propose a p-order Gauss–Markov model to predict the future position of a vehicle from its past p positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can persist for up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss–Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle's future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter. PMID:26140345
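A minimal sketch of the idea described above: estimating AR(p) coefficients from a position series via the Yule-Walker equations and using them for one-step-ahead prediction. The series here is synthetic and all parameter values are illustrative, not taken from the paper's datasets.

```python
import numpy as np

def yule_walker(x, p):
    """Estimate AR(p) coefficients of a 1-D series via the Yule-Walker equations."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # biased autocovariance estimates r[0..p]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])  # Toeplitz system
    return np.linalg.solve(R, r[1:])          # AR coefficients phi_1..phi_p

def predict_next(x, phi):
    """One-step-ahead prediction from the last p observations."""
    p = len(phi)
    xm = np.mean(x)
    hist = np.asarray(x[-p:], dtype=float) - xm
    return xm + np.dot(phi, hist[::-1])       # most recent observation first

# toy 1-D 'position' trace: slow drift plus correlated noise
rng = np.random.default_rng(0)
pos = np.cumsum(rng.normal(0, 0.1, 200)) + np.linspace(0, 10, 200)
phi = yule_walker(pos, p=1)
print(phi, predict_next(pos, phi))
```

With p=1 this reduces to the first-order Gauss–Markov model the authors single out as the lowest-complexity option.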
Multiscale measurement error models for aggregated small area health data.
Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin
2016-08-01
Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smoothes the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via a shared random effect have been proposed. One of the main goals in aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we apply measurement error models within the multiscale framework. To assess the benefit of using multiscale measurement error models, we compare the performance of multiscale models with and without measurement error in both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates. PMID:27566773
Risk, Error and Accountability: Improving the Practice of School Leaders
ERIC Educational Resources Information Center
Perry, Lee-Anne
2006-01-01
This paper seeks to explore the notion of risk as an organisational logic within schools, the impact of contemporary accountability regimes on managing risk and then, in turn, to posit a systems-based process of risk management underpinned by a positive logic of risk. It moves through a number of steps beginning with the development of an…
Relationships of Measurement Error and Prediction Error in Observed-Score Regression
ERIC Educational Resources Information Center
Moses, Tim
2012-01-01
The focus of this paper is assessing the impact of measurement errors on the prediction error of an observed-score regression. Measures are presented and described for decomposing the linear regression's prediction error variance into parts attributable to the true score variance and the error variances of the dependent variable and the predictor…
Generalized Geometric Error Correction in Coordinate Measurement
NASA Astrophysics Data System (ADS)
Hermann, Gyula
Software compensation of geometric errors in coordinate measurement is a topical subject because it reduces manufacturing costs. The paper summarizes the results and achievements of earlier work on the subject. To improve these results, a method is adapted to capture the new coordinate frames simultaneously, so that exact transformation values are available at discrete points of the measuring volume. The interpolation techniques published in the literature have the drawback that they cannot maintain the orthogonality of the rotational part of the transformation matrices. The paper presents a technique, based on quaternions, which avoids this problem and leads to better results.
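A small sketch of why quaternions sidestep the orthogonality problem: interpolating rotation matrices entrywise generally breaks orthogonality, whereas interpolating unit quaternions and renormalizing always yields an exact rotation. The quaternions and interpolation parameter below are illustrative, not from the paper.

```python
import numpy as np

def quat_to_mat(q):
    """Rotation matrix of a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def lerp_quat(q0, q1, t):
    """Linear quaternion interpolation followed by renormalization."""
    q = (1 - t) * np.asarray(q0) + t * np.asarray(q1)
    return q / np.linalg.norm(q)            # unit quaternion -> exact rotation

q0 = np.array([1.0, 0.0, 0.0, 0.0])                  # identity
q1 = np.array([np.cos(0.2), np.sin(0.2), 0.0, 0.0])  # small rotation about x
R = quat_to_mat(lerp_quat(q0, q1, 0.5))
print(np.allclose(R @ R.T, np.eye(3)))               # interpolated frame stays orthogonal
```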
Non-Gaussian error distribution of 7Li abundance measurements
NASA Astrophysics Data System (ADS)
Crandall, Sara; Houston, Stephen; Ratra, Bharat
2015-07-01
We construct the error distribution of 7Li abundance measurements for 66 observations (with error bars) used by Spite et al. (2012) that give A(Li) = 2.21 ± 0.065 (median and 1σ symmetrized error). This error distribution is somewhat non-Gaussian, with larger probability in the tails than is predicted by a Gaussian distribution. The 95.4% confidence limits are 3.0σ in terms of the quoted errors. We fit the data to four commonly used distributions: Gaussian, Cauchy, Student’s t and double exponential with the center of the distribution found with both weighted mean and median statistics. It is reasonably well described by a widened n = 8 Student’s t distribution. Assuming Gaussianity, the observed A(Li) is 6.5σ away from that expected from standard Big Bang Nucleosynthesis (BBN) given the Planck observations. Accounting for the non-Gaussianity of the observed A(Li) error distribution reduces the discrepancy to 4.9σ, which is still significant.
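The tail comparison at the heart of the abstract can be illustrated numerically: a unit-variance Student's t with 8 degrees of freedom puts visibly more probability beyond 2σ and 3σ than a Gaussian does. The draws below are synthetic stand-ins; the paper's 66 actual measurements are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# 66 illustrative standardized residuals drawn from a unit-variance t(8)
z = rng.standard_t(df=8, size=66) / np.sqrt(8 / 6)   # Var[t_8] = 8/6, so rescale

for k in (1.0, 2.0, 3.0):
    empirical = np.mean(np.abs(z) > k)
    gauss = 2 * stats.norm.sf(k)                      # Gaussian two-sided tail
    student = 2 * stats.t.sf(k * np.sqrt(8 / 6), df=8)  # unit-variance t(8) tail
    print(f"|z| > {k}: empirical {empirical:.3f}, Gaussian {gauss:.3f}, t(8) {student:.3f}")
```

The heavier t(8) tails are what shrink the BBN discrepancy from 6.5σ (Gaussian assumption) to 4.9σ in the paper's analysis.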
50 CFR 648.323 - Accountability measures.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Accountability measures. 648.323 Section... ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE NORTHEASTERN UNITED STATES Management Measures for the NE Skate Complex Fisheries § 648.323 Accountability measures. (a) TAL overages. If the skate wing...
50 CFR 648.323 - Accountability measures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 50 Wildlife and Fisheries 8 2010-10-01 2010-10-01 false Accountability measures. 648.323 Section... ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE NORTHEASTERN UNITED STATES Management Measures for the NE Skate Complex Fisheries § 648.323 Accountability measures. (a) TAL overages. If the skate wing...
50 CFR 648.323 - Accountability measures.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Accountability measures. 648.323 Section... ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE NORTHEASTERN UNITED STATES Management Measures for the NE Skate Complex Fisheries § 648.323 Accountability measures. (a) TAL overages. If the skate wing...
50 CFR 648.323 - Accountability measures.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Accountability measures. 648.323 Section... ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE NORTHEASTERN UNITED STATES Management Measures for the NE Skate Complex Fisheries § 648.323 Accountability measures. (a) TAL overages. If the skate wing...
50 CFR 648.323 - Accountability measures.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 50 Wildlife and Fisheries 10 2011-10-01 2011-10-01 false Accountability measures. 648.323 Section... ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE NORTHEASTERN UNITED STATES Management Measures for the NE Skate Complex Fisheries § 648.323 Accountability measures. (a) TAL overages. If the skate wing...
Bayesian conformity assessment in presence of systematic measurement errors
NASA Astrophysics Data System (ADS)
Carobbi, Carlo; Pennecchi, Francesca
2016-04-01
Conformity assessment of the distribution of the values of a quantity is investigated by using a Bayesian approach. The effect of systematic, non-negligible measurement errors is taken into account. The analysis is general, in the sense that the probability distribution of the quantity can be of any kind, that is even different from the ubiquitous normal distribution, and the measurement model function, linking the measurand with the observable and non-observable influence quantities, can be non-linear. Further, any joint probability density function can be used to model the available knowledge about the systematic errors. It is demonstrated that the result of the Bayesian analysis developed here reduces to the standard result (obtained through a frequentist approach) when the systematic measurement errors are negligible. A consolidated frequentist extension of that standard result, aimed at including the effect of a systematic measurement error, is directly compared with the Bayesian result, whose superiority is demonstrated. Application of the results here obtained to the derivation of the operating characteristic curves used for sampling plans for inspection by variables is also introduced.
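A Monte Carlo sketch of the kind of calculation such an analysis supports: the posterior probability that the true value lies inside a tolerance interval, given one observation, a known random-error spread, and a prior on the systematic error. All numbers, the flat prior on the measurand, and the Gaussian error models are illustrative assumptions, not the paper's general formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
L, U = 9.0, 11.0         # tolerance limits (hypothetical)
y = 10.6                 # the single observed value
sig_rand = 0.2           # std of the random measurement error

# prior knowledge about the systematic error b (hypothetical Gaussian)
b = rng.normal(0.0, 0.3, 200_000)

# with a flat prior on the measurand, x | y, b ~ N(y - b, sig_rand)
x = y - b + rng.normal(0.0, sig_rand, b.size)
p_conform = np.mean((x > L) & (x < U))
print(f"posterior conformity probability ~ {p_conform:.3f}")
```

Setting the systematic-error spread to zero recovers the standard (frequentist) conformity probability, mirroring the reduction the abstract describes.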
Laser measurement and analysis of reposition error in polishing systems
NASA Astrophysics Data System (ADS)
Liu, Weisen; Wang, Junhua; Xu, Min; He, Xiaoying
2015-10-01
In this paper, a robotic reposition error measurement method based on laser interference remote positioning is presented. The geometric error of a robot-based polishing system is analyzed and a mathematical model of the tilt error is developed. Studies show that errors below 1 mm are mainly caused by the tilt error at small incident angles. Marking the spot position with an interference fringe greatly enhances the error measurement precision; the measurement precision of the tilt error can reach 5 μm. Measurement results show that the reposition error of the polishing system stems mainly from the tilt error caused by motor A, and repositioning precision is greatly increased after improvement of the polishing system. The measurement method has important applications in practical error measurement, offering low cost and simple operation.
Anderson, K.K.
1994-05-01
Measurement error modeling is a statistical approach to the estimation of unknown model parameters which takes into account the measurement errors in all of the data. Approaches which ignore the measurement errors in so-called independent variables may yield inferior estimates of unknown model parameters. At the same time, experiment-wide variables (such as physical constants) are often treated as known without error, when in fact they were produced from prior experiments. Realistic assessments of the associated uncertainties in the experiment-wide variables can be utilized to improve the estimation of unknown model parameters. A maximum likelihood approach to incorporate measurements of experiment-wide variables and their associated uncertainties is presented here. An iterative algorithm is presented which yields estimates of unknown model parameters and their estimated covariance matrix. Further, the algorithm can be used to assess the sensitivity of the estimates and their estimated covariance matrix to the given experiment-wide variables and their associated uncertainties.
[Therapeutic errors and dose measuring devices].
García-Tornel, S; Torrent, M L; Sentís, J; Estella, G; Estruch, M A
1982-06-01
To investigate the possibility of therapeutic error in the administration of syrups, the authors measured the capacity of 158 home spoons (mean +/- SD). They classified the spoons into four groups: group I (table spoons), 49 units (11.65 +/- 2.10 cc); group II (tea spoons), 41 units (4.70 +/- 1.04 cc); group III (coffee spoons), 41 units (2.60 +/- 0.59 cc); and group IV (miscellaneous), 27 units. They compared the first three groups with the theoretical values of 15, 5, and 2.5 cc, respectively, finding statistically significant differences in the first group. They also analyzed the information that paediatricians receive from the "vademecums" they usually consult, studying two points: whether the syrup is supplied with a measuring device, and whether the drug concentration is indicated. Only 18% of the syrups have a measuring device, and about 88% of the drugs indicate their concentration (mg/cc). They conclude that, to prevent dosage errors, the pharmaceutical industry must include measuring devices with its products. If none is provided, the safest option is to use a syringe. PMID:7125401
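The reported group statistics are enough to reproduce the comparison against the nominal volumes with a one-sample t statistic. This is a sketch of that check using only the means, SDs, and counts quoted in the abstract; the per-spoon data are not available.

```python
import math

def one_sample_t(mean, sd, n, mu0):
    """t statistic for a reported group mean against a nominal dose volume."""
    return (mean - mu0) / (sd / math.sqrt(n))

# group I: table spoons, 49 units, 11.65 +/- 2.10 cc, nominal 15 cc
t1 = one_sample_t(11.65, 2.10, 49, 15.0)
# group II: tea spoons, 41 units, 4.70 +/- 1.04 cc, nominal 5 cc
t2 = one_sample_t(4.70, 1.04, 41, 5.0)
print(round(t1, 2), round(t2, 2))   # -11.17 -1.85
```

The table-spoon group is many standard errors below its nominal 15 cc, consistent with the abstract's finding that only group I shows a significant difference.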
Inter-tester Agreement in Refractive Error Measurements
Huang, Jiayan; Maguire, Maureen G.; Ciner, Elise; Kulp, Marjean T.; Quinn, Graham E.; Orel-Bixler, Deborah; Cyert, Lynn A.; Moore, Bruce; Ying, Gui-Shuang
2014-01-01
Purpose To determine the inter-tester agreement of refractive error measurements between lay and nurse screeners using the Retinomax Autorefractor (Retinomax) and the SureSight Vision Screener (SureSight). Methods Trained lay and nurse screeners measured refractive error in 1452 preschoolers (3- to 5-years old) using the Retinomax and the SureSight in a random order for screeners and instruments. Inter-tester agreement between lay and nurse screeners was assessed for sphere, cylinder and spherical equivalent (SE) using the mean difference and the 95% limits of agreement. The mean inter-tester difference (lay minus nurse) was compared between groups defined based on child’s age, cycloplegic refractive error, and the reading’s confidence number using analysis of variance. The limits of agreement were compared between groups using the Brown-Forsythe test. Inter-eye correlation was accounted for in all analyses. Results The mean inter-tester differences (95% limits of agreement) were −0.04 (−1.63, 1.54) Diopter (D) sphere, 0.00 (−0.52, 0.51) D cylinder, and −0.04 (−1.65, 1.56) D SE for the Retinomax; and 0.05 (−1.48, 1.58) D sphere, 0.01 (−0.58, 0.60) D cylinder, and 0.06 (−1.45, 1.57) D SE for the SureSight. For either instrument, the mean inter-tester differences in sphere and SE did not differ by the child’s age, cycloplegic refractive error, or the reading’s confidence number. However, for both instruments, the limits of agreement were wider when eyes had significant refractive error or the reading’s confidence number was below the manufacturer’s recommended value. Conclusions Among Head Start preschool children, trained lay and nurse screeners agree well in measuring refractive error using the Retinomax or the SureSight. Both instruments had similar inter-tester agreement in refractive error measurements independent of the child’s age. Significant refractive error and a reading with low confidence number were associated with worse inter
NASA Astrophysics Data System (ADS)
Zhao, Xiaolong; Yang, Li
2015-10-01
Based on the theory of infrared radiation and infrared thermography, a mathematical correction model for infrared radiation temperature measurement of semitransparent objects is developed, taking into account the effects of the atmosphere, the surroundings, transmitted radiation, and many other factors. The effects of emissivity, transmissivity, and measurement error on the temperature measurement error of the infrared thermography are analysed. The measurement error for semitransparent objects is compared with that for opaque objects, and countermeasures to reduce the measurement error are discussed.
Yan, Ying; Yi, Grace Y
2016-07-01
Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods. PMID:26328545
Reducing Errors by Use of Redundancy in Gravity Measurements
NASA Technical Reports Server (NTRS)
Kulikov, Igor; Zak, Michail
2004-01-01
A methodology for improving gravity-gradient measurement data exploits the constraints imposed upon the components of the gravity-gradient tensor by the conditions of integrability needed for reconstruction of the gravitational potential. These constraints are derived from the basic equation for the gravitational potential and from mathematical identities that apply to the gravitational potential and its partial derivatives with respect to spatial coordinates. Consider the gravitational potential in a Cartesian coordinate system {x1,x2,x3}. If one measures all the components of the gravity-gradient tensor at all points of interest within a region of space in which one seeks to characterize the gravitational field, one obtains redundant information. One could utilize the constraints to select a minimum (that is, nonredundant) set of measurements from which the gravitational potential could be reconstructed. Alternatively, one could exploit the redundancy to reduce errors from noisy measurements. A convenient example is that of the selection of a minimum set of measurements to characterize the gravitational field at n3 points (where n is an integer) in a cube. Without the benefit of such a selection, it would be necessary to make 9n3 measurements because the gravity-gradient tensor has 9 components at each point. The problem of utilizing the redundancy to reduce errors in noisy measurements is an optimization problem: Given a set of noisy values of the components of the gravity-gradient tensor at the measurement points, one seeks a set of corrected values - a set that is optimum in that it minimizes some measure of error (e.g., the sum of squares of the differences between the corrected and noisy measurement values) while taking account of the fact that the constraints must apply to the exact values. The problem as thus posed leads to a vector equation that can be solved to obtain the corrected values.
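At a single point, the simplest version of this least-squares correction has a closed form: the exact tensor of second derivatives of the potential is symmetric, and in source-free space Laplace's equation forces its trace to zero, so the Frobenius-nearest admissible tensor is the symmetric, traceless projection of the noisy one. The numbers below are illustrative, not measured data.

```python
import numpy as np

def project_gradient_tensor(G):
    """Least-squares projection of a noisy 3x3 gravity-gradient tensor onto
    the constraint set {symmetric, trace-free} (source-free region)."""
    S = 0.5 * (G + G.T)                    # nearest symmetric tensor
    S -= np.trace(S) / 3.0 * np.eye(3)     # remove the trace (Laplace constraint)
    return S

noisy = np.array([[ 1.02,  0.31, -0.12],
                  [ 0.28, -0.55,  0.40],
                  [-0.09,  0.44, -0.43]])
fixed = project_gradient_tensor(noisy)
print(np.allclose(fixed, fixed.T), abs(np.trace(fixed)) < 1e-12)
```

Coupling many points through the integrability conditions turns this pointwise projection into the full constrained optimization described in the abstract.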
NASA Astrophysics Data System (ADS)
Behmanesh, Iman; Moaveni, Babak
2016-07-01
This paper presents a Hierarchical Bayesian model updating framework to account for the effects of ambient temperature and excitation amplitude. The proposed approach is applied for model calibration, response prediction and damage identification of a footbridge under changing environmental/ambient conditions. The concrete Young's modulus of the footbridge deck is the considered updating structural parameter, with its mean and variance modeled as functions of temperature and excitation amplitude. The identified modal parameters over 27 months of continuous monitoring of the footbridge are used to calibrate the updating parameters. One of the objectives of this study is to show that by increasing the levels of information in the updating process, the posterior variation of the updating structural parameter (concrete Young's modulus) is reduced. To this end, the calibration is performed at three information levels using (1) the identified modal parameters, (2) modal parameters and ambient temperatures, and (3) modal parameters, ambient temperatures, and excitation amplitudes. The calibrated model is then validated by comparing the model-predicted natural frequencies and those identified from measured data after a deliberate change to the structural mass. It is shown that accounting for modeling error uncertainties is crucial for reliable response prediction, and accounting for only the estimated variability of the updating structural parameter is not sufficient for accurate response predictions. Finally, the calibrated model is used for damage identification of the footbridge.
Improving Accountability through Expanded Measures of Performance
ERIC Educational Resources Information Center
Hamilton, Laura S.; Schwartz, Heather L.; Stecher, Brian M.; Steele, Jennifer L.
2013-01-01
Purpose: The purpose of this paper is to examine how test-based accountability has influenced school and district practices and explore how states and districts might consider creating expanded systems of measures to address the shortcomings of traditional accountability. It provides research-based guidance for entities that are developing or…
Measurement control administration for nuclear materials accountability
Rudy, C.R.
1991-01-31
In 1986 a measurement control program was instituted at Mound to ensure that measurement performance used for nuclear material accountability was properly monitored and documented. The organization and management of various aspects of the program are discussed. Accurate measurements are the basis of nuclear material accountability. The validity of the accountability values depends on the measurement results that are used to determine inventories, receipts, and shipments. With this measurement information, material balances are calculated to determine losses and gains of materials during a specific time period. Calculations of Inventory Differences (ID) are based on chemical or physical measurements of many items. The validity of each term is dependent on the component measurements. Thus, in Figure 1, the measured element weight of 17 g is dependent on the performance of the particular measurement system that was used. In this case, the measurement is performed using a passive gamma ray method with a calibration curve determined by measuring representative standards containing a range of special nuclear materials (Figure 2). One objective of a measurement control program is to monitor and verify the validity of the calibration curve (Figure 3). In 1986 Mound's Nuclear Materials Accountability (NMA) group instituted a formal measurement control program to ensure the validity of the numbers that comprise this equation and provide a measure of how well bulk materials can be controlled. Most measurements used for accountability are production measurements with their own quality assurance programs. In many cases a measurement control system is planned and maintained by the developers and operators of the particular measurement system with oversight by the management responsible for the results. 4 refs., 7 figs.
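A minimal sketch of the calibration-curve verification described above: periodically re-measure known standards and flag the curve when a result drifts outside control limits. The standard values, tolerances, and the ±3σ rule here are illustrative assumptions, not Mound's actual procedure.

```python
import numpy as np

def check_calibration(certified, measured, sigma):
    """Return True for each standard whose verification measurement falls
    within +/- 3 sigma of its certified value (in control)."""
    z = (np.asarray(measured) - np.asarray(certified)) / sigma
    return np.abs(z) <= 3.0

certified = np.array([5.0, 10.0, 17.0])   # g of special nuclear material (hypothetical)
measured = np.array([5.1, 9.8, 18.9])     # latest verification measurements
print(check_calibration(certified, measured, sigma=0.3))
```

The third standard fails the check, which would trigger recalibration before the curve is used for accountability measurements.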
Measuring Systematic Error with Curve Fits
ERIC Educational Resources Information Center
Rupright, Mark E.
2011-01-01
Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…
The impact of response measurement error on the analysis of designed experiments
Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee
2015-12-21
This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.
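A toy simulation in the spirit of this study: a two-level factor with an additive response measurement error, analyzed with a standard t statistic. Averaging m repeat measurements per run shrinks the measurement-error variance by 1/m, which is the mechanism by which repeats restore estimation quality. All effect sizes and variances are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(42)

def t_stat(effect, sig_process, sig_meas, n_per_level=8, m_repeats=1):
    """Two-sample t statistic for a two-level factor, with additive response
    measurement error and m repeat measurements averaged per run."""
    lo = rng.normal(0.0, sig_process, (n_per_level, m_repeats))
    hi = rng.normal(effect, sig_process, (n_per_level, m_repeats))
    lo = (lo + rng.normal(0, sig_meas, lo.shape)).mean(axis=1)  # average repeats
    hi = (hi + rng.normal(0, sig_meas, hi.shape)).mean(axis=1)
    sp = np.sqrt((lo.var(ddof=1) + hi.var(ddof=1)) / 2)         # pooled SD
    return (hi.mean() - lo.mean()) / (sp * np.sqrt(2 / n_per_level))

# measurement error twice the process noise: repeats sharpen the test
print(abs(t_stat(1.0, 0.5, 1.0, m_repeats=1)))
print(abs(t_stat(1.0, 0.5, 1.0, m_repeats=5)))
```

The standard analysis here simply ignores the measurement error; a Bayesian model as in the paper would instead represent it explicitly in the likelihood.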
Measurement Validity and Accountability for Student Learning
ERIC Educational Resources Information Center
Borden, Victor M. H.; Young, John W.
2008-01-01
In this chapter, the authors focus on issues of validity in measuring student learning as a prospective indicator of institutional effectiveness. Other chapters in this volume include reference to specific approaches to measuring student learning for accountability purposes, such as through standardized tests, authentic samples of student work,…
Error analysis and data reduction for interferometric surface measurements
NASA Astrophysics Data System (ADS)
Zhou, Ping
High-precision optical systems are generally tested using interferometry, since it often is the only way to achieve the desired measurement precision and accuracy. Interferometers can generally measure a surface to an accuracy of one hundredth of a wave. In order to achieve an accuracy to the next order of magnitude, one thousandth of a wave, each error source in the measurement must be characterized and calibrated. Errors in interferometric measurements are classified into random errors and systematic errors. An approach to estimate random errors in the measurement is provided, based on the variation in the data. Systematic errors, such as retrace error, imaging distortion, and error due to diffraction effects, are also studied in this dissertation. Methods to estimate the first order geometric error and errors due to diffraction effects are presented. Interferometer phase modulation transfer function (MTF) is another intrinsic error. The phase MTF of an infrared interferometer is measured with a phase Siemens star, and a Wiener filter is designed to recover the middle spatial frequency information. Map registration is required when there are two maps tested in different systems and one of these two maps needs to be subtracted from the other. Incorrect mapping causes wavefront errors. A smoothing filter method is presented which can reduce the sensitivity to registration error and improve the overall measurement accuracy. Interferometric optical testing with computer-generated holograms (CGH) is widely used for measuring aspheric surfaces. The accuracy of the drawn pattern on a hologram decides the accuracy of the measurement. Uncertainties in the CGH manufacturing process introduce errors in holograms and then the generated wavefront. An optimal design of the CGH is provided which can reduce the sensitivity to fabrication errors and give good diffraction efficiency for both chrome-on-glass and phase etched CGHs.
Shim, Jongmyeong; Kim, Joongeok; Lee, Jinhyung; Park, Changsu; Cho, Eikhyun; Kang, Shinill
2015-07-27
The increasing demand for lightweight, miniaturized electronic devices has prompted the development of small, high-performance optical components for light-emitting diode (LED) illumination. As such, the Fresnel lens is widely used in applications due to its compact configuration. However, the vertical groove angle between the optical axis and the groove inner facets in a conventional Fresnel lens creates an inherent Fresnel loss, which degrades optical performance. Modified Fresnel lenses (MFLs) have been proposed in which the groove angles along the optical paths are carefully controlled; however, in practice, the optical performance of MFLs is inferior to the theoretical performance due to fabrication errors, as conventional design methods do not account for fabrication errors as part of the design process. In this study, the Fresnel loss and the loss area due to microscopic fabrication errors in the MFL were theoretically derived to determine optical performance. Based on this analysis, a design method for the MFL accounting for the fabrication errors was proposed. MFLs were fabricated using an ultraviolet imprinting process and an injection molding process, two representative processes with differing fabrication errors. The MFL fabrication error associated with each process was examined analytically and experimentally to investigate our methodology. PMID:26367631
The Relative Error Magnitude in Three Measures of Change.
ERIC Educational Resources Information Center
Zimmerman, Donald W.; Williams, Richard H.
1982-01-01
Formulas for the standard error of measurement of three measures of change (simple differences; residualized difference scores; and a measure introduced by Tucker, Damarin, and Messick) are derived. A practical guide for determining the relative error of the three measures is developed. (Author/JKS)
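For the simple-difference case, the classical-test-theory formula is short enough to compute directly: SEM(X) = s_x·sqrt(1 − rel_x), and with independent errors SEM(D) = sqrt(SEM(X)² + SEM(Y)²). The reliabilities and SDs below are hypothetical, and this covers only the first of the three measures the paper treats.

```python
import math

def sem(sd, reliability):
    """Standard error of measurement of a single score."""
    return sd * math.sqrt(1.0 - reliability)

def sem_difference(sd_x, rel_x, sd_y, rel_y):
    """SEM of a simple difference score D = Y - X, assuming independent errors."""
    return math.hypot(sem(sd_x, rel_x), sem(sd_y, rel_y))

# hypothetical pre/post test: both SDs 10, reliabilities .85 and .80
print(round(sem_difference(10, 0.85, 10, 0.80), 2))   # 5.92
```

Note that the difference score's SEM exceeds either component's, which is why comparing the relative error of change measures matters.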
ERIC Educational Resources Information Center
Televantou, Ioulia; Marsh, Herbert W.; Kyriakides, Leonidas; Nagengast, Benjamin; Fletcher, John; Malmberg, Lars-Erik
2015-01-01
The main objective of this study was to quantify the impact of failing to account for measurement error on school compositional effects. Multilevel structural equation models were incorporated to control for measurement error and/or sampling error. Study 1, a large sample of English primary students in Years 1 and 4, revealed a significantly…
Chromosomal locus tracking with proper accounting of static and dynamic errors.
Backlund, Mikael P; Joyner, Ryan; Moerner, W E
2015-06-01
The mean-squared displacement (MSD) and velocity autocorrelation (VAC) of tracked single particles or molecules are ubiquitous metrics for extracting parameters that describe the object's motion, but they are both corrupted by experimental errors that hinder the quantitative extraction of underlying parameters. For the simple case of pure Brownian motion, the effects of localization error due to photon statistics ("static error") and motion blur due to finite exposure time ("dynamic error") on the MSD and VAC are already routinely treated. However, particles moving through complex environments such as cells, nuclei, or polymers often exhibit anomalous diffusion, for which the effects of these errors are less often sufficiently treated. We present data from tracked chromosomal loci in yeast that demonstrate the necessity of properly accounting for both static and dynamic error in the context of an anomalous diffusion that is consistent with a fractional Brownian motion (FBM). We compare these data to analytical forms of the expected values of the MSD and VAC for a general FBM in the presence of these errors. PMID:26172745
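For the pure-Brownian baseline the abstract calls "routinely treated", the two errors enter the observed MSD in closed form (1-D, full-frame exposure): MSD(n·dt) = 2D·n·dt + 2σ² − (2/3)·D·dt, where σ is the static localization error and the last term is the motion-blur (dynamic) correction. The sketch below simulates such a track and undoes both errors when reading off D and σ²; all parameter values are illustrative, and the FBM generalization in the paper is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(3)
D, dt, sigma, sub = 0.5, 1.0, 0.3, 50             # diffusion, frame time, static error
fine = np.cumsum(rng.normal(0, np.sqrt(2 * D * dt / sub), 200_000))
x = fine.reshape(-1, sub).mean(axis=1)            # dynamic error: average over exposure
x = x + rng.normal(0, sigma, x.size)              # static error: localization noise

def msd(x, n):
    d = x[n:] - x[:-n]
    return np.mean(d * d)

lags = np.arange(1, 6)
m = np.array([msd(x, n) for n in lags])
slope, intercept = np.polyfit(lags * dt, m, 1)
D_hat = slope / 2
sigma2_hat = (intercept + (2 / 3) * D_hat * dt) / 2  # undo the dynamic-error offset
print(round(D_hat, 2), round(sigma2_hat, 2))
```

Ignoring the −(2/3)·D·dt term would bias σ² low (here it would even come out negative), illustrating why both error types must be accounted for together.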
Chromosomal locus tracking with proper accounting of static and dynamic errors
NASA Astrophysics Data System (ADS)
Backlund, Mikael P.; Joyner, Ryan; Moerner, W. E.
2015-06-01
The mean-squared displacement (MSD) and velocity autocorrelation (VAC) of tracked single particles or molecules are ubiquitous metrics for extracting parameters that describe the object's motion, but they are both corrupted by experimental errors that hinder the quantitative extraction of underlying parameters. For the simple case of pure Brownian motion, the effects of localization error due to photon statistics ("static error") and motion blur due to finite exposure time ("dynamic error") on the MSD and VAC are already routinely treated. However, particles moving through complex environments such as cells, nuclei, or polymers often exhibit anomalous diffusion, for which the effects of these errors are less often sufficiently treated. We present data from tracked chromosomal loci in yeast that demonstrate the necessity of properly accounting for both static and dynamic error in the context of an anomalous diffusion that is consistent with a fractional Brownian motion (FBM). We compare these data to analytical forms of the expected values of the MSD and VAC for a general FBM in the presence of these errors.
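For the pure-Brownian special case that the abstract calls routinely treated, the effect of both errors on the MSD can be sketched in one line (a 1D illustration with our own symbol names; the paper's fractional-Brownian-motion expressions are more general):

```python
import numpy as np

# Expected observed 1D MSD for pure Brownian motion with static error
# (localization noise sigma_loc) and dynamic error (camera exposure t_exp).
def msd_brownian_observed(tau, D, sigma_loc, t_exp):
    return 2 * D * tau + 2 * sigma_loc**2 - (2.0 / 3.0) * 2 * D * t_exp

tau = np.arange(1, 11) * 0.01
msd = msd_brownian_observed(tau, D=0.5, sigma_loc=0.03, t_exp=0.01)

# For pure Brownian motion both errors only shift the intercept, so a
# straight-line fit still recovers D from the slope:
slope, intercept = np.polyfit(tau, msd, 1)
D_hat = slope / 2.0
```

For anomalous (FBM) motion the errors no longer reduce to a constant offset, which is why the proper accounting the paper develops is needed.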
MEASURING LOCAL GRADIENT AND SKEW QUADRUPOLE ERRORS IN RHIC IRS.
CARDONA,J.; PEGGS,S.; PILAT,R.; PTITSYN,V.
2004-07-05
The measurement of local linear errors at RHIC interaction regions using an "action and phase" analysis of difference orbits has already been presented. This paper evaluates the accuracy of this technique using difference orbits that were taken when known gradient errors and skew quadrupole errors were intentionally introduced. It also presents action and phase analysis of simulated orbits when controlled errors are intentionally placed in a RHIC simulation model.
Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?
NASA Technical Reports Server (NTRS)
Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan
2013-01-01
The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into random errors, and a lack of predictive capability. Therefore, the multiplicative error model is the better choice.
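A minimal simulation (not the authors' data; all distributions and numbers are assumed) illustrates criterion (2): when errors are really multiplicative, additive residuals are strongly heteroscedastic across the range of daily precipitation, while log-ratio residuals are not:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = rng.gamma(shape=0.5, scale=10.0, size=20000)     # skewed, rain-like truth
# Multiplicative error model: estimate = truth * exp(bias + noise)
est = truth * np.exp(0.1 + 0.3 * rng.standard_normal(truth.size))

lo = truth < np.percentile(truth, 33)                    # light-rain days
hi = truth > np.percentile(truth, 67)                    # heavy-rain days

add_resid = est - truth          # residuals under an additive error model
mul_resid = np.log(est / truth)  # residuals under a multiplicative error model
```

The spread of `add_resid` grows with rain rate (systematic error leaking into the random term), whereas the spread of `mul_resid` is essentially constant, matching the letter's conclusion.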
Measurement of errors in clinical laboratories.
Agarwal, Rachna
2013-07-01
Laboratories have a major impact on patient safety, as 80-90 % of all diagnoses are made on the basis of laboratory tests. Laboratory errors have a reported frequency of 0.012-0.6 % of all test results. Patient safety is a managerial issue which can be enhanced by implementing active systems to identify and monitor quality failures. One way to facilitate this is a reactive method comprising incident reporting followed by root cause analysis, which leads to the identification and correction of weaknesses in policies and procedures in the system. Another is a proactive method such as Failure Mode and Effect Analysis, in which the focus is on the entire examination process, anticipating major adverse events and pre-emptively preventing them from occurring; it is used for prospective risk analysis of high-risk processes to reduce the chance of errors in the laboratory and other patient care areas. PMID:24426216
Statistical approaches to account for false-positive errors in environmental DNA samples.
Lahoz-Monfort, José J; Guillera-Arroita, Gurutzeta; Tingley, Reid
2016-05-01
Environmental DNA (eDNA) sampling is prone to both false-positive and false-negative errors. We review statistical methods to account for such errors in the analysis of eDNA data and use simulations to compare the performance of different modelling approaches. Our simulations illustrate that even low false-positive rates can produce biased estimates of occupancy and detectability. We further show that removing or classifying single PCR detections in an ad hoc manner under the suspicion that such records represent false positives, as sometimes advocated in the eDNA literature, also results in biased estimation of occupancy, detectability and false-positive rates. We advocate alternative approaches to account for false-positive errors that rely on prior information, or the collection of ancillary detection data at a subset of sites using a sampling method that is not prone to false-positive errors. We illustrate the advantages of these approaches over ad hoc classifications of detections and provide practical advice and code for fitting these models in maximum likelihood and Bayesian frameworks. Given the severe bias induced by false-negative and false-positive errors, the methods presented here should be more routinely adopted in eDNA studies. PMID:26558345
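The class of occupancy models with false positives that the abstract advocates can be sketched as a small maximum-likelihood fit. This is a hypothetical single-season model with constant probabilities (occupancy psi, true-positive detection p11, false-positive detection p10); the simulation settings are ours, not the paper's:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
S, K = 500, 5                               # sites, PCR replicates per site
psi, p11, p10 = 0.6, 0.7, 0.05              # occupancy, detection, false-positive prob
z = rng.random(S) < psi                     # latent occupancy state
y = rng.random((S, K)) < np.where(z, p11, p10)[:, None]
d = y.sum(axis=1)                           # detections per site

def nll(theta):
    # logistic transform keeps all three probabilities in (0, 1)
    psi_, p11_, p10_ = 1.0 / (1.0 + np.exp(-theta))
    occ = p11_**d * (1 - p11_)**(K - d)     # history likelihood if occupied
    emp = p10_**d * (1 - p10_)**(K - d)     # history likelihood if empty
    return -np.sum(np.log(psi_ * occ + (1 - psi_) * emp))

start = np.array([0.0, 1.0, -2.0])          # p11 > p10 start breaks label switching
psi_hat, p11_hat, p10_hat = 1.0 / (1.0 + np.exp(-minimize(nll, start, method="Nelder-Mead").x))
```

Treating single detections as certain presences (psi fixed by any d > 0) would overstate occupancy here, which is the bias the authors warn about.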
NASA Astrophysics Data System (ADS)
Evin, Guillaume; Thyer, Mark; Kavetski, Dmitri; McInerney, David; Kuczera, George
2014-03-01
The paper appraises two approaches for the treatment of heteroscedasticity and autocorrelation in residual errors of hydrological models. Both approaches use weighted least squares (WLS), with heteroscedasticity modeled as a linear function of predicted flows and autocorrelation represented using an AR(1) process. In the first approach, heteroscedasticity and autocorrelation parameters are inferred jointly with hydrological model parameters. The second approach is a two-stage "postprocessor" scheme, where Stage 1 infers the hydrological parameters ignoring autocorrelation and Stage 2 conditionally infers the heteroscedasticity and autocorrelation parameters. These approaches are compared to a WLS scheme that ignores autocorrelation. Empirical analysis is carried out using daily data from 12 US catchments from the MOPEX set using two conceptual rainfall-runoff models, GR4J and HBV. Under synthetic conditions, the postprocessor and joint approaches provide similar predictive performance, though the postprocessor approach tends to underestimate parameter uncertainty. However, the MOPEX results indicate that the joint approach can be nonrobust. In particular, when applied to GR4J, it often produces poor predictions due to strong multiway interactions between a hydrological water balance parameter and the error model parameters. The postprocessor approach is more robust precisely because it ignores these interactions. Practical benefits of accounting for error autocorrelation are demonstrated by analyzing streamflow predictions aggregated to a monthly scale (where ignoring daily-scale error autocorrelation leads to significantly underestimated predictive uncertainty), and by analyzing one-day-ahead predictions (where accounting for the error autocorrelation produces clearly higher precision and better tracking of observed data). Including autocorrelation in the residual error model also significantly affects calibrated parameter values and uncertainty estimates.
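The Stage-2 "postprocessor" idea can be sketched as follows. This is a toy version with assumed error-model parameters a and b for the linear heteroscedasticity model sd_t = a + b*q_t; the paper's inference scheme is more complete:

```python
import numpy as np

def fit_stage2(residuals, predicted_flow, a, b):
    # Standardize residuals with the heteroscedasticity model sd_t = a + b*q_t,
    # then estimate the AR(1) persistence of the standardized residuals.
    eta = residuals / (a + b * predicted_flow)
    phi = np.corrcoef(eta[:-1], eta[1:])[0, 1]   # lag-1 autocorrelation
    return eta, phi

# synthetic check with known phi = 0.7
rng = np.random.default_rng(2)
n = 5000
q = 10.0 + 5.0 * rng.random(n)                   # predicted flows
eta = np.empty(n)
eta[0] = rng.standard_normal()
for t in range(1, n):                            # AR(1) standardized errors
    eta[t] = 0.7 * eta[t - 1] + np.sqrt(1 - 0.7**2) * rng.standard_normal()
r = eta * (0.1 + 0.05 * q)                       # heteroscedastic, autocorrelated residuals

_, phi_hat = fit_stage2(r, q, a=0.1, b=0.05)
```

Because the standardization and the AR(1) fit are decoupled from hydrological calibration, the error parameters cannot interact with water balance parameters, which is exactly the robustness property the abstract highlights.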
Detecting errors and anomalies in computerized materials control and accountability databases
Whiteson, R.; Hench, K.; Yarbro, T.; Baumgart, C.
1998-12-31
The Automated MC and A Database Assessment project is aimed at improving anomaly and error detection in materials control and accountability (MC and A) databases and increasing confidence in the data that they contain. Anomalous data resulting in poor categorization of nuclear material inventories greatly reduces the value of the database information to users. Therefore it is essential that MC and A data be assessed periodically for anomalies or errors. Anomaly detection can identify errors in databases and thus provide assurance of the integrity of data. An expert system has been developed at Los Alamos National Laboratory that examines these large databases for anomalous or erroneous data. For several years, MC and A subject matter experts at Los Alamos have been using this automated system to examine the large amounts of accountability data that the Los Alamos Plutonium Facility generates. These data are collected and managed by the Material Accountability and Safeguards System, a near-real-time computerized nuclear material accountability and safeguards system. This year they have expanded the user base, customizing the anomaly detector for the varying requirements of different groups of users. This paper describes the progress in customizing the expert systems to the needs of the users of the data and reports on their results.
NASA Astrophysics Data System (ADS)
Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Tang, Yangchao
2016-02-01
The paper designs a roundness measurement model with multi-systematic error, which takes eccentricity, probe offset, radius of tip head of probe, and tilt error into account for roundness measurement of cylindrical components. The effects of the systematic errors and radius of components are analysed in the roundness measurement. The proposed method is built on the instrument with a high precision rotating spindle. The effectiveness of the proposed method is verified by experiment with the standard cylindrical component, which is measured on a roundness measuring machine. Compared to the traditional limacon measurement model, the accuracy of roundness measurement can be increased by about 2.2 μm using the proposed roundness measurement model for the object with a large radius of around 37 mm. The proposed method can improve the accuracy of roundness measurement and can be used for error separation, calibration, and comparison, especially for cylindrical components with a large radius.
On modeling animal movements using Brownian motion with measurement error.
Pozdnyakov, Vladimir; Meyer, Thomas; Wang, Yu-Bo; Yan, Jun
2014-02-01
Modeling animal movements with Brownian motion (or more generally by a Gaussian process) has a long tradition in ecological studies. The recent Brownian bridge movement model (BBMM), which incorporates measurement errors, has been quickly adopted by ecologists because of its simplicity and tractability. We discuss some nontrivial properties of the discrete-time stochastic process that results from observing a Brownian motion with added normal noise at discrete times. In particular, we demonstrate that the observed sequence of random variables is not Markov. Consequently the expected occupation time between two successively observed locations does not depend on just those two observations; the whole path must be taken into account. Nonetheless, the exact likelihood function of the observed time series remains tractable; it requires only sparse matrix computations. The likelihood-based estimation procedure is described in detail and compared to the BBMM estimation. PMID:24669719
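The tractable exact likelihood can be sketched with dense matrices (the paper exploits sparsity; the zero starting point and the parameter names here are our assumptions):

```python
import numpy as np

def bm_noise_loglik(z, t, sigma2, tau2):
    # Exact Gaussian log-likelihood of a Brownian motion (diffusion sigma2)
    # observed at times t with i.i.d. N(0, tau2) measurement noise.
    # Cov(Z_i, Z_j) = sigma2 * min(t_i, t_j) + tau2 * delta_ij,
    # assuming the path starts from 0 at time 0.
    S = sigma2 * np.minimum.outer(t, t) + tau2 * np.eye(len(t))
    L = np.linalg.cholesky(S)
    alpha = np.linalg.solve(L, z)
    return -0.5 * alpha @ alpha - np.log(np.diag(L)).sum() - 0.5 * len(t) * np.log(2 * np.pi)

rng = np.random.default_rng(4)
t = np.arange(1.0, 201.0)
path = np.cumsum(rng.standard_normal(200))          # sigma2 = 1 Brownian path
z = path + np.sqrt(0.5) * rng.standard_normal(200)  # tau2 = 0.5 observation noise

ll_true = bm_noise_loglik(z, t, sigma2=1.0, tau2=0.5)
ll_no_noise = bm_noise_loglik(z, t, sigma2=1.0, tau2=1e-6)
```

Because the noise makes the observed sequence non-Markov, the likelihood must be evaluated on the whole path at once, as above, rather than increment by increment.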
Mode error analysis of impedance measurement using twin wires
NASA Astrophysics Data System (ADS)
Huang, Liang-Sheng; Yoshiro, Irie; Liu, Yu-Dong; Wang, Sheng
2015-03-01
Both longitudinal and transverse coupling impedance for some critical components need to be measured for accelerator design. The twin wires method is widely used to measure longitudinal and transverse impedance on the bench. A mode error is induced when the twin wires method is used with a two-port network analyzer. Here, the mode error is analyzed theoretically and an example analysis is given. Moreover, the mode error in the measurement is a few percent when a hybrid with no less than 25 dB isolation and a splitter with no less than 20 dB magnitude error are used. Supported by Natural Science Foundation of China (11175193, 11275221)
Accounting for data errors discovered from an audit in multiple linear regression.
Shepherd, Bryan E; Yu, Chang
2011-09-01
A data coordinating team performed onsite audits and discovered discrepancies between the data sent to the coordinating center and that recorded at sites. We present statistical methods for incorporating audit results into analyses. This can be thought of as a measurement error problem, where the distribution of errors is a mixture with a point mass at 0. If the error rate is nonzero, then even if the mean of the discrepancy between the reported and correct values of a predictor is 0, naive estimates of the association between two continuous variables will be biased. We consider scenarios where there are (1) errors in the predictor, (2) errors in the outcome, and (3) possibly correlated errors in the predictor and outcome. We show how to incorporate the error rate and magnitude, estimated from a random subset (the audited records), to compute unbiased estimates of association and proper confidence intervals. We then extend these results to multiple linear regression where multiple covariates may be incorrect in the database and the rate and magnitude of the errors may depend on study site. We study the finite sample properties of our estimators using simulations, discuss some practical considerations, and illustrate our methods with data from 2815 HIV-infected patients in Latin America, of whom 234 had their data audited using a sequential auditing plan. PMID:21281274
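A hedged sketch of audit-based correction in the simplest case, simple linear regression with errors only in the predictor, using a regression-calibration-style factor estimated from the audited subset (the paper's estimators and variance formulas are more general):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 5000, 300
x_true = rng.standard_normal(n)
# discrepancies are a mixture: point mass at 0 (most records correct) plus noise
err = np.where(rng.random(n) < 0.2, 2.0 * rng.standard_normal(n), 0.0)
x_obs = x_true + err                       # what the database recorded
y = 2.0 * x_true + rng.standard_normal(n)  # true slope = 2

c = np.cov(x_obs, y)
beta_naive = c[0, 1] / c[0, 0]             # attenuated towards zero

audit = rng.choice(n, size=m, replace=False)    # audited records reveal x_true
ca = np.cov(x_obs[audit], x_true[audit])
lam = ca[0, 1] / ca[0, 0]                  # calibration factor from the audit
beta_corrected = beta_naive / lam
```

Even with a zero-mean discrepancy, `beta_naive` is biased low; dividing by the audit-estimated factor recovers an approximately unbiased slope, which is the core idea the abstract describes.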
Gaye, Amadou; Burton, Thomas W. Y.; Burton, Paul R.
2015-01-01
Motivation: Very large studies are required to provide sufficiently big sample sizes for adequately powered association analyses. This can be an expensive undertaking and it is important that an accurate sample size is identified. For more realistic sample size calculation and power analysis, the impact of unmeasured aetiological determinants and the quality of measurement of both outcome and explanatory variables should be taken into account. Conventional methods to analyse power use closed-form solutions that are not flexible enough to cater for all of these elements easily. They often result in a potentially substantial overestimation of the actual power. Results: In this article, we describe the Estimating Sample-size and Power in R by Exploring Simulated Study Outcomes (ESPRESSO) tool, which allows assessment errors to be incorporated into power calculations under various biomedical scenarios. We also report a real-world analysis where we used this tool to answer an important strategic question for an existing cohort. Availability and implementation: The software is available for online calculation and downloads at http://espresso-research.org. The code is freely available at https://github.com/ESPRESSO-research. Contact: louqman@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25908791
Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint
Stynes, J. K.; Ihas, B.
2012-04-01
The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work derives the reflector slope errors from the reflected image of the absorber together with an independent measurement of the absorber location, so the accuracy of the slope error measurement depends on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.
Pressure Change Measurement Leak Testing Errors
Pryor, Jeff M; Walker, William C
2014-01-01
A pressure change test is a common leak testing method used in construction and Non-Destructive Examination (NDE). The test is known as a fast, simple, and easy to apply evaluation method. While this method may be fairly quick to conduct and require simple instrumentation, the engineering behind this type of test is more complex than is apparent on the surface. This paper discusses some of the more common errors made during the application of a pressure change test and gives the test engineer insight into how to correctly compensate for these factors. The principles discussed here apply to ideal gases such as air or other monatomic or diatomic gases; however, these same principles can be applied to polyatomic gases or liquid flow rate with altered formulas specific to those types of tests using the same methodology.
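One of the most common errors of this kind is attributing a thermally induced pressure drop to a leak. A minimal ideal-gas sketch of the compensation (function name, units, and numbers are illustrative, not from the paper):

```python
def leak_rate(p1, t1, p2, t2, volume, dt):
    # Temperature-compensated pressure-change leak rate for an ideal gas in a
    # rigid volume. Correcting the final pressure to the starting temperature
    # removes the apparent "leak" caused purely by the gas cooling.
    # Units: Pa, K, m^3, s; result in Pa*m^3/s (negative => no net loss).
    p2_corr = p2 * (t1 / t2)              # isochoric ideal-gas correction
    return volume * (p1 - p2_corr) / dt

# 1 kPa apparent drop over an hour, but the gas also cooled by 1.5 K:
q_naive = 0.5 * (200000.0 - 199000.0) / 3600.0
q_comp = leak_rate(200000.0, 293.0, 199000.0, 291.5, 0.5, 3600.0)
```

The naive rate is positive (a spurious leak indication), while the compensated rate is not, showing why the gas law must be applied before judging the test.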
Temperature error in radiation thermometry caused by emissivity and reflectance measurement error.
Corwin, R R; Rodenburghii, A
1994-04-01
A general expression for the temperature error caused by emissivity uncertainty is developed, and it is concluded that shorter-wavelength systems provide significantly less temperature error. A technique to measure the normal emissivity is proposed that uses a normally incident light beam and an aperture to collect a portion of the energy reflected from the surface, measuring essentially both the specular component and the biangular reflectance at the edge of the aperture. The theoretical results show that the aperture size need not be substantial to provide reasonably low temperature errors for a broad class of materials and surface reflectance conditions. PMID:20885529
Using neural nets to measure ocular refractive errors: a proposal
NASA Astrophysics Data System (ADS)
Netto, Antonio V.; Ferreira de Oliveira, Maria C.
2002-12-01
We propose the development of a functional system for diagnosing and measuring ocular refractive errors in the human eye (astigmatism, hypermetropia and myopia) by automatically analyzing images of the human ocular globe acquired with the Hartmann-Shack (HS) technique. HS images are to be input into a system capable of recognizing the presence of a refractive error and outputting a measure of that error. The system should pre-process an image supplied by the acquisition technique and then use artificial neural networks combined with fuzzy logic to extract the necessary information and output an automated diagnosis of the refractive errors that may be present in the ocular globe under exam.
Phase error compensation methods for high-accuracy profile measurement
NASA Astrophysics Data System (ADS)
Cai, Zewei; Liu, Xiaoli; Peng, Xiang; Zhang, Zonghua; Jiang, Hao; Yin, Yongkai; Huang, Shujun
2016-04-01
In phase-shifting, fringe-projection profilometry, the nonlinear intensity response of the projector-camera setup, called the gamma effect, is a major source of error in phase retrieval. This paper proposes two novel, accurate approaches that realize both active and passive phase error compensation based on a universal phase error model suitable for an arbitrary phase-shifting step. Experimental results on phase error compensation and profile measurement of standard components verified the validity and accuracy of the two proposed approaches, which remain robust under changing measurement conditions.
Measurement error in biomarkers: sources, assessment, and impact on studies.
White, Emily
2011-01-01
Measurement error in a biomarker refers to the error of a biomarker measure applied in a specific way to a specific population, versus the true (etiologic) exposure. In epidemiologic studies, this error includes not only laboratory error, but also errors (variations) introduced during specimen collection and storage, and due to day-to-day, month-to-month, and year-to-year within-subject variability of the biomarker. Validity and reliability studies that aim to assess the degree of biomarker error for use of a specific biomarker in epidemiologic studies must be properly designed to measure all of these sources of error. Validity studies compare the biomarker to be used in an epidemiologic study to a perfect measure in a group of subjects. The parameters used to quantify the error in a binary marker are sensitivity and specificity. For continuous biomarkers, the parameters used are bias (the mean difference between the biomarker and the true exposure) and the validity coefficient (correlation of the biomarker with the true exposure). Often a perfect measure of the exposure is not available, so reliability (repeatability) studies are conducted. These are analysed using kappa for binary biomarkers and the intraclass correlation coefficient for continuous biomarkers. Equations are given which use these parameters from validity or reliability studies to estimate the impact of nondifferential biomarker measurement error on the risk ratio in an epidemiologic study that will use the biomarker. Under nondifferential error, the attenuation of the risk ratio is towards the null and is often quite substantial, even for reasonably accurate biomarker measures. Differential biomarker error between cases and controls can bias the risk ratio in any direction and completely invalidate an epidemiologic study. PMID:22997860
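For a continuous biomarker with nondifferential classical error, the attenuation the abstract describes is often approximated by raising the true risk ratio to the squared validity coefficient. This is a textbook-style approximation, not a formula quoted from the paper:

```python
def attenuated_rr(rr_true, validity_rho):
    # Approximate observed risk ratio per unit of a continuous biomarker under
    # nondifferential classical measurement error: the log risk ratio is
    # attenuated by rho^2, where rho is the validity coefficient (correlation
    # of the biomarker with the true exposure).
    return rr_true ** (validity_rho ** 2)

# Even a fairly accurate biomarker (rho = 0.7) halves the apparent effect:
rr_obs = attenuated_rr(2.0, 0.7)   # about 1.4 instead of 2.0
```

This shows why "reasonably accurate" biomarkers can still yield substantially attenuated risk ratios, as the abstract notes.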
The error analysis and online measurement of linear slide motion error in machine tools
NASA Astrophysics Data System (ADS)
Su, H.; Hong, M. S.; Li, Z. J.; Wei, Y. L.; Xiong, S. B.
2002-06-01
A new accurate two-probe time domain method is put forward to measure the straight-going component motion error in machine tools. The non-periodic, non-closing character of the straightness profile error is liable to introduce higher-order harmonic distortion into the measurement results. This distortion can, however, be avoided by the new two-probe time domain method through a symmetry continuation algorithm, uniformity, and the least squares method. The harmonic suppression is analysed in detail using modern control theory. Both the straight-going component motion error of the machine tool and the profile error of a workpiece manufactured on that machine can be measured at the same time, and all of this information is available to diagnose the origin of faults in machine tools. The analysis is verified by experiment.
Error analysis in stereo vision for location measurement of 3D point
NASA Astrophysics Data System (ADS)
Li, Yunting; Zhang, Jun; Tian, Jinwen
2015-12-01
Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that calculates the uncertainty region of a point location by intersecting the two pixel fields of view, which may produce loose bounds. Moreover, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward method to estimate the location error that takes most sources of error into account. We sum up and simplify all the input errors into five parameters by a rotation transformation. We then use the fast midpoint-method algorithm to derive the mathematical relationships between the target point and these parameters; thus the expectation and covariance matrix of the 3D point location are obtained, which constitute the uncertainty region of the point location. Afterwards, we trace the propagation of the primitive input errors through the stereo system, covering the whole analysis process from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our method.
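The covariance propagation underlying this kind of analysis can be sketched generically with a first-order (delta-method) step; the depth-from-disparity example and all numbers below are our assumptions, not the paper's model:

```python
import numpy as np

def propagate_cov(f, x0, Sx, h=1e-6):
    # First-order propagation Sy ~= J Sx J^T, with J the forward-difference
    # numerical Jacobian of f at x0.
    y0 = np.atleast_1d(f(x0))
    J = np.empty((y0.size, x0.size))
    for k in range(x0.size):
        dx = np.zeros_like(x0)
        dx[k] = h
        J[:, k] = (np.atleast_1d(f(x0 + dx)) - y0) / h
    return J @ Sx @ J.T

# toy stereo example: depth from disparity, Z = fx * B / d
fx, B = 700.0, 0.1                        # focal length (px) and baseline (m), assumed
depth = lambda x: fx * B / x              # x[0] = disparity in pixels
Sz = propagate_cov(depth, np.array([10.0]), np.array([[0.25]]))  # 0.5 px disparity sd
```

The same `propagate_cov` pattern extends to the five-parameter input of the paper by passing a triangulation function of five arguments and a 5x5 input covariance.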
System Measures Errors Between Time-Code Signals
NASA Technical Reports Server (NTRS)
Cree, David; Venkatesh, C. N.
1993-01-01
System measures timing errors between signals produced by three asynchronous time-code generators. Errors between 1-second clock pulses resolved to 2 microseconds. Basic principle of computation of timing errors as follows: central processing unit in microcontroller constantly monitors time data received from time-code generators for changes in 1-second time-code intervals. In response to any such change, microprocessor buffers count of 16-bit internal timer.
Contouring error compensation on a micro coordinate measuring machine
NASA Astrophysics Data System (ADS)
Fan, Kuang-Chao; Wang, Hung-Yu; Ye, Jyun-Kuan
2011-12-01
In recent years, three-dimensional measurement for nanotechnology research has received great attention worldwide. Given the demand for high accuracy, error compensation of the measuring machine is very important. In this study, a high precision Micro-CMM (coordinate measuring machine) has been developed which is composed of a coplanar stage for reducing the Abbé error in the vertical direction, a linear diffraction grating interferometer (LDGI) as the position feedback sensor with nanometer resolution, and ultrasonic motors for position control. This paper presents the error compensation strategy, covering both home accuracy and positioning accuracy in both axes. For home error compensation, we utilize a commercial DVD pick-up head and its S-curve principle to accurately locate the origin of each axis. For positioning error compensation, the absolute positions relative to the home are calibrated by laser interferometer and the error budget table is stored for feed-forward error compensation. Contouring error can thus be compensated if the compensations of both X and Y positioning errors are applied. Experiments show the contouring accuracy can be controlled to within 50 nm after compensation.
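The feed-forward use of a stored error budget table can be sketched per axis as a table lookup with interpolation (calibration values below are invented for illustration):

```python
import numpy as np

# assumed per-axis calibration table: commanded position (mm) vs.
# laser-interferometer-measured positioning error (um)
cal_pos_mm = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
cal_err_um = np.array([0.0, 0.12, 0.20, 0.31, 0.38])

def compensated_target(target_mm):
    # Feed-forward correction: command the stage to the target minus the
    # interpolated calibrated error, so the realized position lands on target.
    return target_mm - np.interp(target_mm, cal_pos_mm, cal_err_um) * 1e-3

cmd = compensated_target(7.5)   # corrected command for a 7.5 mm move
```

Applying the same correction independently on the X and Y axes yields the contouring compensation described above.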
Conditional Standard Errors of Measurement for Composite Scores Using IRT
ERIC Educational Resources Information Center
Kolen, Michael J.; Wang, Tianyou; Lee, Won-Chan
2012-01-01
Composite scores are often formed from test scores on educational achievement test batteries to provide a single index of achievement over two or more content areas or two or more item types on that test. Composite scores are subject to measurement error, and as with scores on individual tests, the amount of error variability typically depends on…
Investigation of Measurement Errors in Doppler Global Velocimetry
NASA Technical Reports Server (NTRS)
Meyers, James F.; Lee, Joseph W.
1999-01-01
While the initial development phase of Doppler Global Velocimetry (DGV) has been successfully completed, there remains a critical next phase to be conducted, namely the determination of an error budget to provide quantitative bounds for measurements obtained by this technology. This paper describes a laboratory investigation that consisted of a detailed interrogation of potential error sources to determine their contribution to the overall DGV error budget. A few sources of error were obvious, e.g., iodine vapor absorption lines, optical systems, and camera characteristics. However, additional non-obvious sources were also discovered, e.g., laser frequency and single-frequency stability, media scattering characteristics, and interference fringes. This paper describes each identified error source, its effect on the overall error budget, and where possible, corrective procedures to reduce or eliminate its effect.
Non-Gaussian Error Distributions of LMC Distance Moduli Measurements
NASA Astrophysics Data System (ADS)
Crandall, Sara; Ratra, Bharat
2015-12-01
We construct error distributions for a compilation of 232 Large Magellanic Cloud (LMC) distance moduli values from de Grijs et al. that give an LMC distance modulus of (m - M)0 = 18.49 ± 0.13 mag (median and 1σ symmetrized error). Central estimates found from weighted mean and median statistics are used to construct the error distributions. The weighted mean error distribution is non-Gaussian—flatter and broader than Gaussian—with more (less) probability in the tails (center) than is predicted by a Gaussian distribution; this could be the consequence of unaccounted-for systematic uncertainties. The median statistics error distribution, which does not make use of the individual measurement errors, is also non-Gaussian—more peaked than Gaussian—with less (more) probability in the tails (center) than is predicted by a Gaussian distribution; this could be the consequence of publication bias and/or the non-independence of the measurements. We also construct the error distributions of 247 SMC distance moduli values from de Grijs & Bono. We find a central estimate of (m - M)0 = 18.94 ± 0.14 mag (median and 1σ symmetrized error), and similar probabilities for the error distributions.
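A weighted-mean central estimate and the per-measurement deviations used to build such an error distribution can be sketched as follows (the convention for combining the individual and central uncertainties is ours; the paper's construction may differ in detail):

```python
import numpy as np

def weighted_mean_stats(mu, sigma):
    # Inverse-variance weighted mean, its formal error, and the
    # number-of-sigma deviation of each measurement from the central value.
    w = 1.0 / sigma**2
    wm = np.sum(w * mu) / np.sum(w)
    wm_err = 1.0 / np.sqrt(np.sum(w))
    nsig = (mu - wm) / np.sqrt(sigma**2 + wm_err**2)   # assumed combination rule
    return wm, wm_err, nsig

# toy compilation of distance moduli (values invented for illustration)
mu = np.array([18.45, 18.52, 18.49, 18.60, 18.40])
sig = np.array([0.05, 0.08, 0.04, 0.10, 0.06])
wm, wm_err, nsig = weighted_mean_stats(mu, sig)
```

The histogram of `nsig` over a real compilation is what gets compared against a unit Gaussian; excess probability in its tails signals unaccounted-for systematics, as the abstract describes.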
NASA Astrophysics Data System (ADS)
Konings, A. G.; Gruber, A.; Mccoll, K. A.; Alemohammad, S. H.; Entekhabi, D.
2015-12-01
Validating large-scale estimates of geophysical variables by comparing them to in situ measurements neglects the fact that these in situ measurements are not generally representative of the larger area. That is, in situ measurements contain some `representativeness error'. They also have their own sensor errors. The naïve approach of characterizing the errors of a remote sensing or modeling dataset by comparison to in situ measurements thus leads to error estimates that are spuriously inflated by the representativeness and other errors in the in situ measurements. Nevertheless, this naïve approach is still very common in the literature. In this work, we introduce an alternative estimator of the large-scale dataset error that explicitly takes into account the fact that the in situ measurements have some unknown error. The performance of the two estimators is then compared in the context of soil moisture datasets under different conditions for the true soil moisture climatology and dataset biases. The new estimator is shown to lead to a more accurate characterization of the dataset errors under the most common conditions. If a third dataset is available, the principles of the triple collocation method can be used to determine the errors of both the large-scale estimates and in situ measurements. However, triple collocation requires that the errors in all datasets are uncorrelated with each other and with the truth. We show that even when the assumptions of triple collocation are violated, a triple collocation-based validation approach may still be more accurate than a naïve comparison to in situ measurements that neglects representativeness errors.
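The triple collocation estimator mentioned at the end can be sketched in its covariance notation (dataset names and noise levels below are invented; the method assumes mutually independent errors that are uncorrelated with the truth):

```python
import numpy as np

def tc_error_variances(x, y, z):
    # Covariance-notation triple collocation: error variance of each of three
    # collocated datasets measuring the same truth.
    C = np.cov(np.vstack([x, y, z]))
    ex = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
    ey = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
    ez = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
    return ex, ey, ez

rng = np.random.default_rng(5)
truth = rng.standard_normal(200000)
sm_sat = truth + 0.3 * rng.standard_normal(truth.size)      # e.g. satellite retrieval
sm_mod = truth + 0.5 * rng.standard_normal(truth.size)      # e.g. model estimate
sm_insitu = truth + 0.4 * rng.standard_normal(truth.size)   # in situ, incl. representativeness error

ex, ey, ez = tc_error_variances(sm_sat, sm_mod, sm_insitu)
```

Unlike a naive comparison against `sm_insitu`, which would charge the in situ error to the satellite product, this returns a separate error variance for each dataset, including the in situ measurements themselves.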
Aliasing errors in measurements of beam position and ellipticity
NASA Astrophysics Data System (ADS)
Ekdahl, Carl
2005-09-01
Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.
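The aliasing effect of a discrete detector array can be reproduced with a small numerical model. The sketch below is my own construction, not the paper's simulation: it uses the standard image-current multipole expansion of the wall signal for a filament beam in a round pipe and a first-harmonic centroid estimator, and shows the aliasing error shrinking as detectors are added:

```python
import numpy as np

def wall_signal(theta_det, r0, phi, R, nmax=40):
    """Azimuthal wall-current density (arbitrary units) at detector angle
    theta_det, for a filament beam at polar position (r0, phi) inside a
    round pipe of radius R (image-current multipole expansion)."""
    n = np.arange(1, nmax + 1)
    return 1.0 + 2.0 * np.sum((r0 / R) ** n * np.cos(n * (theta_det - phi)))

def estimate_x(n_det, r0, phi, R=1.0):
    """First-harmonic centroid estimate from n_det equally spaced detectors."""
    thetas = 2 * np.pi * np.arange(n_det) / n_det
    s = np.array([wall_signal(t, r0, phi, R) for t in thetas])
    return R * np.sum(s * np.cos(thetas)) / np.sum(s)

# A well-off-axis filament: the discrete array aliases the unmeasured
# higher multipoles into the position estimate.
x_true = 0.4 * np.cos(0.3)
err4 = abs(estimate_x(4, 0.4, 0.3) - x_true)   # usual four-detector BPM
err8 = abs(estimate_x(8, 0.4, 0.3) - x_true)   # more detectors, less aliasing
```

With four detectors the leading alias term scales as (r0/R)^3, with eight as (r0/R)^7, which is why adding detectors suppresses the error so strongly for off-center beams.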
Error tolerance of topological codes with independent bit-flip and measurement errors
NASA Astrophysics Data System (ADS)
Andrist, Ruben S.; Katzgraber, Helmut G.; Bombin, H.; Martin-Delgado, M. A.
2016-07-01
Topological quantum error correction codes are currently among the most promising candidates for efficiently dealing with the decoherence effects inherently present in quantum devices. Numerically, their theoretical error threshold can be calculated by mapping the underlying quantum problem to a related classical statistical-mechanical spin system with quenched disorder. Here, we present results for the general fault-tolerant regime, where we consider both qubit and measurement errors. However, unlike in previous studies, here we vary the strength of the different error sources independently. Our results highlight peculiar differences between toric and color codes. This study complements previous results published in New J. Phys. 13, 083006 (2011), 10.1088/1367-2630/13/8/083006.
Temperature measurement error simulation of the pure rotational Raman lidar
NASA Astrophysics Data System (ADS)
Jia, Jingyu; Huang, Yong; Wang, Zhirui; Yi, Fan; Shen, Jianglin; Jia, Xiaoxing; Chen, Huabin; Yang, Chuan; Zhang, Mingyang
2015-11-01
Temperature represents the atmospheric thermodynamic state. Measuring the atmospheric temperature accurately and precisely is very important for understanding the physics of atmospheric processes. Lidar has several advantages for atmospheric temperature measurement. Based on the lidar equation and the theory of pure rotational Raman (PRR) scattering, we simulated the temperature measurement errors of a double-grating-polychromator (DGP) based PRR lidar. First, without considering the attenuation terms of the atmospheric transmittance and the range in the lidar equation, we simulated the temperature measurement errors as influenced by the beam-splitting system parameters, such as the center wavelength, the receiving bandwidth and the atmospheric temperature. We analyzed three types of temperature measurement error in theory, and propose several design methods for the beam-splitting system to reduce these errors. Second, we simulated the temperature measurement error profiles from the full lidar equation. Once the lidar power-aperture product is determined, the main target of our lidar system is to reduce the statistical and leakage errors.
Measuring worst-case errors in a robot workcell
Simon, R.W.; Brost, R.C.; Kholwadwala, D.K.
1997-10-01
Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors.
Aerial measurement error with a dot planimeter: Some experimental estimates
NASA Technical Reports Server (NTRS)
Yuill, R. S.
1971-01-01
A shape analysis is presented which utilizes a computer to simulate a multiplicity of dot grids mathematically. Results indicate that measurement accuracy is determined almost entirely by the number of dots placed over the area to be measured, the shape indices being of little significance. Equations and graphs are provided from which the average expected error, and the maximum range of error, for various numbers of dot points can be read.
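The dot-planimeter estimate and its dependence on dot count can be reproduced with a small Monte Carlo sketch. The shape (a unit circle), the square grid with a random offset, and the error metric are my assumptions, not the paper's protocol:

```python
import numpy as np

def dot_grid_area(spacing, offset, radius=1.0):
    """Estimate the area of a circle by counting dots of a square grid
    (given spacing and random offset) that fall inside it; each dot
    represents spacing**2 of area."""
    xs = np.arange(-radius - spacing, radius + spacing, spacing) + offset[0]
    ys = np.arange(-radius - spacing, radius + spacing, spacing) + offset[1]
    X, Y = np.meshgrid(xs, ys)
    return np.sum(X**2 + Y**2 <= radius**2) * spacing**2

def mean_abs_error(spacing, trials=200, seed=1):
    """Average absolute area error over many random grid placements."""
    rng = np.random.default_rng(seed)
    errs = [abs(dot_grid_area(spacing, rng.uniform(0, spacing, 2)) - np.pi)
            for _ in range(trials)]
    return float(np.mean(errs))

coarse = mean_abs_error(0.5)  # roughly a dozen dots over the circle
fine = mean_abs_error(0.1)    # a few hundred dots
```

Consistent with the abstract's finding, the error here is driven by the number of dots covering the figure, independent of any shape index.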
Space acceleration measurement system triaxial sensor head error budget
NASA Astrophysics Data System (ADS)
Thomas, John E.; Peters, Rex B.; Finley, Brian D.
1992-01-01
The objective of the Space Acceleration Measurement System (SAMS) is to measure and record the microgravity environment for a given experiment aboard the Space Shuttle. To accomplish this, SAMS uses remote triaxial sensor heads (TSH) that can be mounted directly on or near an experiment. The errors of the TSH are reduced by calibrating it before and after each flight. The associated error budget for the calibration procedure is discussed here.
Identification and Minimization of Errors in Doppler Global Velocimetry Measurements
NASA Technical Reports Server (NTRS)
Meyers, James F.; Lee, Joseph W.
2000-01-01
A systematic laboratory investigation was conducted to identify potential measurement error sources in Doppler Global Velocimetry technology. Once identified, methods were developed to eliminate or at least minimize the effects of these errors. The areas considered included the iodine vapor cell, optical alignment, scattered light characteristics, noise sources, and the laser. Upon completion, the demonstrated measurement uncertainty was reduced to 0.5 m/sec.
Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes
ERIC Educational Resources Information Center
Zavorsky, Gerald S.
2010-01-01
Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…
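The repeatability statistic described above follows directly from the within-subject standard deviation (the 2.77 factor is √2 × 1.96, the 95% bound on the absolute difference of two repeated measurements). A minimal sketch, with illustrative data:

```python
import numpy as np

def repeatability(measurements):
    """Within-subject SD (Sw) and repeatability (2.77 * Sw) from a list of
    per-subject repeated measurements. 2.77 ~= sqrt(2) * 1.96: with 95%
    probability, two repeated measurements on the same subject differ by
    less than the repeatability."""
    within_vars = [np.var(m, ddof=1) for m in measurements]
    sw = float(np.sqrt(np.mean(within_vars)))
    return sw, 2.77 * sw

# Illustrative duplicate readings for three subjects.
data = [[4.1, 4.3], [3.6, 3.5], [5.0, 5.2]]
sw, rep = repeatability(data)
```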
Measurement error caused by spatial misalignment in environmental epidemiology
Gryparis, Alexandros; Paciorek, Christopher J.; Zeka, Ariana; Schwartz, Joel; Coull, Brent A.
2009-01-01
In many environmental epidemiology studies, the locations and/or times of exposure measurements and health assessments do not match. In such settings, health effects analyses often use the predictions from an exposure model as a covariate in a regression model. Such exposure predictions contain some measurement error as the predicted values do not equal the true exposures. We provide a framework for spatial measurement error modeling, showing that smoothing induces a Berkson-type measurement error with nondiagonal error structure. From this viewpoint, we review the existing approaches to estimation in a linear regression health model, including direct use of the spatial predictions and exposure simulation, and explore some modified approaches, including Bayesian models and out-of-sample regression calibration, motivated by measurement error principles. We then extend this work to the generalized linear model framework for health outcomes. Based on analytical considerations and simulation results, we compare the performance of all these approaches under several spatial models for exposure. Our comparisons underscore several important points. First, exposure simulation can perform very poorly under certain realistic scenarios. Second, the relative performance of the different methods depends on the nature of the underlying exposure surface. Third, traditional measurement error concepts can help to explain the relative practical performance of the different methods. We apply the methods to data on the association between levels of particulate matter and birth weight in the greater Boston area. PMID:18927119
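One of the "traditional measurement error concepts" invoked here is classical-error attenuation, which regression calibration corrects. The simulation below is a generic linear illustration, not the authors' spatial exposure model, and in practice the reliability ratio would come from validation data rather than from the true exposure:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
beta = 2.0
x = rng.normal(0, 1, n)              # true exposure
w = x + rng.normal(0, 0.8, n)        # error-prone exposure measurement
y = beta * x + rng.normal(0, 1, n)   # health outcome

def ols_slope(pred, resp):
    """Simple-regression slope of resp on pred."""
    return np.cov(pred, resp)[0, 1] / np.var(pred, ddof=1)

naive = ols_slope(w, y)                        # attenuated toward zero
lam = np.var(x, ddof=1) / np.var(w, ddof=1)    # reliability ratio (would be
                                               # estimated from validation data)
corrected = naive / lam                        # regression calibration
```

The naive slope recovers roughly λβ rather than β; dividing by the reliability ratio undoes the attenuation for classical error, though, as the abstract notes, smoothed exposure predictions induce Berkson-type error with a different structure.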
Methods to Assess Measurement Error in Questionnaires of Sedentary Behavior
Sampson, Joshua N; Matthews, Charles E; Freedman, Laurence; Carroll, Raymond J.; Kipnis, Victor
2015-01-01
Sedentary behavior has already been associated with mortality, cardiovascular disease, and cancer. Questionnaires are an affordable tool for measuring sedentary behavior in large epidemiological studies. Here, we introduce and evaluate two statistical methods for quantifying measurement error in questionnaires. Accurate estimates are needed for assessing questionnaire quality. The two methods are intended for validation studies that measure a sedentary behavior by both questionnaire and accelerometer on multiple days. The first method fits a reduced model by assuming the accelerometer is without error, while the second method fits a more complete model that allows both measures to have error. Because accelerometers tend to be highly accurate, we show that ignoring the accelerometer's measurement error can result in more accurate estimates of the questionnaire's measurement error in some scenarios. In this manuscript, we derive asymptotic approximations for the Mean-Squared Error of the estimated parameters from both methods, evaluate their dependence on study design and behavior characteristics, and offer an R package so investigators can make an informed choice between the two methods. We demonstrate the difference between the two methods in a recent validation study comparing Previous Day Recalls (PDR) to an accelerometer-based ActivPal. PMID:27340315
Error-tradeoff and error-disturbance relations for incompatible quantum measurements.
Branciard, Cyril
2013-04-23
Heisenberg's uncertainty principle is one of the main tenets of quantum theory. Nevertheless, and despite its fundamental importance for our understanding of quantum foundations, there has been some confusion in its interpretation: Although Heisenberg's first argument was that the measurement of one observable on a quantum state necessarily disturbs another incompatible observable, standard uncertainty relations typically bound the indeterminacy of the outcomes when either one or the other observable is measured. In this paper, we quantify precisely Heisenberg's intuition. Even if two incompatible observables cannot be measured together, one can still approximate their joint measurement, at the price of introducing some errors with respect to the ideal measurement of each of them. We present a tight relation characterizing the optimal tradeoff between the error on one observable vs. the error on the other. As a particular case, our approach allows us to characterize the disturbance of an observable induced by the approximate measurement of another one; we also derive a stronger error-disturbance relation for this scenario. PMID:23564344
Errors Associated with the Direct Measurement of Radionuclides in Wounds
Hickman, D P
2006-03-02
Work in radiation areas can occasionally result in accidental wounds containing radioactive materials. When a wound is incurred within a radiological area, the presence of radioactivity in the wound needs to be confirmed to determine if additional remedial action needs to be taken. Commonly used radiation area monitoring equipment is poorly suited for measurement of radioactive material buried within the tissue of the wound. The Lawrence Livermore National Laboratory (LLNL) In Vivo Measurement Facility has constructed a portable wound counter that provides sufficient detection of radioactivity in wounds as shown in Fig. 1. The LLNL wound measurement system is specifically designed to measure low energy photons that are emitted from uranium and transuranium radionuclides. The portable wound counting system uses a 2.5cm diameter by 1mm thick NaI(Tl) detector. The detector is connected to a Canberra NaI InSpector{trademark}. The InSpector interfaces with an IBM ThinkPad laptop computer, which operates under Genie 2000 software. The wound counting system is maintained and used at the LLNL In Vivo Measurement Facility. The hardware is designed to be portable and is occasionally deployed to respond to the LLNL Health Services facility or local hospitals for examination of personnel that may have radioactive materials within a wound. The typical detection levels in using the LLNL portable wound counter in a low background area is 0.4 nCi to 0.6 nCi assuming a near zero mass source. This paper documents the systematic errors associated with in vivo measurement of radioactive materials buried within wounds using the LLNL portable wound measurement system. These errors are divided into two basic categories, calibration errors and in vivo wound measurement errors. Within these categories, there are errors associated with particle self-absorption of photons, overlying tissue thickness, source distribution within the wound, and count errors. These errors have been examined and
Detection and Classification of Measurement Errors in Bioimpedance Spectroscopy.
Ayllón, David; Gil-Pita, Roberto; Seoane, Fernando
2016-01-01
Bioimpedance spectroscopy (BIS) measurement errors may be caused by parasitic stray capacitance, impedance mismatch, cross-talking or their very likely combination. An accurate detection and identification is of extreme importance for further analysis because in some cases and for some applications, certain measurement artifacts can be corrected, minimized or even avoided. In this paper we present a robust method to detect the presence of measurement artifacts and identify what kind of measurement error is present in BIS measurements. The method is based on supervised machine learning and uses a novel set of generalist features for measurement characterization in different immittance planes. Experimental validation has been carried out using a database of complex spectra BIS measurements obtained from different BIS applications and containing six different types of errors, as well as error-free measurements. The method obtained a low classification error (0.33%) and has shown good generalization. Since both the features and the classification schema are relatively simple, the implementation of this pre-processing task in the current hardware of bioimpedance spectrometers is possible. PMID:27362862
NASA Astrophysics Data System (ADS)
Fratini, G.; McDermitt, D. K.; Papale, D.
2013-08-01
Errors in gas concentration measurements by infrared gas analysers can occur during eddy-covariance campaigns, associated with actual or apparent instrumental drifts or with biases due to thermal expansion, dirt contamination, aging of components or errors in field operations. If occurring on long time scales (hours to days), these errors are normally ignored during flux computation, under the assumption that errors in mean gas concentrations do not affect the estimation of turbulent fluctuations and, hence, of covariances. By analysing instrument theory of operation, and using numerical simulations and field data, we show that this is not the case for instruments with curvilinear calibrations; we further show that if not appropriately accounted for, concentration biases can lead to roughly proportional systematic flux errors, where the fractional errors in fluxes are about 30-40% of the fractional errors in concentrations. We quantify these errors and characterize their dependency on the main determinants. We then propose a correction procedure that largely - potentially completely - eliminates these errors. The correction, to be applied during flux computation, is based on knowledge of instrument calibration curves and on field or laboratory calibration data. Finally, we demonstrate the occurrence of such errors and validate the correction procedure by means of a field experiment, and accordingly provide recommendations for in situ operations. The correction described in this paper will soon be available in the EddyPro software (www.licor.com/eddypro).
Measurement uncertainty evaluation of conicity error inspected on CMM
NASA Astrophysics Data System (ADS)
Wang, Dongxia; Song, Aiguo; Wen, Xiulan; Xu, Youxiong; Qiao, Guifang
2016-01-01
The cone is widely used in mechanical design for rotation, centering and fixing. Whether the conicity error can be measured and evaluated accurately will directly influence its assembly accuracy and working performance. According to the new generation geometrical product specification (GPS), the error and its measurement uncertainty should be evaluated together. The mathematical model of the minimum zone conicity error is established and an improved immune evolutionary algorithm (IIEA) is proposed to search for the conicity error. In the IIEA, initial antibodies are first generated by using quasi-random sequences and two kinds of affinities are calculated. Then, each antibody clone is generated and the clones are self-adaptively mutated so as to maintain diversity. Similar antibodies are suppressed and new random antibodies are generated. Because the mathematical model of conicity error is strongly nonlinear and the input quantities are not independent, it is difficult to use the Guide to the expression of uncertainty in measurement (GUM) method to evaluate measurement uncertainty. An adaptive Monte Carlo method (AMCM) is proposed to estimate measurement uncertainty, in which the number of Monte Carlo trials is selected adaptively and the quality of the numerical results is directly controlled. The cone part was machined on lathe CK6140 and measured on a Miracle NC 454 coordinate measuring machine (CMM). The experiment results confirm that the proposed method not only can search for the approximate solution of the minimum zone conicity error (MZCE) rapidly and precisely, but also can evaluate measurement uncertainty and give control variables with an expected numerical tolerance. The conicity errors computed by the proposed method are 20%-40% less than those computed by the NC454 CMM software and the evaluation accuracy improves significantly.
Laser tracker error determination using a network measurement
NASA Astrophysics Data System (ADS)
Hughes, Ben; Forbes, Alistair; Lewis, Andrew; Sun, Wenjuan; Veal, Dan; Nasr, Karim
2011-04-01
We report on a fast, easily implemented method to determine all the geometrical alignment errors of a laser tracker, to high precision. The technique requires no specialist equipment and can be performed in less than an hour. The technique is based on the determination of parameters of a geometric model of the laser tracker, using measurements of a set of fixed target locations, from multiple locations of the tracker. After fitting of the model parameters to the observed data, the model can be used to perform error correction of the raw laser tracker data or to derive correction parameters in the format of the tracker manufacturer's internal error map. In addition to determination of the model parameters, the method also determines the uncertainties and correlations associated with the parameters. We have tested the technique on a commercial laser tracker in the following way. We disabled the tracker's internal error compensation, and used a five-position, fifteen-target network to estimate all the geometric errors of the instrument. Using the error map generated from this network test, the tracker was able to pass a full performance validation test, conducted according to a recognized specification standard (ASME B89.4.19-2006). We conclude that the error correction determined from the network test is as effective as the manufacturer's own error correction methodologies.
Errors and correction of precipitation measurements in China
NASA Astrophysics Data System (ADS)
Ren, Zhihua; Li, Mingqin
2007-05-01
In order to discover the range of various errors in Chinese precipitation measurements and seek a correction method, 30 precipitation evaluation stations were set up countrywide before 1993. All the stations are reference stations in China. To seek a correction method for wind-induced error, a precipitation correction instrument called the “horizontal precipitation gauge” was devised beforehand. Field intercomparison observations regarding 29,000 precipitation events have been conducted using one pit gauge, two elevated operational gauges and one horizontal gauge at the above 30 stations. The range of precipitation measurement errors in China is obtained by analysis of intercomparison measurement results. The distribution of random errors and systematic errors in precipitation measurements are studied in this paper. A correction method, especially for wind-induced errors, is developed. The results prove that a correlation of power function exists between the precipitation amount caught by the horizontal gauge and the absolute difference of observations implemented by the operational gauge and pit gauge. The correlation coefficient is 0.99. For operational observations, precipitation correction can be carried out only by parallel observation with a horizontal precipitation gauge. The precipitation accuracy after correction approaches that of the pit gauge. The correction method developed is simple and feasible.
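The power-function relation reported above (between the horizontal-gauge catch and the absolute operational-minus-pit difference) can be fitted by linear least squares in log-log space. The sketch below uses synthetic data; the coefficients 0.15 and 0.9 and the noise level are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

def fit_power_law(x, y):
    """Fit y = a * x**b by linear least squares on log-transformed data."""
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_a), b

rng = np.random.default_rng(7)
horiz = rng.uniform(0.5, 20.0, 500)                       # horizontal-gauge catch (mm)
diff = 0.15 * horiz**0.9 * rng.lognormal(0, 0.05, 500)    # |operational - pit| (mm)
a, b = fit_power_law(horiz, diff)
```

Given such a fitted curve, an operational gauge paired with a horizontal gauge can be corrected toward the pit-gauge reference, which is the parallel-observation scheme the abstract describes.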
Angular bias errors in three-component laser velocimeter measurements
Chen, C.Y.; Kim, P.J.; Walker, D.T.
1996-09-01
For three-component laser velocimeter systems, the change in projected area of the coincident measurement volume for different flow directions will introduce an angular bias in naturally sampled data. In this study, the effect of turbulence level and orientation of the measurement volumes on angular bias errors was examined. The operation of a typical three-component laser velocimeter was simulated using a Monte Carlo technique. Results for the specific configuration examined show that for turbulence levels less than 10% no significant bias errors in the mean velocities will occur and errors in the root-mean-square (r.m.s.) velocities will be less than 3% for all orientations. For turbulence levels less than 30%, component mean velocity bias errors less than 5% of the mean velocity vector magnitude can be attained with proper orientation of the measurement volume; however, the r.m.s. velocities may be in error as much as 10%. For turbulence levels above 50%, there is no orientation which will yield accurate estimates of all three mean velocities; component mean velocity errors as large as 15% of the mean velocity vector magnitude may be encountered.
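The Monte Carlo approach can be sketched with a deliberately simplified arrival-rate model: particle detections occur at a rate proportional to speed times the projected area of an ellipsoidal measurement volume along the flow direction. The volume shape, turbulence levels, and weighting are my assumptions for illustration, not the simulated instrument's actual configuration:

```python
import numpy as np

def projected_area(units, semi_axes):
    """Projected area of an ellipsoid (semi-axes a, b, c) viewed along each
    unit direction in `units` (one direction per row)."""
    a, b, c = semi_axes
    ux, uy, uz = units[:, 0], units[:, 1], units[:, 2]
    return np.pi * np.sqrt((b * c * ux)**2 + (a * c * uy)**2 + (a * b * uz)**2)

def sampled_mean_u(turbulence, n=200_000, seed=3):
    """Naturally-sampled mean streamwise velocity when the detection rate is
    proportional to speed times projected area of the measurement volume."""
    rng = np.random.default_rng(seed)
    v = np.array([10.0, 0.0, 0.0]) + rng.normal(0, turbulence * 10.0, (n, 3))
    speed = np.linalg.norm(v, axis=1)
    w = speed * projected_area(v / speed[:, None], (1.0, 1.0, 5.0))
    return np.sum(w * v[:, 0]) / np.sum(w)

# Relative bias of the naturally-sampled mean at two turbulence levels.
bias10 = abs(sampled_mean_u(0.10) - 10.0) / 10.0
bias50 = abs(sampled_mean_u(0.50) - 10.0) / 10.0
```

Consistent with the abstract, the sampling bias is negligible at low turbulence levels and grows substantially as the turbulence level increases, because the detection weight then varies strongly across the sampled velocity directions.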
Correcting a fundamental error in greenhouse gas accounting related to bioenergy
Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc; Cocco, Pierluigi; Desaubies, Yves; Henze, Mogens; Hertel, Ole; Johnson, Richard K.; Kastrup, Ulrike; Laconte, Pierre; Lange, Eckart; Novak, Peter; Paavola, Jouni; Reenberg, Anette; van den Hove, Sybille; Vermeire, Theo; Wadhams, Peter; Searchinger, Timothy
2012-01-01
Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of ‘additional biomass’ – biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy – can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy. PMID:23576835
Electrochemically modulated separations for material accountability measurements
Hazelton, Sandra G.; Liezers, Martin; Naes, Benjamin E.; Arrigo, Leah M.; Duckworth, Douglas C.
2012-07-08
A method for the accurate and timely analysis of accountable materials is critical for safeguards measurements in nuclear fuel reprocessing plants. Non-destructive analysis (NDA) methods, such as gamma spectroscopy, are desirable for their ability to produce near real-time data. However, the high gamma background of the actinides and fission products in spent nuclear fuel limits the use of NDA for real-time online measurements. A simple approach for at-line separation of materials would facilitate the use of at-line detection methods. A promising at-line separation method for plutonium and uranium is electrochemically modulated separations (EMS). Using an electrochemical cell with an anodized glassy carbon electrode, Pu and U oxidation states can be altered by applying an appropriate voltage. Because the affinity of the actinides for the electrode depends on their oxidation states, selective deposition can be turned “on” and “off” with changes in the applied target electrode voltage. A high surface-area cell was designed in-house for the separation of Pu from spent nuclear fuel. The cell is shown to capture over 1 µg of material, increasing the likelihood for gamma spectroscopic detection of Pu extracted from dissolver solutions. The large surface area of the electrode also reduces the impact of competitive interferences from some fission products. Flow rates of up to 1 mL min⁻¹ with >50% analyte deposition efficiency are possible, allowing for rapid separations to be effected. Results from the increased surface-area EMS cell are presented, including dilute dissolver solution simulant data.
A new indirect measure of diffusion model error
Kumar, A.; Morel, J. E.; Adams, M. L.
2013-07-01
We define a new indirect measure of the diffusion model error called the diffusion model error source. When this model error source is added to the diffusion equation, the transport solution for the angular-integrated intensity is obtained. This source represents a means by which a transport code can be used to generate information relating to the adequacy of diffusion theory for any given problem without actually solving the diffusion equation. The generation of this source does not relate in any way to acceleration of the iterative convergence of transport solutions. Perhaps the most well-known indirect measure of the diffusion model error is the variable-Eddington tensor. This tensor provides a great deal of information about the angular dependence of the angular intensity solution, but it is not always simple to interpret. In contrast, our diffusion model error source is a scalar that is conceptually easy to understand. In addition to defining the diffusion model error source analytically, we show how to generate this source numerically relative to the Sn radiative transfer equations with linear-discontinuous spatial discretization. This numerical source is computationally tested and shown to reproduce the Sn solution for a Marshak-wave problem.
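The defining property stated above ("added to the diffusion equation, the transport solution is obtained") can be written out. The notation below is mine, for a one-group, steady-state sketch; the paper's analytic definition may differ in detail:

```latex
% Let \phi_T be the angle-integrated transport solution, D the diffusion
% coefficient, \sigma_a the absorption cross section, and Q the source.
% The diffusion model error source S_{\mathrm{err}} is defined so that the
% diffusion equation, driven by Q + S_{\mathrm{err}}, reproduces \phi_T:
\[
  -\nabla\cdot D\,\nabla\phi_T + \sigma_a\,\phi_T = Q + S_{\mathrm{err}}
  \quad\Longrightarrow\quad
  S_{\mathrm{err}} = -\nabla\cdot D\,\nabla\phi_T + \sigma_a\,\phi_T - Q .
\]
% S_{\mathrm{err}} vanishes exactly where diffusion theory is adequate,
% so its magnitude is a pointwise scalar indicator of the model error.
```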
Error Evaluation of Methyl Bromide Aerodynamic Flux Measurements
Majewski, M.S.
1997-01-01
Methyl bromide volatilization fluxes were calculated for a tarped and a nontarped field using 2 and 4 hour sampling periods. These field measurements were averaged in 8, 12, and 24 hour increments to simulate longer sampling periods. The daily flux profiles were progressively smoothed, and the cumulative volatilization losses increased by 20 to 30% with each longer sampling period. Error associated with the original flux measurements was determined from linear regressions of measured wind speed and air concentration as a function of height, and averaged approximately 50%. The high errors resulted from long application times, which produced a nonuniform source strength, and from variable tarp permeability, which is influenced by temperature, moisture, and thickness. The increases in cumulative volatilization losses that resulted from longer sampling periods were within the experimental error of the flux determination method.
50 CFR 648.103 - Summer flounder accountability measures.
Code of Federal Regulations, 2013 CFR
2013-10-01
NASA Astrophysics Data System (ADS)
Noble, Jack H.; Warren, Frank M.; Labadie, Robert F.; Dawant, Benoit; Fitzpatrick, J. Michael
2007-03-01
In cochlear implant surgery an electrode array is permanently implanted to stimulate the auditory nerve and allow deaf people to hear. Current surgical techniques require wide excavation of the mastoid region of the temporal bone and one to three hours' time to avoid damage to vital structures. Recently a far less invasive approach has been proposed: percutaneous cochlear access, in which a single hole is drilled from the skull surface to the cochlea. The drill path is determined by attaching a fiducial system to the patient's skull and then choosing, on a pre-operative CT, an entry point and a target point. The drill is advanced to the target, the electrodes placed through the hole, and a stimulator implanted at the surface of the skull. The major challenge is the determination of a safe and effective drill path, which with high probability avoids specific vital structures (the facial nerve, the ossicles, and the external ear canal) and arrives at the basal turn of the cochlea. These four features lie within a few millimeters of each other, the drill is one millimeter in diameter, and errors in the determination of the target position are on the order of 0.5 mm root-mean-square. Thus, path selection is both difficult and critical to the success of the surgery. This paper presents a method for finding optimally safe and effective paths while accounting for target positioning error.
Objective and Subjective Refractive Error Measurements in Monkeys
Hung, Li-Fang; Ramamirtham, Ramkumar; Wensveen, Janice M.; Harwerth, Ronald S.; Smith, Earl L.
2011-01-01
Purpose To better understand the functional significance of refractive-error measures obtained using common objective methods in laboratory animals, we compared objective and subjective measures of refractive error in adolescent rhesus monkeys. Methods The subjects were 20 adolescent monkeys. Spherical-equivalent spectacle-plane refractive corrections were measured by retinoscopy and autorefraction while the animals were cyclopleged and anesthetized. The eye’s axial dimensions were measured by A-Scan ultrasonography. Subjective measures of the eye’s refractive state, with and without cycloplegia, were obtained using psychophysical methods. Specifically, we measured spatial contrast sensitivity as a function of spectacle lens power for relatively high spatial frequency gratings. The lens power that produced the highest contrast sensitivity was taken as the subjective refraction. Results Retinoscopy and autorefraction consistently yielded higher amounts of hyperopia relative to subjective measurements obtained with or without cycloplegia. The subjective refractions were not affected by cycloplegia and on average were 1.42 ± 0.61 D and 1.24 ± 0.62 D less hyperopic than the retinoscopy and autorefraction measurements, respectively. Repeating the retinoscopy and subjective measurements through 3 mm artificial pupils produced similar differences. Conclusions The results show that commonly used objective methods for assessing refractive errors in monkeys significantly overestimate the degree of hyperopia. It is likely that multiple factors contributed to the hyperopic bias associated with these objective measurements. However, the magnitude of the hyperopic bias was in general agreement with the “small-eye artifact” of retinoscopy. PMID:22198796
Cumulative Measurement Errors for Dynamic Testing of Space Flight Hardware
NASA Technical Reports Server (NTRS)
Winnitoy, Susan
2012-01-01
measurements during hardware motion and contact. While performing dynamic testing of an active docking system, researchers found that the data from the motion platform, test hardware and two external measurement systems exhibited frame offsets and rotational errors. While the errors were relatively small when considering the motion scale overall, they substantially exceeded the individual accuracies for each component. After evaluating both the static and dynamic measurements, researchers found that the static measurements introduced significantly more error into the system than the dynamic measurements even though, in theory, the static measurement errors should be smaller than the dynamic. In several cases, the magnitude of the errors varied widely for the static measurements. Upon further investigation, researchers found the larger errors to be a consequence of hardware alignment issues, frame location and measurement technique whereas the smaller errors were dependent on the number of measurement points. This paper details and quantifies the individual and cumulative errors of the docking system and describes methods for reducing the overall measurement error. The overall quality of the dynamic docking tests for flight hardware verification was improved by implementing these error reductions.
Wave-front measurement errors from restricted concentric subdomains.
Goldberg, K A; Geary, K
2001-09-01
In interferometry and optical testing, system wave-front measurements that are analyzed on a restricted subdomain of the full pupil can include predictable systematic errors. In nearly all cases, the measured rms wave-front error and the magnitudes of the individual aberration polynomial coefficients underestimate the wave-front error magnitudes present in the full-pupil domain. We present an analytic method to determine the relationships between the coefficients of aberration polynomials defined on the full-pupil domain and those defined on a restricted concentric subdomain. In this way, systematic wave-front measurement errors introduced by subregion selection are investigated. Using vector and matrix representations for the wave-front aberration coefficients, we generalize the method to the study of arbitrary input wave fronts and subdomain sizes. While wave-front measurements on a restricted subdomain are insufficient for predicting the wave front of the full-pupil domain, studying the relationship between known full-pupil wave fronts and subdomain wave fronts allows us to set subdomain size limits for arbitrary measurement fidelity. PMID:11551047
Optimal measurement strategies for effective suppression of drift errors.
Yashchuk, Valeriy V
2009-11-01
Drifting of experimental setups with change in temperature or other environmental conditions is the limiting factor of many, if not all, precision measurements. The measurement error due to a drift is, in some sense, in-between random noise and systematic error. In the general case, the error contribution of a drift cannot be averaged out using a number of measurements identically carried out over a reasonable time. In contrast to systematic errors, drifts are usually not stable enough for a precise calibration. Here a rather general method for effective suppression of the spurious effects caused by slow drifts in a large variety of instruments and experimental setups is described. An analytical derivation of an identity, describing the optimal measurement strategies suitable for suppressing the contribution of a slow drift described with a certain order polynomial function, is presented. A recursion rule as well as a general mathematical proof of the identity is given. The effectiveness of the discussed method is illustrated with an application of the derived optimal scanning strategies to precise surface slope measurements with a surface profiler. PMID:19947751
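The kind of strategy described above can be illustrated with a minimal sketch: for equally spaced measurements, alternating binomial weights form an n-th finite difference, which annihilates any polynomial drift of degree below n. This is an assumed simple instance of polynomial-drift suppression, not the paper's full derivation or recursion rule:

```python
from math import comb

def drift_cancelling_weights(n):
    """Weights (-1)^k * C(n, k): the n-th finite difference,
    which cancels any polynomial drift of degree < n."""
    return [(-1) ** k * comb(n, k) for k in range(n + 1)]

def combine(measurements, weights):
    """Weighted combination of equally spaced measurements."""
    return sum(w * m for w, m in zip(weights, measurements))

# quadratic drift d(t) = 1 + 2t + 3t^2 sampled at t = 0..3
drift = [1 + 2 * t + 3 * t ** 2 for t in range(4)]
w = drift_cancelling_weights(3)   # cancels drift of degree <= 2
residual = combine(drift, w)      # -> 0: the drift drops out exactly
```

In practice the same weights are applied to measurements that contain both signal and drift, so only the drift contribution cancels.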
Estimation of discretization errors in contact pressure measurements.
Fregly, Benjamin J; Sawyer, W Gregory
2003-04-01
Contact pressure measurements in total knee replacements are often made using a discrete sensor such as the Tekscan K-Scan sensor. However, no method currently exists for predicting the magnitude of sensor discretization errors in contact force, peak pressure, average pressure, and contact area, making it difficult to evaluate the accuracy of such measurements. This study identifies a non-dimensional area variable, defined as the ratio of the number of perimeter elements to the total number of elements with pressure, which can be used to predict these errors. The variable was evaluated by simulating discrete pressure sensors subjected to Hertzian and uniform pressure distributions with two different calibration procedures. The simulations systematically varied the size of the sensor elements, the contact ellipse aspect ratio, and the ellipse's location on the sensor grid. In addition, contact pressure measurements made with a K-Scan sensor on four different total knee designs were used to evaluate the magnitude of discretization errors under practical conditions. The simulations predicted a strong power law relationship (r² > 0.89) between worst-case discretization errors and the proposed non-dimensional area variable. In the total knee experiments, predicted discretization errors were on the order of 1-4% for contact force and peak pressure and 3-9% for average pressure and contact area. These errors are comparable to those arising from inserting a sensor into the joint space or truncating pressures with pressure sensitive film. The reported power law regression coefficients provide a simple way to estimate the accuracy of experimental measurements made with discrete pressure sensors when the contact patch is approximately elliptical. PMID:12600352
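The proposed non-dimensional area variable can be computed directly from a discretized pressure map; the function name and the synthetic circular contact patch below are illustrative, not the authors' implementation:

```python
def perimeter_ratio(pressure):
    """Ratio of perimeter elements to total elements with pressure,
    the non-dimensional variable proposed for predicting errors."""
    rows, cols = len(pressure), len(pressure[0])

    def loaded(i, j):
        return 0 <= i < rows and 0 <= j < cols and pressure[i][j] > 0

    total = perimeter = 0
    for i in range(rows):
        for j in range(cols):
            if not loaded(i, j):
                continue
            total += 1
            # an element is on the perimeter if any 4-neighbour is unloaded
            if not all(loaded(i + di, j + dj)
                       for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))):
                perimeter += 1
    return perimeter / total

# circular contact patch of radius 6 elements on a 15x15 sensor grid;
# coarser grids give larger ratios, hence larger discretization errors
grid = [[1.0 if (i - 7) ** 2 + (j - 7) ** 2 <= 36 else 0.0
         for j in range(15)] for i in range(15)]
ratio = perimeter_ratio(grid)
```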
The effect of measurement error on surveillance metrics
Weaver, Brian Phillip; Hamada, Michael S.
2012-04-24
The purpose of this manuscript is to describe different simulation studies that CCS-6 has performed to understand the effects of measurement error on the surveillance metrics. We assume that the measured items come from a larger population of items. We denote the random variable associated with an item's value of an attribute of interest as X, with X ~ N(μ, σ²). This distribution represents the variability in the population of interest, and we wish to make inference on the parameters μ and σ or on some function of these parameters. When an item X is selected from the larger population, a measurement is made on some attribute of it. This measurement is made with error, and the true value of X is not observed. The rest of this section presents simulation results for different measurement cases encountered.
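The basic setup, a true value X ~ N(μ, σ²) observed with additive measurement error, can be sketched as a small simulation; the function name and parameter values are illustrative. With error e ~ N(0, τ²), the observed variance is inflated to σ² + τ²:

```python
import random
import statistics

def simulate_observed_sd(mu, sigma, tau, n, seed=0):
    """Observed values Y = X + e with X ~ N(mu, sigma^2) and
    measurement error e ~ N(0, tau^2); Var(Y) = sigma^2 + tau^2."""
    rng = random.Random(seed)
    ys = [rng.gauss(mu, sigma) + rng.gauss(0.0, tau) for _ in range(n)]
    return statistics.stdev(ys)

# with sigma = tau = 1, the observed sd is near sqrt(2) ~ 1.414,
# inflated relative to the population sd of 1
sd = simulate_observed_sd(mu=10.0, sigma=1.0, tau=1.0, n=20000)
```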
Three Approximations of Standard Error of Measurement: An Empirical Approach.
ERIC Educational Resources Information Center
Garvin, Alfred D.
Three successively simpler formulas for approximating the standard error of measurement were derived by applying successively more simplifying assumptions to the standard formula based on the standard deviation and the Kuder-Richardson formula 20 estimate of reliability. The accuracy of each of these three formulas, with respect to the standard…
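The standard formula that such approximations start from is SEM = SD·sqrt(1 − r), with r a reliability estimate such as KR-20; a one-line sketch with illustrative values:

```python
from math import sqrt

def sem(sd, reliability):
    """Standard error of measurement: SEM = SD * sqrt(1 - r),
    with r a reliability estimate such as KR-20."""
    return sd * sqrt(1.0 - reliability)

# e.g. SD = 10 and KR-20 reliability = 0.91 give SEM = 3.0
example = sem(10.0, 0.91)
```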
GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS
Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...
Nonparametric Item Response Curve Estimation with Correction for Measurement Error
ERIC Educational Resources Information Center
Guo, Hongwen; Sinharay, Sandip
2011-01-01
Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally.…
Correcting for Measurement Error in Time-Varying Covariates in Marginal Structural Models.
Kyle, Ryan P; Moodie, Erica E M; Klein, Marina B; Abrahamowicz, Michał
2016-08-01
Unbiased estimation of causal parameters from marginal structural models (MSMs) requires a fundamental assumption of no unmeasured confounding. Unfortunately, the time-varying covariates used to obtain inverse probability weights are often error-prone. Although substantial measurement error in important confounders is known to undermine control of confounders in conventional unweighted regression models, this issue has received comparatively limited attention in the MSM literature. Here we propose a novel application of the simulation-extrapolation (SIMEX) procedure to address measurement error in time-varying covariates, and we compare 2 approaches. The direct approach to SIMEX-based correction targets outcome model parameters, while the indirect approach corrects the weights estimated using the exposure model. We assess the performance of the proposed methods in simulations under different clinically plausible assumptions. The simulations demonstrate that measurement errors in time-dependent covariates may induce substantial bias in MSM estimators of causal effects of time-varying exposures, and that both proposed SIMEX approaches yield practically unbiased estimates in scenarios featuring low-to-moderate degrees of error. We illustrate the proposed approach in a simple analysis of the relationship between sustained virological response and liver fibrosis progression among persons infected with hepatitis C virus, while accounting for measurement error in γ-glutamyltransferase, using data collected in the Canadian Co-infection Cohort Study from 2003 to 2014. PMID:27416840
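The SIMEX idea, re-adding error of known variance and extrapolating back to the point of zero total measurement error, can be sketched for a simple linear regression. This is a toy instance with linear extrapolation, not the authors' MSM implementation; all names and parameter values are illustrative:

```python
import random

def fit_slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def simex_slope(w, y, tau, lambdas=(0.5, 1.0, 1.5, 2.0), b=40, seed=1):
    """SIMEX sketch: re-add error of variance lambda*tau^2, record the
    naive slope at each lambda, then extrapolate linearly to lambda = -1
    (the point of zero total measurement error)."""
    rng = random.Random(seed)
    pts = [(0.0, fit_slope(w, y))]
    for lam in lambdas:
        sl = [fit_slope([wi + rng.gauss(0.0, lam ** 0.5 * tau) for wi in w], y)
              for _ in range(b)]
        pts.append((lam, sum(sl) / b))
    a = fit_slope([l for l, _ in pts], [s for _, s in pts])
    mean_l = sum(l for l, _ in pts) / len(pts)
    mean_s = sum(s for _, s in pts) / len(pts)
    return mean_s - a * mean_l - a   # intercept + slope * (-1)

# toy data: true slope 1, covariate observed with error sd tau = 0.5,
# so the naive slope is attenuated toward sigma^2/(sigma^2+tau^2) = 0.8
rng = random.Random(0)
x = [rng.gauss(0.0, 1.0) for _ in range(4000)]
y = [xi + rng.gauss(0.0, 0.2) for xi in x]
w = [xi + rng.gauss(0.0, 0.5) for xi in x]
naive = fit_slope(w, y)
corrected = simex_slope(w, y, tau=0.5)   # pulled back toward 1
```

A quadratic extrapolant in lambda would recover more of the attenuation; the linear version is kept here for brevity.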
Comparing measurement errors for formants in synthetic and natural vowels.
Shadle, Christine H; Nam, Hosung; Whalen, D H
2016-02-01
The measurement of formant frequencies of vowels is among the most common measurements in speech studies, but measurements are known to be biased by the particular fundamental frequency (F0) exciting the formants. Approaches to reducing the errors were assessed in two experiments. In the first, synthetic vowels were constructed with five different first formant (F1) values and nine different F0 values; formant bandwidths, and higher formant frequencies, were constant. Input formant values were compared to manual measurements and automatic measures using the linear prediction coding-Burg algorithm, linear prediction closed-phase covariance, the weighted linear prediction-attenuated main excitation (WLP-AME) algorithm [Alku, Pohjalainen, Vainio, Laukkanen, and Story (2013). J. Acoust. Soc. Am. 134(2), 1295-1313], spectra smoothed cepstrally and by averaging repeated discrete Fourier transforms. Formants were also measured manually from pruned reassigned spectrograms (RSs) [Fulop (2011). Speech Spectrum Analysis (Springer, Berlin)]. All but WLP-AME and RS had large errors in the direction of the strongest harmonic; the smallest errors occur with WLP-AME and RS. In the second experiment, these methods were used on vowels in isolated words spoken by four speakers. Results for the natural speech show that F0 bias affects all automatic methods, including WLP-AME; only the formants measured manually from RS appeared to be accurate. In addition, RS coped better with weaker formants and glottal fry. PMID:26936555
Error Correction for Foot Clearance in Real-Time Measurement
NASA Astrophysics Data System (ADS)
Wahab, Y.; Bakar, N. A.; Mazalan, M.
2014-04-01
Mobility performance level, fall-related injuries, undetected disease and aging stage can be detected through examination of the gait pattern. The gait pattern is normally directly related to the condition of the lower limbs, in addition to other significant factors. For that reason, the foot is the most important part for in-situ gait analysis measurement systems and thus directly affects the gait pattern. This paper reviews the development of an ultrasonic system with error correction using an inertial measurement unit for real-life measurement of foot clearance in gait analysis. The paper begins with the related literature, where the necessity of the measurement is introduced. This is followed by the methodology, the problem and its solution. Next, the paper explains the experimental setup for error correction using the proposed instrumentation, with results and discussion. Finally, it outlines planned future work.
Errors in ellipsometry measurements made with a photoelastic modulator
Modine, F.A.; Jellison, G.E. Jr; Gruzalski, G.R.
1983-07-01
The equations governing ellipsometry measurements made with a photoelastic modulator are presented in a simple but general form. These equations are used to study the propagation of both systematic and random errors, and an assessment of the accuracy of the ellipsometer is made. A basis is provided for choosing among various ellipsometer configurations, measurement procedures, and methods of data analysis. Several new insights into the performance of this type of ellipsometer are supplied.
Effects of measurement errors on microwave antenna holography
NASA Technical Reports Server (NTRS)
Rochblatt, David J.; Rahmat-Samii, Yahya
1991-01-01
The effects of measurement errors appearing during the implementation of the microwave holographic technique are investigated in detail, and many representative results are presented based on computer simulations. The numerical results are tailored for cases applicable to the utilization of the holographic technique for NASA's Deep Space Network antennas, although the methodology of analysis is applicable to any antenna. Many system measurement topics are presented and summarized.
Error reduction in gamma-spectrometric measurements of nuclear materials enrichment
NASA Astrophysics Data System (ADS)
Zaplatkina, D.; Semenov, A.; Tarasova, E.; Zakusilov, V.; Kuznetsov, M.
2016-06-01
The paper provides an analysis of the uncertainty in determining the enrichment of uranium samples using non-destructive methods, to ensure the functioning of the nuclear materials accounting and control system. The measurements were performed with a scintillation detector based on a sodium iodide crystal and with a semiconductor germanium detector. Samples containing uranium oxide of different masses were used for the measurements. Statistical analysis of the results showed that the maximum enrichment error in a scintillation detector measurement can reach 82%. The bias correction, calculated from the data obtained by the semiconductor detector, reduces the error in the determination of uranium enrichment by 47.2% on average. Thus, the use of a bias correction calculated by statistical methods allows scintillation detectors to be used for nuclear materials accounting and control.
Estimation of coherent error sources from stabilizer measurements
NASA Astrophysics Data System (ADS)
Orsucci, Davide; Tiersch, Markus; Briegel, Hans J.
2016-04-01
In the context of measurement-based quantum computation a way of maintaining the coherence of a graph state is to measure its stabilizer operators. Aside from performing quantum error correction, it is possible to exploit the information gained from these measurements to characterize and then counteract a coherent source of errors; that is, to determine all the parameters of an error channel that applies a fixed—but unknown—unitary operation to the physical qubits. Such a channel is generated, e.g., by local stray fields that act on the qubits. We study the case in which each qubit of a given graph state may see a different error channel and we focus on channels given by a rotation on the Bloch sphere around either the x̂, ŷ, or ẑ axis, for which analytical results can be given in a compact form. The possibility of reconstructing the channels at all qubits depends nontrivially on the topology of the graph state. We prove via perturbation methods that the reconstruction process is robust and supplement the analytic results with numerical evidence.
NASA Astrophysics Data System (ADS)
Hou, Zhendong; Wang, Zhaokui; Zhang, Yulin
2016-09-01
To meet the very demanding requirements of space gravity detection, the gravitational reference sensor (GRS), as the key payload, needs to provide the relative position of the proof mass with extraordinarily high precision and low disturbance. The position determination and error analysis for a GRS with a spherical proof mass are addressed. First, the concept of measuring the freely falling proof mass with optical shadow sensors is presented. Then, based on the optical signal model, the general formula for position determination is derived. Two types of measurement system are proposed, for which an analytical solution for the three-dimensional position can be obtained. Third, under the assumption of Gaussian beams, error propagation models are given for the variation of spot size and optical power, the effect of beam divergence, the chattering of the beam center, and the deviation of the beam direction. Finally, numerical simulations taking into account the model uncertainty of beam divergence, the spherical edge and beam diffraction are carried out to validate the performance of the error propagation models. The results show that these models can be used to estimate the effect of each error source with an acceptable accuracy, better than 20%. Moreover, the simulation of three-dimensional position determination with one of the proposed measurement systems shows that the position error is comparable to the error of the output of each sensor.
Moroni, Rossana; Blomstedt, Paul; Wilhelm, Lars; Reinikainen, Tapani; Sippola, Erkki; Corander, Jukka
2010-10-10
Headspace gas chromatographic measurements of ethanol content in blood specimens from suspect drunk drivers are routinely carried out in forensic laboratories. In the widely established standard statistical framework, measurement errors in such data are represented by Gaussian distributions for the population of blood specimens at any given level of ethanol content. It is known that the variance of measurement errors increases as a function of the level of ethanol content, and the standard statistical approach addresses this issue by replacing the unknown population variances by estimates derived from a large sample using a linear regression model. Appropriate statistical analysis of the systematic and random components in the measurement errors is necessary in order to guarantee legally sound security corrections reported to the police authority. Here we address this issue by developing a novel statistical approach that takes into account any potential non-linearity in the relationship between the level of ethanol content and the variability of measurement errors. Our method is based on standard non-parametric kernel techniques for density estimation, using a large database of laboratory measurements for blood specimens. Furthermore, we also address the issue of systematic errors in the measurement process by a statistical model that incorporates the sign of the error term in the security correction calculations. Analysis of a set of certified reference material (CRM) blood samples demonstrates the importance of explicitly handling the direction of the systematic errors in establishing the statistical uncertainty about the true level of ethanol content. Use of our statistical framework to aid quality control in the laboratory is also discussed. PMID:20494532
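The non-parametric ingredient described above, estimating how the error variance changes with the measured level, can be sketched with a Nadaraya-Watson style kernel estimator; the function name, bandwidth and synthetic data are illustrative, not the authors' method:

```python
import random
from math import exp

def kernel_error_variance(levels, errors, x, h):
    """Gaussian-kernel-weighted estimate of Var(error | level = x):
    a local weighted variance of observed errors near level x."""
    w = [exp(-0.5 * ((l - x) / h) ** 2) for l in levels]
    s = sum(w)
    m = sum(wi * e for wi, e in zip(w, errors)) / s
    return sum(wi * (e - m) ** 2 for wi, e in zip(w, errors)) / s

# synthetic data in which the error sd grows with the level,
# mimicking variance that increases with ethanol content
rng = random.Random(0)
levels = [rng.uniform(0.0, 2.0) for _ in range(5000)]
errors = [rng.gauss(0.0, 0.1 * (1.0 + l)) for l in levels]
v_low = kernel_error_variance(levels, errors, 0.25, 0.2)
v_high = kernel_error_variance(levels, errors, 1.75, 0.2)   # larger
```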
Surface measurement errors using commercial scanning white light interferometers
NASA Astrophysics Data System (ADS)
Gao, F.; Leach, R. K.; Petzing, J.; Coupland, J. M.
2008-01-01
This paper examines the performance of commercial scanning white light interferometers in a range of measurement tasks. A step height artefact is used to investigate the response of the instruments at a discontinuity, while gratings with sinusoidal and rectangular profiles are used to investigate the effects of surface gradient and spatial frequency. Results are compared with measurements made with tapping mode atomic force microscopy and discrepancies are discussed with reference to error mechanisms put forward in the published literature. As expected, it is found that most instruments report errors when used in regions close to a discontinuity or those with a surface gradient that is large compared to the acceptance angle of the objective lens. Amongst other findings, however, we report systematic errors that are observed when the surface gradient is considerably smaller. Although these errors are typically less than the mean wavelength, they are significant compared to the vertical resolution of the instrument and indicate that current scanning white light interferometers should be used with some caution if sub-wavelength accuracy is required.
Error and uncertainty in Raman thermal conductivity measurements
Thomas Edwin Beechem; Yates, Luke; Graham, Samuel
2015-04-22
We investigated error and uncertainty in Raman thermal conductivity measurements via finite element based numerical simulation of two geometries often employed -- Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter -- termed the Raman stress factor -- is derived to identify when stress effects will induce large levels of error. Together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.
Putting reward in art: A tentative prediction error account of visual art
Van de Cruys, Sander; Wagemans, Johan
2011-01-01
The predictive coding model is increasingly and fruitfully used to explain a wide range of findings in perception. Here we discuss the potential of this model in explaining the mechanisms underlying aesthetic experiences. Traditionally art appreciation has been associated with concepts such as harmony, perceptual fluency, and the so-called good Gestalt. We observe that more often than not great artworks blatantly violate these characteristics. Using the concept of prediction error from the predictive coding approach, we attempt to resolve this contradiction. We argue that artists often destroy predictions that they have first carefully built up in their viewers, and thus highlight the importance of negative affect in aesthetic experience. However, the viewer often succeeds in recovering the predictable pattern, sometimes on a different level. The ensuing rewarding effect is derived from this transition from a state of uncertainty to a state of increased predictability. We illustrate our account with several example paintings and with a discussion of art movements and individual differences in preference. On a more fundamental level, our theorizing leads us to consider the affective implications of prediction confirmation and violation. We compare our proposal to other influential theories on aesthetics and explore its advantages and limitations. PMID:23145260
Systematic errors in precipitation measurements with different rain gauge sensors
NASA Astrophysics Data System (ADS)
Sungmin, O.; Foelsche, Ulrich
2015-04-01
Ground-level rain gauges provide the most direct measurement of precipitation, and such datasets are therefore often used to evaluate precipitation estimates from remote sensing and climate model simulations. However, precipitation measured by national standard gauge networks is constrained by their spatial density. For this reason, in order to measure precipitation accurately it is essential to understand the performance and reliability of rain gauges. This study aims to assess the systematic errors between measurements taken with different rain gauge sensors. We mainly address extreme precipitation events, as these are connected with high uncertainties in the measurements. Precipitation datasets for the study are available from WegenerNet, a dense network of 151 meteorological stations within an area of about 20 km × 15 km centred near the city of Feldbach in southeast Austria. The WegenerNet has a horizontal resolution of about 1.4 km and employs tipping-bucket rain gauges with three different types of sensors; a reference station provides measurements from all sensor types. The results illustrate systematic errors via comparison of the precipitation datasets obtained with the different sensor types, with the analyses carried out by direct comparison of the datasets from the reference station. In addition, the dependence of the systematic errors on meteorological conditions, e.g. precipitation intensity and wind speed, is investigated to assess the feasibility of applying the WegenerNet datasets to the study of extreme precipitation events. The study can be regarded as pre-processing research for further work in hydro-meteorological applications that require high-resolution precipitation datasets, such as satellite/radar-derived precipitation validation and hydrodynamic modelling.
Minimax Mean-Squared Error Location Estimation Using TOA Measurements
NASA Astrophysics Data System (ADS)
Shen, Chih-Chang; Chang, Ann-Chen
This letter deals with mobile location estimation based on a minimax mean-squared error (MSE) algorithm using time-of-arrival (TOA) measurements to mitigate non-line-of-sight (NLOS) effects in cellular systems. Simulation results illustrate that the minimax MSE estimator yields better performance than other least squares and weighted least squares estimators under relatively low signal-to-noise ratio and moderate NLOS conditions.
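For reference, the least-squares TOA baseline that such estimators are compared against can be sketched by linearizing the range equations; the anchor layout and coordinates below are illustrative:

```python
def toa_least_squares(anchors, ranges):
    """Linearized least-squares TOA position fix in 2-D. Subtracting
    the first range equation from the others yields a linear system
    A p = b, solved here via the 2x2 normal equations."""
    (x0, y0), r0 = anchors[0], ranges[0]
    rows, rhs = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        rows.append((2 * (xi - x0), 2 * (yi - y0)))
        rhs.append(r0 ** 2 - ri ** 2 + xi ** 2 - x0 ** 2 + yi ** 2 - y0 ** 2)
    a11 = sum(r[0] * r[0] for r in rows)
    a12 = sum(r[0] * r[1] for r in rows)
    a22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * v for r, v in zip(rows, rhs))
    b2 = sum(r[1] * v for r, v in zip(rows, rhs))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# four corner anchors, noise-free ranges from the true position (3, 4)
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
ranges = [((3.0 - x) ** 2 + (4.0 - y) ** 2) ** 0.5 for x, y in anchors]
pos = toa_least_squares(anchors, ranges)   # -> (3.0, 4.0)
```

NLOS bias adds a positive offset to some ranges, which is what degrades this baseline and motivates the minimax MSE formulation.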
Detecting correlated errors in state-preparation-and-measurement tomography
NASA Astrophysics Data System (ADS)
Jackson, Christopher; van Enk, S. J.
2015-10-01
Whereas in standard quantum-state tomography one estimates an unknown state by performing various measurements with known devices, and whereas in detector tomography one estimates the positive-operator-valued-measurement elements of a measurement device by subjecting to it various known states, we consider here the case of SPAM (state preparation and measurement) tomography where neither the states nor the measurement device are assumed known. For d-dimensional systems measured by d-outcome detectors, we find there are at most d²(d² − 1) "gauge" parameters that can never be determined by any such experiment, irrespective of the number of unknown states and unknown devices. For the case d = 2 we find gauge-invariant quantities that can be accessed directly experimentally and that can be used to detect and describe SPAM errors. In particular, we identify conditions whose violations detect the presence of correlations between SPAM errors. From the perspective of SPAM tomography, standard quantum-state tomography and detector tomography are protocols that fix the gauge parameters through the assumption that some set of fiducial measurements is known or that some set of fiducial states is known, respectively.
PROCESSING AND ANALYSIS OF THE MEASURED ALIGNMENT ERRORS FOR RHIC.
PILAT,F.; HEMMER,M.; PTITSIN,V.; TEPIKIAN,S.; TRBOJEVIC,D.
1999-03-29
All elements of the Relativistic Heavy Ion Collider (RHIC) have been installed in ideal survey locations, which are defined as the optimum locations of the fiducials with respect to the positions generated by the design. The alignment process included the presurvey of all elements that could affect the beams. During this procedure, special attention was paid to the precise determination of the quadrupole centers as well as the roll angles of the quadrupoles and dipoles. After installation, the machine was surveyed, and the resulting as-built measured positions of the fiducials have been stored and structured in the survey database. We describe how the alignment errors, inferred by comparison of ideal and as-built data, have been processed and analyzed by including them in the RHIC modeling software. The RHIC model, which also includes individual measured errors for all magnets in the machine and is automatically generated from databases, allows the study of the impact of the measured alignment errors on the machine.
ERIC Educational Resources Information Center
Battauz, Michela; Bellio, Ruggero
2011-01-01
This paper proposes a structural analysis for generalized linear models when some explanatory variables are measured with error and the measurement error variance is a function of the true variables. The focus is on latent variables investigated on the basis of questionnaires and estimated using item response theory models. Latent variable…
ERIC Educational Resources Information Center
Steinhauser, Marco; Maier, Martin; Hubner, Ronald
2008-01-01
The present study investigated the mechanisms underlying error detection in the error signaling response. The authors tested between a response monitoring account and a conflict monitoring account. By implementing each account within the neural network model of N. Yeung, M. M. Botvinick, and J. D. Cohen (2004), they demonstrated that both accounts…
Lyles, Robert H; Van Domelen, Dane; Mitchell, Emily M; Schisterman, Enrique F
2015-11-01
Pooling biological specimens prior to performing expensive laboratory assays has been shown to be a cost effective approach for estimating parameters of interest. In addition to requiring specialized statistical techniques, however, the pooling of samples can introduce assay errors due to processing, possibly in addition to measurement error that may be present when the assay is applied to individual samples. Failure to account for these sources of error can result in biased parameter estimates and ultimately faulty inference. Prior research addressing biomarker mean and variance estimation advocates hybrid designs consisting of individual as well as pooled samples to account for measurement and processing (or pooling) error. We consider adapting this approach to the problem of estimating a covariate-adjusted odds ratio (OR) relating a binary outcome to a continuous exposure or biomarker level assessed in pools. In particular, we explore the applicability of a discriminant function-based analysis that assumes normal residual, processing, and measurement errors. A potential advantage of this method is that maximum likelihood estimation of the desired adjusted log OR is straightforward and computationally convenient. Moreover, in the absence of measurement and processing error, the method yields an efficient unbiased estimator for the parameter of interest assuming normal residual errors. We illustrate the approach using real data from an ancillary study of the Collaborative Perinatal Project, and we use simulations to demonstrate the ability of the proposed estimators to alleviate bias due to measurement and processing error. PMID:26593934
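A toy simulation (all parameter values assumed, not taken from the study) shows why ignoring processing and measurement error biases estimates from pooled assays, here for the simpler case of variance estimation that the cited prior research addresses:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 10.0, 2.0          # biomarker mean / SD across subjects (assumed)
sigma_p, sigma_m = 0.5, 0.3    # processing and measurement error SDs (assumed)
k, n_pools = 4, 200_000        # pool size and number of pools

subjects = rng.normal(mu, sigma, size=(n_pools, k))
pool_true = subjects.mean(axis=1)                    # physical pooling averages
assay = pool_true + rng.normal(0, sigma_p, n_pools) \
                  + rng.normal(0, sigma_m, n_pools)  # pooled assay with both errors

# Var(assay) = sigma^2/k + sigma_p^2 + sigma_m^2, so scaling by k without
# accounting for the error terms inflates the biomarker variance estimate.
naive_var = k * assay.var()
corrected = k * (assay.var() - sigma_p**2 - sigma_m**2)
```

The naive estimate is inflated by k(σ_p² + σ_m²); the correction assumes the error variances are known or estimable, e.g. from the individual samples of a hybrid design.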
Advancing the science of measurement of diagnostic errors in healthcare: the Safer Dx framework
Singh, Hardeep; Sittig, Dean F
2015-01-01
Diagnostic errors are major contributors to harmful patient outcomes, yet they remain a relatively understudied and unmeasured area of patient safety. Although they are estimated to affect about 12 million Americans each year in ambulatory care settings alone, both the conceptual and pragmatic scientific foundation for their measurement is under-developed. Health care organizations do not have the tools and strategies to measure diagnostic safety and most have not integrated diagnostic error into their existing patient safety programs. Further progress toward reducing diagnostic errors will hinge on our ability to overcome measurement-related challenges. In order to lay a robust groundwork for measurement and monitoring techniques to ensure diagnostic safety, we recently developed a multifaceted framework to advance the science of measuring diagnostic errors (The Safer Dx framework). In this paper, we describe how the framework serves as a conceptual foundation for system-wide safety measurement, monitoring and improvement of diagnostic error. The framework accounts for the complex adaptive sociotechnical system in which diagnosis takes place (the structure), the distributed process dimensions in which diagnoses evolve beyond the doctor's visit (the process) and the outcomes of a correct and timely “safe diagnosis” as well as patient and health care outcomes (the outcomes). We posit that the Safer Dx framework can be used by a variety of stakeholders including researchers, clinicians, health care organizations and policymakers, to stimulate both retrospective and more proactive measurement of diagnostic errors. The feedback and learning that would result will help develop subsequent interventions that lead to safer diagnosis, improved value of health care delivery and improved patient outcomes. PMID:25589094
Uncertainty in measurement and total error - are they so incompatible?
Farrance, Ian; Badrick, Tony; Sikaris, Kenneth A
2016-08-01
There appears to be a growing debate with regard to the use of "Westgard style" total error and "GUM style" uncertainty in measurement. Some may argue that the two approaches are irreconcilable. The recent appearance of an article "Quality goals at the crossroads: growing, going, or gone" on the well-regarded Westgard Internet site requires some comment. In particular, a number of assertions that relate to ISO 15189 and uncertainty in measurement appear misleading. An alternate view of the key issues raised by Westgard may serve to guide and enlighten others who may accept such statements at face value. PMID:27227711
Considering Measurement Model Parameter Errors in Static and Dynamic Systems
NASA Astrophysics Data System (ADS)
Woodbury, Drew P.; Majji, Manoranjan; Junkins, John L.
2011-07-01
In static systems, state values are estimated using traditional least squares techniques based on a redundant set of measurements. Inaccuracies in measurement model parameter estimates can lead to significant errors in the state estimates. This paper describes a technique that considers these parameters in a modified least squares framework. It is also shown that this framework leads to the minimum variance solution. Both batch and sequential (recursive) least squares methods are described. One static system and one dynamic system are used as examples to show the benefits of the consider least squares methodology.
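A minimal numerical sketch of the consider idea (our own toy problem, not the paper's examples): a plain least-squares line fit whose measurements share an unestimated bias parameter. The formal covariance that ignores the bias understates the actual estimate error, while adding the consider term recovers it:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 20)
H = np.column_stack([np.ones_like(t), t])   # measure a line: y = x0 + x1*t
J = np.ones((t.size, 1))                    # sensitivity to an unestimated bias b
sigma, sigma_b = 0.1, 0.5                   # noise SD, consider-parameter SD (assumed)
x_true = np.array([1.0, 2.0])

A = np.linalg.inv(H.T @ H) @ H.T            # plain least-squares estimator
P_noise = sigma**2 * A @ A.T                # formal covariance (ignores b)
S = A @ J                                   # how the bias maps into the estimate
P_consider = P_noise + sigma_b**2 * (S @ S.T)

# Monte Carlo: draw a new bias and noise realization each trial
errs = []
for _ in range(20000):
    b = rng.normal(0, sigma_b)
    y = H @ x_true + J[:, 0] * b + rng.normal(0, sigma, t.size)
    errs.append(A @ y - x_true)
P_emp = np.cov(np.array(errs).T)            # empirical error covariance
```

The empirical covariance matches P_consider, not P_noise: the state estimate is still computed by ordinary least squares, but its uncertainty is reported honestly by "considering" the parameter's covariance.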
50 CFR 648.293 - Tilefish accountability measures.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Tilefish accountability measures. 648.293 Section 648.293 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND... Measures for the Tilefish Fishery § 648.293 Tilefish accountability measures. (a) If the ACL is...
50 CFR 648.293 - Tilefish accountability measures.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Tilefish accountability measures. 648.293 Section 648.293 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND... Measures for the Tilefish Fishery § 648.293 Tilefish accountability measures. (a) If the ACL is...
50 CFR 648.143 - Black sea bass Accountability Measures.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Black sea bass Accountability Measures... Management Measures for the Black Sea Bass Fishery § 648.143 Black sea bass Accountability Measures. (a... based on dealer reports, state data, and other available information. All black sea bass landed for...
50 CFR 648.143 - Black sea bass Accountability Measures.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Black sea bass Accountability Measures... Management Measures for the Black Sea Bass Fishery § 648.143 Black sea bass Accountability Measures. (a... based on dealer reports, state data, and other available information. All black sea bass landed for...
50 CFR 648.143 - Black sea bass Accountability Measures.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Black sea bass Accountability Measures... Management Measures for the Black Sea Bass Fishery § 648.143 Black sea bass Accountability Measures. (a... based on dealer reports, state data, and other available information. All black sea bass landed for...
50 CFR 648.233 - Spiny dogfish Accountability Measures (AMs).
Code of Federal Regulations, 2011 CFR
2011-10-01
... 50 Wildlife and Fisheries 10 2011-10-01 2011-10-01 false Spiny dogfish Accountability Measures... Management Measures for the Spiny Dogfish Fishery § 648.233 Spiny dogfish Accountability Measures (AMs). (a... dogfish on that date for the remainder of that semi-annual period by publishing notification in...
50 CFR 648.233 - Spiny dogfish Accountability Measures (AMs).
Code of Federal Regulations, 2014 CFR
2014-10-01
... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Spiny dogfish Accountability Measures... Management Measures for the Spiny Dogfish Fishery § 648.233 Spiny dogfish Accountability Measures (AMs). (a... quota described in § 648.232 will be harvested and shall close the EEZ to fishing for spiny dogfish...
50 CFR 648.233 - Spiny dogfish Accountability Measures (AMs).
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Spiny dogfish Accountability Measures... Management Measures for the Spiny Dogfish Fishery § 648.233 Spiny dogfish Accountability Measures (AMs). (a... dogfish on that date for the remainder of that semi-annual period by publishing notification in...
50 CFR 648.123 - Scup accountability measures.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Scup accountability measures. 648.123 Section 648.123 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND... Measures for the Scup Fishery § 648.123 Scup accountability measures. (a) Commercial sector period...
50 CFR 648.123 - Scup accountability measures.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Scup accountability measures. 648.123 Section 648.123 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND... Measures for the Scup Fishery § 648.123 Scup accountability measures. (a) Commercial sector period...
50 CFR 648.123 - Scup accountability measures.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Scup accountability measures. 648.123 Section 648.123 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND... Measures for the Scup Fishery § 648.123 Scup accountability measures. (a) Commercial sector period...
Error reduction techniques for measuring long synchrotron mirrors
Irick, S.
1998-07-01
Many instruments and techniques are used for measuring long mirror surfaces. A Fizeau interferometer may be used to measure mirrors much longer than the interferometer aperture size by using grazing incidence at the mirror surface and analyzing the light reflected from a flat end mirror. Advantages of this technique are data acquisition speed and use of a common instrument. Disadvantages are reduced sampling interval, uncertainty of tangential position, and sagittal/tangential aspect ratio other than unity. Also, deep aspheric surfaces cannot be measured on a Fizeau interferometer without a specially made fringe nulling holographic plate. Other scanning instruments have been developed for measuring height, slope, or curvature profiles of the surface, but lack accuracy for very long scans required for X-ray synchrotron mirrors. The Long Trace Profiler (LTP) was developed specifically for long x-ray mirror measurement, and still outperforms other instruments, especially for aspheres. Thus, this paper focuses on error reduction techniques for the LTP.
Factors Affecting Blood Glucose Monitoring: Sources of Errors in Measurement
Ginsberg, Barry H.
2009-01-01
Glucose monitoring has become an integral part of diabetes care but has some limitations in accuracy. Errors may arise from strip manufacturing variances, strip storage, and aging; from environmental limitations such as temperature or altitude; or from patient factors such as improper coding, incorrect hand washing, altered hematocrit, or naturally occurring interfering substances. Finally, exogenous interfering substances may contribute errors to the system evaluation of blood glucose. In this review, I discuss the measurement of error in blood glucose, the sources of error, their mechanisms, and potential solutions to improve accuracy in the hands of the patient. I also discuss the clinical measurement of system accuracy, methods of judging the suitability of clinical trials, and finally some methods of overcoming the inaccuracies. I have included comments about additional information or education that could be provided today by manufacturers in the appropriate sections. Areas that require additional work are discussed in the final section. PMID:20144340
Error analysis and modeling for the time grating length measurement system
NASA Astrophysics Data System (ADS)
Gao, Zhonghua; Fen, Jiqin; Zheng, Fangyan; Chen, Ziran; Peng, Donglin; Liu, Xiaokang
2013-10-01
By analyzing the errors of a length measurement system whose principal measuring component is a linear time grating, we found that studying the error behaviour is essential for reducing system errors and optimizing the system structure. The main error sources in the length measurement system, including the time grating sensor, slide way, and cantilever, were studied, and the total errors were obtained. We then established a mathematical model of the errors of the length measurement system and used it to calibrate the system errors. We also developed a set of experimental devices in which a laser interferometer was used to calibrate the length measurement system errors. After error calibration, the accuracy of the measurement system was improved from the original 36 um/m to 14 um/m. The agreement between the experimental and simulation results shows that the mathematical error model is suitable for the length measurement system.
50 CFR 622.49 - Accountability measures.
Code of Federal Regulations, 2011 CFR
2011-10-01
... ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE CARIBBEAN, GULF, AND SOUTH ATLANTIC Management Measures.... (5) Black sea bass—(i) Commercial fishery. If commercial landings, as estimated by the SRD, reach or... the recreational ACL of 409,000 lb (185,519 kg), gutted weight, and black sea bass are...
Improving optical bench radius measurements using stage error motion data
Schmitz, Tony L.; Gardner, Neil; Vaughn, Matthew; Medicus, Kate; Davies, Angela
2008-12-20
We describe the application of a vector-based radius approach to optical bench radius measurements in the presence of imperfect stage motions. In this approach, the radius is defined using a vector equation and homogeneous transformation matrix formalism. This is in contrast to the typical technique, where the displacement between the confocal and cat's eye null positions alone is used to determine the test optic radius. An important aspect of the vector-based radius definition is the intrinsic correction for measurement biases, such as straightness errors in the stage motion and cosine misalignment between the stage and displacement gauge axis, which lead to an artificially small radius value if the traditional approach is employed. Measurement techniques and results are provided for the stage error motions, which are then combined with the setup geometry through the analysis to determine the radius of curvature for a spherical artifact. Comparisons are shown between the new vector-based radius calculation, traditional radius computation, and a low-uncertainty mechanical measurement. Additionally, the measurement uncertainty for the vector-based approach is determined using Monte Carlo simulation and compared to experimental results.
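A toy calculation (angle and radius values assumed) illustrates the cosine-misalignment bias mentioned above: the displacement-only radius is foreshortened by cos(theta), while folding the known misalignment into the geometry recovers the true value:

```python
import numpy as np

# Assumed illustrative values, not from the paper.
R_true = 100.0               # mm, nominal radius of the test optic
theta = np.deg2rad(0.5)      # misalignment between stage and gauge axes

# Traditional approach: radius taken directly as the gauge reading between
# the cat's eye and confocal nulls; misalignment foreshortens the reading.
d_gauge = R_true * np.cos(theta)
R_traditional = d_gauge

# Geometry-aware correction (the vector-based approach does this within a
# full homogeneous-transformation model, including straightness errors).
R_corrected = d_gauge / np.cos(theta)

bias_um = (R_true - R_traditional) * 1000.0   # artificially small radius, in um
```

Even a 0.5 degree misalignment biases a 100 mm radius by a few micrometres, which is significant at the uncertainty levels discussed in the paper.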
Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements
NASA Technical Reports Server (NTRS)
Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.
2014-01-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde-ozonesonde launches from the Southern Hemisphere subtropics to Northern mid-latitudes are considered, with launches between 2005 - 2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI-Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are greater than ±0.6 hPa in the free troposphere, with nearly a third greater than ±1.0 hPa at 26 km, where the 1.0 hPa error represents about 5 percent of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96 percent of launches lie within ±5 percent O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (about 30 km) can approach greater than ±10 percent (more than 25 percent of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when
Propagation of radiosonde pressure sensor errors to ozonesonde measurements
NASA Astrophysics Data System (ADS)
Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.
2014-01-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.6 hPa in the free troposphere, with nearly a third > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~ 5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~ 30 km), can approach greater than ±10% (> 25% of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when the O3 profile is
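Because the mixing ratio is the measured O3 partial pressure divided by the total pressure, the effect of a pressure offset can be sketched directly (the function and sample pressure levels below are our own illustration of the scaling reported above):

```python
# O3 mixing ratio = p_O3 / P_total, so a radiosonde total-pressure offset dP
# maps into a relative mixing-ratio error of (P_true / (P_true + dP)) - 1,
# i.e. roughly -dP/P for small offsets. Pressure levels below are assumed.
def o3mr_rel_error(pressure_hpa, offset_hpa):
    """Fractional O3 mixing-ratio error caused by a total-pressure offset."""
    return pressure_hpa / (pressure_hpa + offset_hpa) - 1.0

# A fixed 1 hPa offset: negligible in the free troposphere, large aloft.
err_700 = o3mr_rel_error(700.0, 1.0)   # mid-troposphere
err_20 = o3mr_rel_error(20.0, 1.0)     # near 26 km
err_10 = o3mr_rel_error(10.0, 1.0)     # near 30 km
```

This reproduces the qualitative picture in the abstract: a 1 hPa offset is a sub-percent effect at 700 hPa but approaches a ~5% mixing-ratio error near 20 hPa and ~10% near 10 hPa.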
Taking the Error Term of the Factor Model into Account: The Factor Score Predictor Interval
ERIC Educational Resources Information Center
Beauducel, Andre
2013-01-01
The problem of factor score indeterminacy implies that the factor and the error scores cannot be completely disentangled in the factor model. It is therefore proposed to compute Harman's factor score predictor that contains an additive combination of factor and error variance. This additive combination is discussed in the framework of classical…
Data Reconciliation and Gross Error Detection: A Filtered Measurement Test
Himour, Y.
2008-06-12
Measured process data commonly contain inaccuracies because the measurements are obtained with imperfect instruments. In addition to random errors, one can expect systematic bias caused by miscalibrated instruments, as well as outliers caused by process peaks such as sudden power fluctuations. Data reconciliation is the adjustment of a set of process data, based on a model of the process, so that the derived estimates conform to natural laws. In this paper, we explore a predictor-corrector filter based on data reconciliation; a modified version of the measurement test is then combined with this filter to detect probable outliers that can affect process measurements. The strategy is tested using a dynamic simulation of an inverted pendulum.
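For the linear steady-state case, reconciliation and the measurement test can be sketched as a constrained least-squares projection followed by standardized adjustments (the small flow network and its numbers are our own illustration, not the paper's inverted-pendulum study):

```python
import numpy as np

# Steady-state mass balances for a splitter feeding a pipe:
# flow1 = flow2 + flow3 and flow3 = flow4, i.e. A @ x = 0.
A = np.array([[1.0, -1.0, -1.0, 0.0],
              [0.0,  0.0,  1.0, -1.0]])
sigma = np.ones(4)                     # measurement SDs (assumed equal)
V = np.diag(sigma**2)

y = np.array([100.0, 60.0, 40.0, 46.0])   # flow4 carries a gross error

# Least-squares reconciliation: project y onto the balance constraints.
S = A @ V @ A.T
K = V @ A.T @ np.linalg.inv(S)
adjust = K @ (A @ y)                   # adjustments d = y - x_hat
x_hat = y - adjust                     # reconciled flows satisfy A @ x_hat = 0

# Measurement test: standardized adjustments; |z| above ~1.96 flags a
# suspect measurement at the 5% level.
W = K @ A @ V                          # covariance of the adjustments
z = adjust / np.sqrt(np.diag(W))
```

The largest standardized adjustment points at the fourth measurement, matching where the gross error was injected; the paper's contribution is embedding this test in a dynamic, filtered setting rather than the static one sketched here.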
Analysis of Spherical Form Errors to Coordinate Measuring Machine Data
NASA Astrophysics Data System (ADS)
Chen, Mu-Chen
Coordinate measuring machines (CMMs) are commonly utilized to take measurement data from manufactured surfaces for inspection purposes. The measurement data are then used to evaluate the geometric form errors associated with the surface. Traditionally, the evaluation of spherical form errors involves an optimization process of fitting a substitute sphere to the sampled points. This paper proposes computational strategies for sphericity evaluation with respect to the ASME Y14.5M-1994 standard. The proposed methods consider the trade-off between the accuracy of the sphericity evaluation and the efficiency of inspection. Two computational-metrology approaches based on genetic algorithms (GAs) are proposed to explore the optimality of sphericity measurements and the sphericity feasibility analysis, respectively. The proposed algorithms are verified using several CMM data sets. The computational results show that the proposed algorithms are practical for on-line implementation of sphericity evaluation. Using the GA-based computational techniques, the accuracy of the sphericity assessment and the efficiency of the sphericity feasibility analysis are satisfactory.
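As context for the substitute-sphere step, here is a common algebraic least-squares sphere fit (a sketch with synthetic CMM-like data; the paper itself uses GA-based minimum-zone strategies per ASME Y14.5M-1994 rather than this fit):

```python
import numpy as np

def fit_sphere_ls(pts):
    """Algebraic least-squares sphere fit.

    ||x - c||^2 = R^2 rearranges to 2 c . x + (R^2 - ||c||^2) = ||x||^2,
    which is linear in the unknowns (c, R^2 - ||c||^2).
    """
    A = np.column_stack([2.0 * pts, np.ones(len(pts))])
    b = np.sum(pts**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

rng = np.random.default_rng(3)
# Synthetic sample: 500 points on a sphere of radius 25 centred at (1, 2, 3)
u = rng.normal(size=(500, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
pts = np.array([1.0, 2.0, 3.0]) + 25.0 * u
center, radius = fit_sphere_ls(pts)

d = np.linalg.norm(pts - center, axis=1)
sphericity = d.max() - d.min()     # form error relative to the fitted sphere
```

A least-squares fit like this is fast but does not minimize the radial zone width; that is why minimum-zone formulations (and search heuristics such as the paper's GAs) are used when strict standard conformance is required.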
Performance-Based Measurement: Action for Organizations and HPT Accountability
ERIC Educational Resources Information Center
Larbi-Apau, Josephine A.; Moseley, James L.
2010-01-01
Basic measurements and applications of six selected general but critical operational performance-based indicators--effectiveness, efficiency, productivity, profitability, return on investment, and benefit-cost ratio--are presented. With each measurement, goals and potential impact are explored. Errors, risks, limitations to measurements, and a…
Patient motion tracking in the presence of measurement errors.
Haidegger, Tamás; Benyó, Zoltán; Kazanzides, Peter
2009-01-01
The primary aim of computer-integrated surgical systems is to provide physicians with superior surgical tools for better patient outcomes. Robotic technology is capable of both minimally invasive surgery and microsurgery, offering remarkable advantages for the surgeon and the patient. Current systems allow for sub-millimeter intraoperative spatial positioning; however, certain limitations remain. Measurement noise and unintended changes in the operating room environment can result in major errors. Positioning errors are a significant danger to patients in procedures involving robots and other automated devices. We have developed a new robotic system at the Johns Hopkins University to support cranial drilling in neurosurgery procedures. The robot provides advanced visualization and safety features. The generic algorithm described in this paper allows for automated compensation of patient motion through optical tracking and Kalman filtering. When applied to the neurosurgery setup, preliminary results show that it is possible to identify patient motion within 700 ms and apply the appropriate compensation with an average of 1.24 mm positioning error after 2 s of setup time. PMID:19964394
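The motion-compensation idea can be sketched with a scalar Kalman filter whose innovation gate flags patient motion (a 1-D toy with assumed noise levels and timing, not the actual system described above):

```python
import numpy as np

rng = np.random.default_rng(4)

# 1-D toy: a static target shifts by 3 mm at t = 1.0 s, tracked at 50 Hz
# with 0.3 mm optical-tracker noise (all values assumed for illustration).
dt, t_end = 0.02, 2.0
times = np.arange(0.0, t_end, dt)
truth = np.where(times < 1.0, 0.0, 3.0)
meas = truth + rng.normal(0.0, 0.3, times.size)

q, r = 1e-3, 0.3**2        # process / measurement noise variances (assumed)
x, p = meas[0], 1.0        # state estimate and its variance
est, detect_t = [], None
for t, z in zip(times, meas):
    p += q                                       # predict (random-walk model)
    innov = z - x
    if detect_t is None and abs(innov) > 5.0 * np.sqrt(p + r):
        detect_t = t                             # innovation gate flags motion
    k = p / (p + r)                              # Kalman gain
    x += k * innov                               # update
    p *= 1.0 - k
    est.append(x)
est = np.array(est)
```

The filter smooths tracker noise while static and converges to the new position after the shift; the gated innovation gives a detection time comparable in spirit (not in numbers) to the sub-second identification reported above.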
Accounting for systematic errors in bioluminescence imaging to improve quantitative accuracy
NASA Astrophysics Data System (ADS)
Taylor, Shelley L.; Perry, Tracey A.; Styles, Iain B.; Cobbold, Mark; Dehghani, Hamid
2015-07-01
Bioluminescence imaging (BLI) is a widely used pre-clinical imaging technique, but there are a number of limitations to its quantitative accuracy. This work uses an animal model to demonstrate some significant limitations of BLI and presents processing methods and algorithms which overcome these limitations, increasing the quantitative accuracy of the technique. The position of the imaging subject and source depth are both shown to affect the measured luminescence intensity. Free Space Modelling is used to eliminate the systematic error due to the camera/subject geometry, removing the dependence of luminescence intensity on animal position. Bioluminescence tomography (BLT) is then used to provide additional information about the depth and intensity of the source. A substantial limitation in the number of sources identified using BLI is also presented. It is shown that when a given source is at a significant depth, it can appear as multiple sources when imaged using BLI, while the use of BLT recovers the true number of sources present.
Lidar Uncertainty Measurement Experiment (LUMEX) - Understanding Sampling Errors
NASA Astrophysics Data System (ADS)
Choukulkar, A.; Brewer, W. A.; Banta, R. M.; Hardesty, M.; Pichugina, Y.; Senff, Christoph; Sandberg, S.; Weickmann, A.; Carroll, B.; Delgado, R.; Muschinski, A.
2016-06-01
Coherent Doppler LIDAR (Light Detection and Ranging) has been widely used to provide measurements of several boundary layer parameters such as profiles of wind speed, wind direction, vertical velocity statistics, mixing layer heights and turbulent kinetic energy (TKE). An important aspect of providing this wide range of meteorological data is to properly characterize the uncertainty associated with these measurements. With the above intent in mind, the Lidar Uncertainty Measurement Experiment (LUMEX) was conducted at Erie, Colorado during the period June 23rd to July 13th, 2014. The major goals of this experiment were the following:
This experiment brought together 5 Doppler lidars, both commercial and research grade, for a period of three weeks for a comprehensive intercomparison study. The Doppler lidars were deployed at the Boulder Atmospheric Observatory (BAO) site in Erie, site of a 300 m meteorological tower. This tower was instrumented with six sonic anemometers at levels from 50 m to 300 m with 50 m vertical spacing. A brief overview of the experiment outline and deployment will be presented. Results from the sampling error analysis and its implications on scanning strategy will be discussed.
Propagation of radiosonde pressure sensor errors to ozonesonde measurements
NASA Astrophysics Data System (ADS)
Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.
2013-08-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this, a total of 501 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2006-2013 from both historical and campaign-based intensive stations. Three types of electrochemical concentration cell (ECC) ozonesonde manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) and five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80 and RS92) are analyzed to determine the magnitude of the pressure offset and the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.7 hPa in the free troposphere, with nearly a quarter > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (98% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors in the 7-15 hPa layer (29-32 km), a region critical for detection of long-term O3 trends, can approach greater than ±10% (>25% of launches that reach 30 km exceed this threshold). Comparisons of total column O3 yield average differences of +1.6 DU (-1.1 to +4.9 DU 10th to 90th percentiles) when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of +0.1 DU (-1.1 to +2.2 DU) when the O3 profile is integrated to 10 hPa with subsequent addition of the O3 climatology above 10 hPa. The RS92 radiosondes are clearly distinguishable
50 CFR 660.509 - Accountability measures (season closures).
Code of Federal Regulations, 2014 CFR
2014-10-01
... 50 Wildlife and Fisheries 13 2014-10-01 2014-10-01 false Accountability measures (season closures... Coastal Pelagics Fisheries § 660.509 Accountability measures (season closures). (a) General rule. When the... until the beginning of the next fishing period or season. Regional Administrator shall announce in...
A Bayesian Measurement Error Model for Misaligned Radiographic Data
Lennox, Kristin P.; Glascoe, Lee G.
2013-09-06
An understanding of the inherent variability in micro-computed tomography (micro-CT) data is essential to tasks such as statistical process control and the validation of radiographic simulation tools. The data present unique challenges to variability analysis due to the relatively low resolution of radiographs, and also due to minor variations from run to run which can result in misalignment or magnification changes between repeated measurements of a sample. Positioning changes artificially inflate the variability of the data in ways that mask true physical phenomena. We present a novel Bayesian nonparametric regression model that incorporates both additive and multiplicative measurement error in addition to heteroscedasticity to address this problem. We also use this model to assess the effects of sample thickness and sample position on measurement variability for an aluminum specimen. Supplementary materials for this article are available online.
Measurements of Aperture Averaging on Bit-Error-Rate
NASA Technical Reports Server (NTRS)
Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert
2005-01-01
We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.
Disrupted prediction-error signal in psychosis: evidence for an associative account of delusions
Corlett, P. R.; Murray, G. K.; Honey, G. D.; Aitken, M. R. F.; Shanks, D. R.; Robbins, T.W.; Bullmore, E.T.; Dickinson, A.; Fletcher, P. C.
2012-01-01
Delusions are maladaptive beliefs about the world. Based upon experimental evidence that prediction error—a mismatch between expectancy and outcome—drives belief formation, this study examined the possibility that delusions form because of disrupted prediction-error processing. We used fMRI to determine prediction-error-related brain responses in 12 healthy subjects and 12 individuals (7 males) with delusional beliefs. Frontal cortex responses in the patient group were suggestive of disrupted prediction-error processing. Furthermore, across subjects, the extent of disruption was significantly related to an individual’s propensity to delusion formation. Our results support a neurobiological theory of delusion formation that implicates aberrant prediction-error signalling, disrupted attentional allocation and associative learning in the formation of delusional beliefs. PMID:17690132
NASA Astrophysics Data System (ADS)
Song, Qing; Zhang, Chunsong; Huang, Jiayong; Wu, Di; Liu, Jing
2009-11-01
The error sources of the external diameter measurement system based on the double optical path parallel light projection method are the non-parallelism of the two optical paths, aberration distortion of the projection lens, the edge of the projected profile of the cylinder (which is affected by the aperture size of the illuminating beam), light intensity variation, and the counting error in the circuit. A screw-pair drive provides the up-and-down movement in the system. The precision of this movement depends mainly on the Abbe error caused by the offset between the centerline and the moving line of the capacitive-gate ruler, on the tilt error of the guide mechanism, and on the error caused by thermal expansion of parts as the temperature changes. The rotary mechanism is driven by a stepper motor through a gear transmission; its precision is determined by the stepping angle error of the stepper motor, the gear transmission error, and the tilt of the piston relative to the rotation axis. Errors are corrected by placing a component in the optical path to obtain an error curve, which is then applied point by point in software compensation.
Effects of measurement error on estimating biological half-life
Caudill, S.P.; Pirkle, J.L.; Michalek, J.E.
1992-10-01
Direct computation of the observed biological half-life of a toxic compound in a person can lead to an undefined estimate when subsequent concentration measurements are greater than or equal to previous measurements. The likelihood of such an occurrence depends upon the length of time between measurements and the variance (intra-subject biological and inter-sample analytical) associated with the measurements. If the compound is lipophilic the subject's percentage of body fat at the times of measurement can also affect this likelihood. We present formulas for computing a model-predicted half-life estimate and its variance; and we derive expressions for the effect of sample size, measurement error, time between measurements, and any relevant covariates on the variability in model-predicted half-life estimates. We also use statistical modeling to estimate the probability of obtaining an undefined half-life estimate and to compute the expected number of undefined half-life estimates for a sample from a study population. Finally, we illustrate our methods using data from a study of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) exposure among 36 members of Operation Ranch Hand, the Air Force unit responsible for the aerial spraying of Agent Orange in Vietnam.
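The undefined-estimate problem described above is easy to reproduce: the two-point half-life estimate is t½ = Δt · ln 2 / ln(C₁/C₂), which is undefined whenever the later concentration is not smaller than the earlier one. A minimal Monte Carlo sketch, with hypothetical concentrations and error levels (not the TCDD study values):

```python
import math
import random

def half_life(c1, c2, dt_years):
    """Two-point half-life estimate; None when undefined (c2 >= c1)."""
    if c2 >= c1:
        return None
    return dt_years * math.log(2) / math.log(c1 / c2)

# True half-life 7 years, measurements 5 years apart, 20% lognormal
# measurement error -- all illustrative values, not from the paper.
random.seed(0)
t_half_true, dt, cv = 7.0, 5.0, 0.20
c1_true = 100.0
c2_true = c1_true * 2 ** (-dt / t_half_true)

n, undefined = 20000, 0
for _ in range(n):
    c1 = c1_true * math.exp(random.gauss(0.0, cv))
    c2 = c2_true * math.exp(random.gauss(0.0, cv))
    if half_life(c1, c2, dt) is None:
        undefined += 1

print(f"fraction of undefined half-life estimates: {undefined / n:.1%}")
```

Shortening the interval between measurements or increasing the measurement variance raises this fraction, which is the dependence the abstract describes.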
Sampling errors in the measurement of rain and hail parameters
NASA Technical Reports Server (NTRS)
Gertzman, H. S.; Atlas, D.
1977-01-01
Attention is given to a general derivation of the fractional standard deviation (FSD) of any integrated property X such that X(D) = cD to the n. This work extends that of Joss and Waldvogel (1969). The equation is applicable to measuring integrated properties of cloud, rain or hail populations (such as water content, precipitation rate, kinetic energy, or radar reflectivity) which are subject to statistical sampling errors due to the Poisson distributed fluctuations of particles sampled in each particle size interval and the weighted sum of the associated variances in proportion to their contribution to the integral parameter to be measured. Universal curves are presented which are applicable to the exponential size distribution permitting FSD estimation of any parameters from n = 0 to n = 6. The equations and curves also permit corrections for finite upper limits in the size spectrum and a realistic fall speed law.
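For Poisson-sampled particles, the FSD of an estimator X̂ = Σᵢ nᵢ xᵢ with per-particle contribution xᵢ = c Dᵢⁿ and nᵢ ~ Poisson(mᵢ) is √(Σᵢ mᵢ xᵢ²) / Σᵢ mᵢ xᵢ, which can be evaluated directly for an exponential size distribution. A sketch with illustrative drop-size-distribution parameters (not the paper's universal curves):

```python
import math

def fsd(moment_n, lam=2.0, n0=8000.0, volume=1.0, d_max=8.0, nbins=400):
    """Fractional standard deviation of X = sum(c * D^n) for Poisson counts
    drawn from an exponential size distribution N(D) = N0 * exp(-lam * D).
    Illustrative units: D in mm, N0 in m^-3 mm^-1, sampled volume in m^3.
    """
    dd = d_max / nbins
    num = den = 0.0
    for i in range(nbins):
        d = (i + 0.5) * dd
        m = n0 * math.exp(-lam * d) * dd * volume  # expected count in bin
        x = d ** moment_n                          # per-particle contribution (c = 1)
        num += m * x * x
        den += m * x
    return math.sqrt(num) / den

# Higher moments are dominated by rare large particles, so reflectivity
# (n = 6) is far noisier than total concentration (n = 0) for the same sample.
for n in (0, 3, 6):
    print(f"n = {n}: FSD = {fsd(n):.3f}")
```

For n = 0 the formula collapses to 1/√(total expected count), the familiar Poisson counting error; raising n shifts the weight toward the sparse tail of the spectrum.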
Errors in Potassium Measurement: A Laboratory Perspective for the Clinician
Asirvatham, Jaya R; Moses, Viju; Bjornson, Loring
2013-01-01
Errors in potassium measurement can cause pseudohyperkalemia, where serum potassium is falsely elevated. Usually, these are recognized either by the laboratory or the clinician. However, the same factors that cause pseudohyperkalemia can mask hypokalemia by pushing measured values into the reference interval. These cases require a high-index of suspicion by the clinician as they cannot be easily identified in the laboratory. This article discusses the causes and mechanisms of spuriously elevated potassium, and current recommendations to minimize those factors. “Reverse” pseudohyperkalemia and the role of correction factors are also discussed. Relevant articles were identified by a literature search performed on PubMed using the terms “pseudohyperkalemia,” “reverse pseudohyperkalemia,” “factitious hyperkalemia,” “spurious hyperkalemia,” and “masked hypokalemia.” PMID:23724399
Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements
NASA Technical Reports Server (NTRS)
Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.
2014-01-01
This presentation discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 2x4 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from CubeSats, unmanned air vehicles (UAV), and commercial aircraft.
Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements
NASA Technical Reports Server (NTRS)
Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.
2014-01-01
This paper discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 4x2 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from cubesats, unmanned air vehicles (UAV), and commercial aircraft.
Characterization of measurement error sources in Doppler global velocimetry
NASA Astrophysics Data System (ADS)
Meyers, James F.; Lee, Joseph W.; Schwartz, Richard J.
2001-04-01
Doppler global velocimetry uses the absorption characteristics of iodine vapour to provide instantaneous three-component measurements of flow velocity within a plane defined by a laser light sheet. Although the technology is straightforward, its utilization as a flow diagnostics tool requires hardening of the optical system and careful attention to detail during data acquisition and processing if routine use in wind tunnel applications is to be achieved. A development programme that reaches these goals is presented. Theoretical and experimental investigations were conducted on each technology element to determine methods that increase measurement accuracy and repeatability. Enhancements resulting from these investigations included methods to ensure iodine vapour calibration stability, single frequency operation of the laser and image alignment to sub-pixel accuracies. Methods were also developed to improve system calibration, and eliminate spatial variations of optical frequency in the laser output, spatial variations in optical transmissivity and perspective and optical distortions in the data images. Each of these enhancements is described and experimental examples given to illustrate the improved measurement performance obtained by the enhancement. The culmination of this investigation was the measured velocity profile of a rotating wheel resulting in a 1.75% error in the mean with a standard deviation of 0.5 m s-1. Comparing measurements of a jet flow with corresponding Pitot measurements validated the use of these methods for flow field applications.
Characterization of Measurement Error Sources in Doppler Global Velocimetry
NASA Technical Reports Server (NTRS)
Meyers, James F.; Lee, Joseph W.; Schwartz, Richard J.
2001-01-01
Doppler global velocimetry uses the absorption characteristics of iodine vapor to provide instantaneous three-component measurements of flow velocity within a plane defined by a laser light sheet. Although the technology is straightforward, its utilization as a flow diagnostics tool requires hardening of the optical system and careful attention to detail during data acquisition and processing if routine use in wind tunnel applications is to be achieved. A development program that reaches these goals is presented. Theoretical and experimental investigations were conducted on each technology element to determine methods that increase measurement accuracy and repeatability. Enhancements resulting from these investigations included methods to ensure iodine vapor calibration stability, single frequency operation of the laser and image alignment to sub-pixel accuracies. Methods were also developed to improve system calibration, and eliminate spatial variations of optical frequency in the laser output, spatial variations in optical transmissivity and perspective and optical distortions in the data images. Each of these enhancements is described and experimental examples given to illustrate the improved measurement performance obtained by the enhancement. The culmination of this investigation was the measured velocity profile of a rotating wheel resulting in a 1.75% error in the mean with a standard deviation of 0.5 m/s. Comparing measurements of a jet flow with corresponding Pitot measurements validated the use of these methods for flow field applications.
Effects of measurement error on horizontal hydraulic gradient estimates.
Devlin, J F; McElwee, C D
2007-01-01
During the design of a natural gradient tracer experiment, it was noticed that the hydraulic gradient was too small to measure reliably on an approximately 500 m² site. Additional wells were installed to increase the monitored area to 26,500 m², and wells were instrumented with pressure transducers. The resulting monitoring system was capable of measuring heads with a precision of ±1.3 × 10⁻² m. This measurement error was incorporated into Monte Carlo calculations, in which only hydraulic head values were varied between realizations. The standard deviation in the estimated gradient and the flow direction angle from the x-axis (east direction) were calculated. The data yielded an average hydraulic gradient of 4.5 × 10⁻⁴ ± 25% with a flow direction of 56° southeast ± 18°, with the variations representing 1 standard deviation. Further Monte Carlo calculations investigated the effects of number of wells, aspect ratio of the monitored area, and the size of the monitored area on the previously mentioned uncertainties. The exercise showed that monitored areas must exceed a size determined by the magnitude of the measurement error if meaningful gradient estimates and flow directions are to be obtained. The aspect ratio of the monitored zone should be as close to 1 as possible, although departures as great as 0.5 to 2 did not degrade the quality of the data unduly. Numbers of wells beyond three to five provided little advantage. These conclusions were supported for the general case with a preliminary theoretical analysis. PMID:17257340
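The Monte Carlo step in this kind of analysis is compact: fit a plane to the well heads, perturb each head with the transducer error, repeat. A sketch with three hypothetical wells (the coordinates and the true heads are invented for illustration; the head precision and gradient magnitude are taken from the abstract):

```python
import numpy as np

def gradient_from_heads(xy, heads):
    """Least-squares plane h = a + b*x + c*y; returns gradient magnitude
    and direction (degrees counterclockwise from the +x axis)."""
    A = np.column_stack([np.ones(len(xy)), xy[:, 0], xy[:, 1]])
    a, b, c = np.linalg.lstsq(A, heads, rcond=None)[0]
    return np.hypot(b, c), np.degrees(np.arctan2(c, b))

rng = np.random.default_rng(1)
xy = np.array([[0.0, 0.0], [150.0, 10.0], [60.0, 140.0]])  # hypothetical wells (m)
true_grad = 4.5e-4                                         # magnitude from the abstract
heads_true = 100.0 - true_grad * xy[:, 0]                  # head falling along +x
sigma = 1.3e-2                                             # head precision (m), from the abstract

mags = []
for _ in range(5000):
    noisy = heads_true + rng.normal(0.0, sigma, size=3)
    mag, _ = gradient_from_heads(xy, noisy)
    mags.append(mag)

mags = np.array(mags)
print(f"gradient: {mags.mean():.2e}, relative scatter: {mags.std() / mags.mean():.0%}")
```

Shrinking the well triangle (say, to 20 m sides) makes the relative scatter explode, which is the paper's central point: the monitored area must be large relative to the head measurement error.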
Influence of video compression on the measurement error of the television system
NASA Astrophysics Data System (ADS)
Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.
2015-05-01
Video data require very large memory capacity, so finding an optimal quality/volume trade-off in video encoding is a pressing problem given the need to transfer large amounts of video over various networks. Digital TV signal compression reduces the amount of data used to represent the video stream, effectively shrinking the bandwidth required for transmission and storage. When television measuring systems are used, however, the uncertainties introduced by compression of the video signal must be taken into account. Many digital compression methods exist. The aim of the proposed work is to study the influence of video compression on measurement error in television systems. The measurement error of an object parameter is the main characteristic of a television measuring system; accuracy characterizes the difference between the measured value and the actual parameter value. Sources of error in television measurements include the optical system and the method used to process the received video signal. With compression at a constant data rate, errors lead to large distortions; with compression at constant quality, errors increase the amount of data required to transmit or record an image frame. The purpose of intra-frame coding is to reduce the spatial redundancy within a frame (or field) of the television image, redundancy caused by the strong correlation between image elements. If a suitable orthogonal transformation can be found, an array of image samples can be converted into a matrix of mutually uncorrelated coefficients, to which entropy coding can be applied to reduce the digital stream. A transformation can be chosen such that, for typical images, most of the matrix coefficients are almost zero; excluding these near-zero coefficients further reduces the stream.
Crainiceanu, Ciprian M.; Caffo, Brian S.; Di, Chong-Zhi; Punjabi, Naresh M.
2009-01-01
We introduce methods for signal and associated variability estimation based on hierarchical nonparametric smoothing with application to the Sleep Heart Health Study (SHHS). SHHS is the largest electroencephalographic (EEG) collection of sleep-related data, which contains, at each visit, two quasi-continuous EEG signals for each subject. The signal features extracted from EEG data are then used in second level analyses to investigate the relation between health, behavioral, or biometric outcomes and sleep. Using subject specific signals estimated with known variability in a second level regression becomes a nonstandard measurement error problem. We propose and implement methods that take into account cross-sectional and longitudinal measurement error. The research presented here forms the basis for EEG signal processing for the SHHS. PMID:20057925
The effect of systematic errors on the hybridization of optical critical dimension measurements
NASA Astrophysics Data System (ADS)
Henn, Mark-Alexander; Barnes, Bryan M.; Zhang, Nien Fan; Zhou, Hui; Silver, Richard M.
2015-06-01
In hybrid metrology two or more measurements of the same measurand are combined to provide a more reliable result that ideally incorporates the individual strengths of each of the measurement methods. While these multiple measurements may come from dissimilar metrology methods such as optical critical dimension microscopy (OCD) and scanning electron microscopy (SEM), we investigated the hybridization of similar OCD methods featuring a focus-resolved simulation study of systematic errors performed at orthogonal polarizations. Specifically, errors due to line edge and line width roughness (LER, LWR) and their superposition (LEWR) are known to contribute a systematic bias with inherent correlated errors. In order to investigate the sensitivity of the measurement to LEWR, we follow a modeling approach proposed by Kato et al. who studied the effect of LEWR on extreme ultraviolet (EUV) and deep ultraviolet (DUV) scatterometry. Similar to their findings, we have observed that LEWR leads to a systematic bias in the simulated data. Since the critical dimensions (CDs) are determined by fitting the respective model data to the measurement data by minimizing the difference measure or chi square function, a proper description of the systematic bias is crucial to obtaining reliable results and to successful hybridization. In scatterometry, an analytical expression for the influence of LEWR on the measured orders can be derived, and accounting for this effect leads to a modification of the model function that not only depends on the critical dimensions but also on the magnitude of the roughness. For finite arrayed structures however, such an analytical expression cannot be derived. We demonstrate how to account for the systematic bias and that, if certain conditions are met, a significant improvement of the reliability of hybrid metrology for combining both dissimilar and similar measurement tools can be achieved.
ERIC Educational Resources Information Center
Ambridge, Ben; Rowland, Caroline F.; Theakston, Anna L.; Tomasello, Michael
2006-01-01
This study investigated different accounts of children's acquisition of non-subject wh-questions. Questions using each of 4 wh-words ("what," "who," "how" and "why"), and 3 auxiliaries (BE, DO and CAN) in 3sg and 3pl form were elicited from 28 children aged 3;6-4;6. Rates of non-inversion error ("Who she is hitting?") were found not to differ by…
Measure against Measure: Responsibility versus Accountability in Education
ERIC Educational Resources Information Center
Senechal, Diana
2013-01-01
In education policy, practice, and discussion, we find ourselves caught between responsibility--fidelity to one's experience, conscience, and discernment--and a narrow kind of accountability. In order to preserve integrity, we (educators and leaders) must maintain independence of thought while skillfully articulating our work to the outside world.…
Horizon sensor errors calculated by computer models compared with errors measured in orbit
NASA Technical Reports Server (NTRS)
Ward, K. A.; Hogan, R.; Andary, J.
1982-01-01
Using a computer program to model the earth's horizon and to duplicate the signal processing procedure employed by the ESA (Earth Sensor Assembly), errors due to radiance variation have been computed for a particular time of the year. Errors actually occurring in flight at the same time of year are inferred from integrated rate gyro data for a satellite of the TIROS series of NASA weather satellites (NOAA-A). The predicted performance is compared with actual flight history.
Horizon Sensor Errors Calculated By Computer Models Compared With Errors Measured In Orbit
NASA Astrophysics Data System (ADS)
Ward, Kenneth A.; Hogan, Roger; Andary, James
1982-06-01
Using a computer program to model the earth's horizon and to duplicate the signal processing procedure employed by the ESA (Earth Sensor Assembly), errors due to radiance variation have been computed for a particular time of the year. Errors actually occurring in flight at the same time of year are inferred from integrated rate gyro data for a satellite of the TIROS series of NASA weather satellites (NOAA-7). The predicted performance is compared with actual flight history.
Bradshaw, Corey J A; Sims, David W; Hays, Graeme C
2007-03-01
Recent advances in telemetry technology have created a wealth of tracking data available for many animal species moving over spatial scales from tens of meters to tens of thousands of kilometers. Increasingly, such data sets are being used for quantitative movement analyses aimed at extracting fundamental biological signals such as optimal searching behavior and scale-dependent foraging decisions. We show here that the location error inherent in various tracking technologies reduces the ability to detect patterns of behavior within movements. Our analyses endeavored to set out a series of initial ground rules for ecologists to help ensure that sampling noise is not misinterpreted as a real biological signal. We simulated animal movement tracks using specialized random walks known as Lévy flights at three spatial scales of investigation: 100-km, 10-km, and 1-km maximum daily step lengths. The locations generated in the simulations were then blurred using known error distributions associated with commonly applied tracking methods: the Global Positioning System (GPS), Argos polar-orbiting satellites, and light-level geolocation. Deviations from the idealized Lévy flight pattern were assessed for each track after incrementing levels of location error were applied at each spatial scale, with additional assessments of the effect of error on scale-dependent movement patterns measured using fractal mean dimension and first-passage time (FPT) analyses. The accuracy of parameter estimation (Lévy μ, fractal mean D, and variance in FPT) declined precipitously at threshold errors relative to each spatial scale. At 100-km maximum daily step lengths, error standard deviations of ≥ 10 km seriously eroded the biological patterns evident in the simulated tracks, with analogous thresholds at the 10-km and 1-km scales (error SD ≥ 1.3 km and 0.07 km, respectively). Temporal subsampling of the simulated tracks maintained some elements of the biological signals depending on
NASA Astrophysics Data System (ADS)
Koepke, C.; Irving, J.; Roubinet, D.
2014-12-01
Geophysical methods have gained much interest in hydrology over the past two decades because of their ability to provide estimates of the spatial distribution of subsurface properties at a scale that is often relevant to key hydrological processes. Because of an increased desire to quantify uncertainty in hydrological predictions, many hydrogeophysical inverse problems have recently been posed within a Bayesian framework, such that estimates of hydrological properties and their corresponding uncertainties can be obtained. With the Bayesian approach, it is often necessary to make significant approximations to the associated hydrological and geophysical forward models such that stochastic sampling from the posterior distribution, for example using Markov-chain-Monte-Carlo (MCMC) methods, is computationally feasible. These approximations lead to model structural errors, which, so far, have not been properly treated in hydrogeophysical inverse problems. Here, we study the inverse problem of estimating unsaturated hydraulic properties, namely the van Genuchten-Mualem (VGM) parameters, in a layered subsurface from time-lapse, zero-offset-profile (ZOP) ground penetrating radar (GPR) data, collected over the course of an infiltration experiment. In particular, we investigate the effects of assumptions made for computational tractability of the stochastic inversion on model prediction errors as a function of depth and time. These assumptions are that (i) infiltration is purely vertical and can be modeled by the 1D Richards equation, and (ii) the petrophysical relationship between water content and relative dielectric permittivity is known. Results indicate that model errors for this problem are far from Gaussian and independently identically distributed, which has been the common assumption in previous efforts in this domain. In order to develop a more appropriate likelihood formulation, we use (i) a stochastic description of the model error that is obtained through
NASA Astrophysics Data System (ADS)
Nuutinen, Mikko; Virtanen, Toni; Häkkinen, Jukka
2016-03-01
Evaluating algorithms used to assess image and video quality requires performance measures. Traditional performance measures (e.g., Pearson's linear correlation coefficient, Spearman's rank-order correlation coefficient, and root mean square error) compare quality predictions of algorithms to subjective mean opinion scores (mean opinion score/differential mean opinion score). We propose a subjective root-mean-square error (SRMSE) performance measure for evaluating the accuracy of algorithms used to assess image and video quality. The SRMSE performance measure takes into account dispersion between observers. The other important property of the SRMSE performance measure is its measurement scale, which is calibrated to units of the number of average observers. The results of the SRMSE performance measure indicate the extent to which the algorithm can replace the subjective experiment (as the number of observers). Furthermore, we have presented the concept of target values, which define the performance level of the ideal algorithm. We have calculated the target values for all sample sets of the CID2013, CVD2014, and LIVE multiply distorted image quality databases. The target values and MATLAB implementation of the SRMSE performance measure are available on the project page of this study.
Exploring Measurement Error with Cookies: A Real and Virtual Approach via Interactive Excel
ERIC Educational Resources Information Center
Sinex, Scott A; Gage, Barbara A.; Beck, Peggy J.
2007-01-01
A simple, guided-inquiry investigation using stacked sandwich cookies is employed to develop a simple linear mathematical model and to explore measurement error by incorporating errors as part of the investigation. Both random and systematic errors are presented. The model and errors are then investigated further by engaging with an interactive…
NASA Technical Reports Server (NTRS)
Fulton, C. L.; Harris, R. L., Jr.
1980-01-01
Factors that can affect oculometer measurements of pupil diameter are: horizontal (azimuth) and vertical (elevation) viewing angle of the pilot; refraction of the eye and cornea; changes in distance of eye to camera; illumination intensity of light on the eye; and counting sensitivity of scan lines used to measure diameter, and output voltage. To estimate the accuracy of the measurements, an artificial eye was designed and a series of runs performed with the oculometer system. When refraction effects are included, results show that pupil diameter is a parabolic function of the azimuth angle similar to the cosine function predicted by theory: this error can be accounted for by using a correction equation, reducing the error from 6% to 1.5% of the actual diameter. Elevation angle and illumination effects were found to be negligible. The effects of counting sensitivity and output voltage can be calculated directly from system documentation. The overall accuracy of the unmodified system is about 6%. After correcting for the azimuth angle errors, the overall accuracy is approximately 2%.
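The azimuth correction described above amounts to fitting a parabola to the artificial-eye calibration and dividing it out of the measured diameter. A hypothetical sketch (the quadratic form and the coefficient k are illustrative assumptions; the paper's actual correction equation is not reproduced here):

```python
def corrected_diameter(d_measured_mm, azimuth_deg, k=2.0e-5):
    """Undo a parabolic apparent shrinkage d_meas = d_true * (1 - k * az^2).
    k is a hypothetical calibration constant (per deg^2), obtained in
    practice from artificial-eye runs like those described in the abstract."""
    return d_measured_mm / (1.0 - k * azimuth_deg ** 2)

# A 4 mm pupil viewed 30 degrees off-axis appears slightly smaller;
# applying the correction recovers ~4.0 mm.
d_true = 4.0
az = 30.0
d_meas = d_true * (1.0 - 2.0e-5 * az ** 2)  # simulated raw measurement
print(corrected_diameter(d_meas, az))
```

This closed-loop check (simulate the shrinkage, then invert it) mirrors how a correction equation cuts the azimuth-dependent error from 6% to about 1.5% in the study.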
Background
• Differing degrees of exposure error across pollutants
• Previous focus on quantifying and accounting for exposure error in single-pollutant models
• Examine exposure errors for multiple pollutants and provide insights on the potential for bias and attenuation...
Three-way partitioning of sea surface temperature measurement error
NASA Technical Reports Server (NTRS)
Chelton, D.
1983-01-01
Given any set of three 2-degree binned anomaly sea surface temperature (SST) data sets from three different sensors, an estimate of the mean square error of each sensor's measurements is made. This formalism is applied to every possible triplet of sensors, and a separate table of error estimates is then constructed for each sensor.
Moving Beyond "Good/Bad" Student Accountability Measures: Multiple Perspectives of Accountability.
ERIC Educational Resources Information Center
Capper, Colleen A.; Hafner, Madeline M.; Keyes, Maureen W.
2001-01-01
Examines three student accountability measures (standardized tests, performance-based assessment, and structural assessment) through two different theoretical perspectives: structural functionalism and feminist poststructuralism. Educators can use various kinds of assessments in ways that maintain the status quo or support equity and justice for…
Accuracy and Repeatability of Refractive Error Measurements by Photorefractometry
Rajavi, Zhale; Sabbaghi, Hamideh; Baghini, Ahmad Shojaei; Yaseri, Mehdi; Sheibani, Koroush; Norouzi, Ghazal
2015-01-01
Purpose: To determine the accuracy of photorefraction and autorefraction as compared to cycloautorefraction and to detect the repeatability of photorefraction. Methods: This diagnostic study included the right eyes of 86 children aged 7-12 years. Refractive status was measured using photorefraction (PlusoptiX SO4, GmbH, Nürnberg, Germany) and autorefraction (Topcon RM800, USA) with and without cycloplegia. Photorefraction for each eye was performed three times to assess repeatability. Results: The overall agreement between photorefraction and cycloautorefraction was over 81% for all refractive errors. Photorefractometry had acceptable sensitivity and specificity for myopia and astigmatism. There was no statistically significant difference considering myopia and astigmatism in all comparisons, while the difference was significant for hyperopia using both amblyogenic (P = 0.006) and nonamblyogenic criteria (P = 0.001). A myopic shift of 1.21 diopter (D) and 1.58 D occurred with photorefraction in nonamblyogenic and amblyogenic hyperopia, respectively. Using revised cut-off points of + 1.12 D and + 2.6 D instead of + 2.00 D and + 3.50 D improved the sensitivity of photorefractometry to 84.62% and 69.23%, respectively. The repeatability of photorefraction for measurement of myopia, astigmatism and hyperopia was acceptable (intraclass correlation coefficient [ICC]: 0.98, 0.94 and 0.77, respectively). Autorefraction results were significantly different from cycloautorefraction in hyperopia (P < 0.0001), but comparable in myopia and astigmatism. Also, noncycloglegic autorefraction results were similar to photorefraction in this study. Conclusion: Although photorefraction was accurate for measurement of myopia and astigmatism, its sensitivity for hyperopia was low which could be improved by considering revised cut-off points. Considering cut-off points, photorefraction can be used as a screening method. PMID:26730305
50 CFR 648.163 - Bluefish Accountability Measures (AMs).
Code of Federal Regulations, 2012 CFR
2012-10-01
Title 50 (Wildlife and Fisheries), Volume 12: Bluefish Accountability Measures (AMs). Section 648.163, FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE, FISHERIES OF THE NORTHEASTERN UNITED STATES, Management Measures for the Atlantic...
Accounting for People: Can Business Measure Human Value?
ERIC Educational Resources Information Center
Workforce Economics, 1997
1997-01-01
Traditional business practice undervalues human capital, and most conventional accounting models reflect this inclination. The argument for more explicit measurements of human resources is simple: Improved measurement of human resources will lead to more rational and productive choices about managing human resources. The business community is…
50 CFR 648.103 - Summer flounder accountability measures.
Code of Federal Regulations, 2012 CFR
2012-10-01
Title 50 (Wildlife and Fisheries), Volume 12: Summer flounder accountability measures. Section 648.103, FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE, FISHERIES OF THE NORTHEASTERN UNITED STATES, Management Measures for the Summer...
Predictors of Measurement Error in Energy Intake During Pregnancy
Nowicki, Eric; Siega-Riz, Anna-Maria; Herring, Amy; He, Ka; Stuebe, Alison; Olshan, Andy
2011-01-01
Nutrition plays a critical role in maternal and fetal health; however, research on error in the measurement of energy intake during pregnancy is limited. The authors analyzed data on 998 women living in central North Carolina with singleton pregnancies during 2001–2005. Second-trimester diet was assessed by food frequency questionnaire. Estimated energy requirements were calculated using Institute of Medicine prediction equations, with adjustment for energy costs during the second trimester. Implausible values for daily energy intake were determined using confidence limits of agreement for energy intake/estimated energy requirements. Prevalences of low energy reporting (LER) and high energy reporting (HER) were 32.8% and 12.9%, respectively. In a multivariable analysis, pregravid body mass index was related to both LER and HER; LER was higher in both overweight (odds ratio = 1.96, 95% confidence interval: 1.26, 3.02; P = 0.031) and obese (odds ratio = 3.29, 95% confidence interval: 2.33, 4.65; P < 0.001) women than in normal-weight counterparts. Other predictors of LER included marriage and higher levels of physical activity. HER was higher among subjects who were underweight, African-American, and less educated and subjects who had higher depressive symptom scores. LER and HER are prevalent during pregnancy. Identifying their predictors may improve data collection and analytic methods for reducing systematic bias in the study of diet and reproductive outcomes. PMID:21273398
Large-scale spatial angle measurement and the pointing error analysis
NASA Astrophysics Data System (ADS)
Xiao, Wen-jian; Chen, Zhi-bin; Ma, Dong-xi; Zhang, Yong; Liu, Xian-hong; Qin, Meng-ze
2016-05-01
A large-scale spatial angle measurement method is proposed based on an inertial reference. A common measurement reference is established in inertial space, and the spatial vector coordinates of each measured axis in inertial space are measured using autocollimation tracking and inertial measurement technology. From the spatial coordinates of each test vector axis, the measurement of large-scale spatial angles is easily realized. The pointing error of the tracking device, based on the two mirrors in the measurement system, is studied, and the influence of different installation errors on the pointing error is analyzed. This research can lay a foundation for error allocation, calibration, and compensation for the measurement system.
NDA accountability measurement needs in the DOE plutonium community
Ostenak, C.A.
1988-08-31
The purpose of this first ATEX report is to identify the twenty most vital nondestructive assay (NDA) accountability measurement needs in the DOE plutonium community for DOE and contractor safeguards R&D managers, in order to promote resolution of these needs. During 1987, ATEX identified sixty NDA accountability measurement problems, many of which were common to each of the DOE sites considered. These sixty problems were combined into twenty NDA accountability measurement needs that exist within five major areas: NDA "standards" representing various nuclear materials and matrix compositions; impure nuclear materials compounds, residues, and wastes; product-grade nuclear materials; nuclear materials process holdup and in-process inventory; and nuclear materials item control and verification. 2 figs.
Electrochemically-Modulated Separations for Material Accountability Measurements
Arrigo, Leah M.; Liezers, Martin; Douglas, Matthew; Green, Michael A.; Farmer, Orville T.; Schwantes, Jon M.; Peper, Shane M.; Duckworth, Douglas C.
2010-05-07
The Safeguards community recognizes that an accurate and timely measurement of accountable material mass at the head-end of the facility is critical to a modern materials control and accountability program at fuel reprocessing plants. For material accountancy, it is critical to detect both acute and chronic diversions of nuclear materials. Therefore, both on-line nondestructive assay (NDA) and destructive analysis (DA) approaches are desirable. Current methods for DA involve grab sampling and laboratory based column extractions that are costly, hazardous, and time consuming. Direct on-line gamma measurements of Pu, while desirable, are not possible due to contributions from other actinide and fission products. A technology for simple, online separation of targeted materials would benefit both DA and NDA measurements.
Implications of Three Causal Models for the Measurement of Halo Error.
ERIC Educational Resources Information Center
Fisicaro, Sebastiano A.; Lance, Charles E.
1990-01-01
Three conceptual definitions of halo error are reviewed in the context of causal models of halo error. A corrected correlational measurement of halo error is derived, and the traditional and corrected measures are compared empirically for a 1986 study of 52 undergraduate students' ratings of a lecturer's performance. (SLD)
Error analysis of Raman differential absorption lidar ozone measurements in ice clouds.
Reichardt, J
2000-11-20
A formalism for the error treatment of lidar ozone measurements with the Raman differential absorption lidar technique is presented. In the presence of clouds wavelength-dependent multiple scattering and cloud-particle extinction are the main sources of systematic errors in ozone measurements and necessitate a correction of the measured ozone profiles. Model calculations are performed to describe the influence of cirrus and polar stratospheric clouds on the ozone. It is found that it is sufficient to account for cloud-particle scattering and Rayleigh scattering in and above the cloud; boundary-layer aerosols and the atmospheric column below the cloud can be neglected for the ozone correction. Furthermore, if the extinction coefficient of the cloud is ≤0.1 km(-1), the effect in the cloud is proportional to the effective particle extinction and to a particle correction function determined in the limit of negligible molecular scattering. The particle correction function depends on the scattering behavior of the cloud particles, the cloud geometric structure, and the lidar system parameters. Because of the differential extinction of light that has undergone one or more small-angle scattering processes within the cloud, the cloud effect on ozone extends to altitudes above the cloud. The various influencing parameters imply that the particle-related ozone correction has to be calculated for each individual measurement. Examples of ozone measurements in cirrus clouds are discussed. PMID:18354611
On error sources during airborne measurements of the ambient electric field
NASA Technical Reports Server (NTRS)
Evteev, B. F.
1991-01-01
The principal sources of errors during airborne measurements of the ambient electric field and charge are addressed. Results of their analysis are presented in a critical survey. It is demonstrated that the volume electric charge has to be accounted for during such measurements, that charge being generated at the airframe and wing surface by droplets of clouds and precipitation colliding with the aircraft. The local effect of that space charge depends on the flight regime (air speed, altitude, particle size, and cloud elevation). Such a dependence is displayed in the relation between the collector conductivity of the aircraft discharging circuit, on one hand, and the sum of all the residual conductivities contributing to aircraft discharge, on the other. Arguments are given in favor of variability in the aircraft electric capacitance. Techniques are suggested for measuring form factors to describe the aircraft charge.
Dayem, H.A.; Ostenak, C.A.; Gutmacher, R.G.; Kern, E.A.; Markin, J.T.; Martinez, D.P.; Thomas, C.C. Jr.
1982-07-01
This report describes the conceptual design of a materials accounting system for the feed preparation and chemical separations processes of a fast breeder reactor spent-fuel reprocessing facility. For the proposed accounting system, optimization techniques are used to calculate instrument measurement uncertainties that meet four different accounting performance goals while minimizing the total development cost of instrument systems. We identify instruments that require development to meet performance goals and measurement uncertainty components that dominate the materials balance variance. Materials accounting in the feed preparation process is complicated by large in-process inventories and spent-fuel assembly inputs that are difficult to measure. To meet loss-detection goals of 8 kg of plutonium for abrupt losses and 40 kg of plutonium for protracted losses, materials accounting in the chemical separations process requires: process tank volume and concentration measurements having a precision less than or equal to 1%; accountability and plutonium sample tank volume measurements having a precision less than or equal to 0.3%, a short-term correlated error less than or equal to 0.04%, and a long-term correlated error less than or equal to 0.04%; and accountability and plutonium sample tank concentration measurements having a precision less than or equal to 0.4%, a short-term correlated error less than or equal to 0.1%, and a long-term correlated error less than or equal to 0.05%. The effects of process design on materials accounting are identified. Major areas of concern include the voloxidizer, the continuous dissolver, and the accountability tank.
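The distinction the goals above draw between precision and correlated errors can be illustrated with a simple variance sketch: random (precision) error averages down over repeated measurements, while correlated (systematic) components do not. The measurement count is hypothetical; the relative-error values echo the volume-measurement goals quoted above:

```python
import math

def relative_balance_std(precision, short_corr, long_corr, n_meas):
    """Relative 1-sigma of the average of n_meas measurements: the random
    (precision) component shrinks as 1/sqrt(n); correlated components do not."""
    return math.sqrt(precision**2 / n_meas + short_corr**2 + long_corr**2)

# Accountability-tank volume goals from the text: 0.3% precision,
# 0.04% short- and long-term correlated error; 30 measurements assumed.
sigma = relative_balance_std(0.003, 0.0004, 0.0004, n_meas=30)
```

With enough measurements the correlated terms dominate, which is why the goals bound them an order of magnitude tighter than the precision.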
NASA Astrophysics Data System (ADS)
Wilson, M. D.; Durand, M.; Jung, H. C.; Alsdorf, D.
2014-08-01
The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water surface elevations. In this paper, we aimed to (i) characterize and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a "virtual mission" for a 300 km reach of the central Amazon (Solimões) River at its confluence with the Purus River, using a hydraulic model to provide water surface elevations according to SWOT spatio-temporal sampling to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water surface elevation measurements for the Amazon mainstem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths of greater than 4 km for the Solimões and 7.5 km for Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-section averaging and 20 km reach lengths, results show Nash-Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6 and 19.1% average overall error in discharge, respectively.
Evidence, exaggeration, and error in historical accounts of chaparral wildfires in California.
Goforth, Brett R; Minnich, Richard A
2007-04-01
For more than half a century, ecologists and historians have been integrating the contemporary study of ecosystems with data gathered from historical sources to evaluate change over broad temporal and spatial scales. This approach is especially useful where ecosystems were altered before formal study as a result of natural resources management, land development, environmental pollution, and climate change. Yet, in many places, historical documents do not provide precise information, and pre-historical evidence is unavailable or has ambiguous interpretation. There are similar challenges in evaluating how the fire regime of chaparral in California has changed as a result of fire suppression management initiated at the beginning of the 20th century. Although the firestorm of October 2003 was the largest officially recorded in California (approximately 300,000 ha), historical accounts of pre-suppression wildfires have been cited as evidence that such a scale of burning was not unprecedented, suggesting the fire regime and patch mosaic in chaparral have not substantially changed. We find that the data do not support pre-suppression megafires, and that the impression of large historical wildfires is a result of imprecision and inaccuracy in the original reports, as well as a parlance that is beset with hyperbole. We underscore themes of importance for critically analyzing historical documents to evaluate ecological change. A putative 100 mile long by 10 mile wide (160 x 16 km) wildfire reported in 1889 was reconstructed to an area of chaparral approximately 40 times smaller by linking local accounts to property tax records, voter registration rolls, claimed insurance, and place names mapped with a geographical information system (GIS) which includes data from historical vegetation surveys. We also show that historical sources cited as evidence of other large chaparral wildfires are either demonstrably inaccurate or provide anecdotal information that is immaterial in the
Colloquium: Quantum root-mean-square error and measurement uncertainty relations
NASA Astrophysics Data System (ADS)
Busch, Paul; Lahti, Pekka; Werner, Reinhard F.
2014-10-01
Recent years have witnessed a controversy over Heisenberg's famous error-disturbance relation. Here the conflict is resolved by way of an analysis of the possible conceptualizations of measurement error and disturbance in quantum mechanics. Two approaches to adapting the classic notion of root-mean-square error to quantum measurements are discussed. One is based on the concept of a noise operator; its natural operational content is that of a mean deviation of the values of two observables measured jointly, and thus its applicability is limited to cases where such joint measurements are available. The second error measure quantifies the differences between two probability distributions obtained in separate runs of measurements and is of unrestricted applicability. We show that there are no nontrivial unconditional joint-measurement bounds for state-dependent errors in the conceptual framework discussed here, while Heisenberg-type measurement uncertainty relations for state-independent errors have been proven.
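The noise-operator error measure discussed above is conventionally written as follows (a sketch of the standard definition; here $Z$ denotes the pointer observable of the probe and $U$ the unitary measurement interaction, notation assumed):

```latex
\varepsilon(A)^{2} \;=\;
\bigl\langle \psi \otimes \xi \bigr|
\bigl( U^{\dagger} (\mathbb{1} \otimes Z)\, U - A \otimes \mathbb{1} \bigr)^{2}
\bigl| \psi \otimes \xi \bigr\rangle ,
```

with $|\psi\rangle$ the system state and $|\xi\rangle$ the probe state. The operational limitation noted in the abstract follows from the fact that evaluating this expression requires the joint statistics of the measured pointer values and the values of $A$.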
A heteroscedastic measurement error model for method comparison data with replicate measurements.
Nawarathna, Lakshika S; Choudhary, Pankaj K
2015-03-30
Measurement error models offer a flexible framework for modeling data collected in studies comparing methods of quantitative measurement. These models generally make two simplifying assumptions: (i) the measurements are homoscedastic, and (ii) the unobservable true values of the methods are linearly related. One or both of these assumptions may be violated in practice. In particular, error variabilities of the methods may depend on the magnitude of measurement, or the true values may be nonlinearly related. Data with these features call for a heteroscedastic measurement error model that allows nonlinear relationships in the true values. We present such a model for the case when the measurements are replicated, discuss its fitting, and explain how to evaluate similarity of measurement methods and agreement between them, which are two common goals of data analysis, under this model. Model fitting involves dealing with lack of a closed form for the likelihood function. We consider estimation methods that approximate either the likelihood or the model to yield approximate maximum likelihood estimates. The fitting methods are evaluated in a simulation study. The proposed methodology is used to analyze a cholesterol dataset. PMID:25614299
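A minimal simulation of the kind of data such a model targets: replicated measurements by two methods, error SD growing with magnitude, and one method nonlinearly related to the truth. All numbers are illustrative, not the cholesterol data analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical method-comparison data with replicates: both methods measure an
# unobservable true value b; error SD grows with |b| (heteroscedasticity) and
# method 2 is nonlinearly related to the truth.
n_subj, n_rep = 100, 10
b = rng.uniform(150, 300, n_subj)            # true values, e.g. cholesterol
sd1, sd2 = 0.02 * b, 0.03 * b                # error SD proportional to magnitude
y1 = b[:, None] + rng.normal(0, 1, (n_subj, n_rep)) * sd1[:, None]
y2 = 10 + 0.002 * b[:, None] ** 1.5 + rng.normal(0, 1, (n_subj, n_rep)) * sd2[:, None]

# Under this model the within-subject replicate SD should track the magnitude,
# which is the diagnostic that motivates a heteroscedastic model.
corr = np.corrcoef(b, y1.std(axis=1, ddof=1))[0, 1]
```

A homoscedastic model fitted to such data would misstate the limits of agreement at both ends of the measurement range.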
Du, Zhengchun; Wu, Zhaoyong; Yang, Jianguo
2016-01-01
The use of three-dimensional (3D) data in the industrial measurement field is becoming increasingly popular because of the rapid development of laser scanning techniques based on the time-of-flight principle. However, the accuracy and uncertainty of these types of measurement methods are seldom investigated. In this study, a mathematical uncertainty evaluation model for the diameter measurement of standard cylindroid components has been proposed and applied to a 3D laser radar measurement system (LRMS). First, a single-point error ellipsoid analysis for the LRMS was established. An error ellipsoid model and algorithm for diameter measurement of cylindroid components was then proposed based on the single-point error ellipsoid. Finally, four experiments were conducted using the LRMS to measure the diameter of a standard cylinder in the laboratory. The experimental results of the uncertainty evaluation consistently matched well with the predictions. The proposed uncertainty evaluation model for cylindrical diameters can provide a reliable method for actual measurements and support further accuracy improvement of the LRMS. PMID:27213385
Measurement of four-degree-of-freedom error motions based on non-diffracting beam
NASA Astrophysics Data System (ADS)
Zhai, Zhongsheng; Lv, Qinghua; Wang, Xuanze; Shang, Yiyuan; Yang, Liangen; Kuang, Zheng; Bennett, Peter
2016-05-01
A measuring method for the determination of error motions of linear stages based on non-diffracting beams (NDB) is presented. A right-angle prism and a beam splitter are adopted as the measuring head, which is fixed on the moving stage in order to sense the straightness and angular errors. Two CCDs are used to capture the NDB patterns that carry the errors. Four different types of error, the vertical straightness error and three rotational errors (the pitch, roll and yaw errors), can be separated and distinguished through theoretical analysis of the shift in the centre positions in the two cameras. Simulation results show that the proposed method using NDB can measure four-degree-of-freedom errors for the linear stage.
Tilt error in cryospheric surface radiation measurements at high latitudes: a model study
NASA Astrophysics Data System (ADS)
Bogren, W. S.; Burkhart, J. F.; Kylling, A.
2015-08-01
We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response foreoptic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can respectively introduce up to 2.6, 7.7, and 12.8 % error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can also persist in integrated daily irradiance and albedo.
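The dominant direct-beam component of the tilt error can be sketched in closed form: tilting the sensor by an angle beta toward the sun changes the beam incidence angle, scaling the reading by cos(sza − beta)/cos(sza). This ignores the diffuse contribution, so it gives a direct-only upper bound rather than the paper's totals:

```python
import numpy as np

def direct_tilt_error(sza_deg, tilt_deg):
    """Worst-case relative error (%) of a cosine-response sensor tilted
    toward the sun, direct beam only (diffuse irradiance ignored)."""
    sza, tilt = np.radians(sza_deg), np.radians(tilt_deg)
    return 100 * (np.cos(sza - tilt) / np.cos(sza) - 1)

# Direct-only bounds at 60 degrees solar zenith angle for 1, 3, 5 degree tilts
errors = [direct_tilt_error(60, t) for t in (1, 3, 5)]
```

These direct-only values exceed the abstract's figures, consistent with the diffuse component diluting the tilt effect.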
50 CFR 648.24 - Fishery closures and accountability measures.
Code of Federal Regulations, 2014 CFR
2014-10-01
Title 50 (Wildlife and Fisheries), Volume 12: Fishery closures and accountability measures. Section 648.24, FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE, FISHERIES OF THE NORTHEASTERN UNITED...
50 CFR 648.24 - Fishery closures and accountability measures.
Code of Federal Regulations, 2012 CFR
2012-10-01
Title 50 (Wildlife and Fisheries), Volume 12: Fishery closures and accountability measures. Section 648.24, FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE, FISHERIES OF THE NORTHEASTERN UNITED...
50 CFR 648.24 - Fishery closures and accountability measures.
Code of Federal Regulations, 2013 CFR
2013-10-01
Title 50 (Wildlife and Fisheries), Volume 12: Fishery closures and accountability measures. Section 648.24, FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE, FISHERIES OF THE NORTHEASTERN UNITED...
Adapting Accountability Systems to the Limitations of Educational Measurement
ERIC Educational Resources Information Center
Kane, Michael
2015-01-01
Michael Kane writes in this article that he is in more or less complete agreement with Professor Koretz's characterization of the problem outlined in the paper published in this issue of "Measurement." Kane agrees that current testing practices are not adequate for test-based accountability (TBA) systems, but he writes that he is far…
Neutron-induced soft error rate measurements in semiconductor memories
NASA Astrophysics Data System (ADS)
Ünlü, Kenan; Narayanan, Vijaykrishnan; Çetiner, Sacit M.; Degalahal, Vijay; Irwin, Mary J.
2007-08-01
Soft error rate (SER) testing of devices has been performed using the neutron beam at the Radiation Science and Engineering Center at Penn State University. The soft error susceptibility for different memory chips working at different technology nodes and operating voltages is determined. The effect of 10B on SER as an in situ excess charge source is observed. The effect of higher-energy neutrons on circuit operation will be published later. The Penn State Breazeale Nuclear Reactor was used as the neutron source in the experiments. The high neutron flux allows for accelerated testing of the SER phenomenon. The experiments and analyses have been performed only on soft errors due to thermal neutrons. Various memory chips manufactured by different vendors were tested at various supply voltages and reactor power levels. The effect of the 10B reaction caused by thermal neutron absorption on SER is discussed.
Pustovitov, V. D.
2008-01-15
The possibility is discussed of determining the amplitude and phase of a static resonant error field in a tokamak by means of dynamic magnetic measurements. The method proposed assumes measuring the plasma response to a varying external helical magnetic field with a small (a few gauss) amplitude. The case is considered in which the plasma is probed by square pulses with a duration much longer than the time of the transition process. The plasma response is assumed to be linear, with a proportionality coefficient being dependent on the plasma state. The analysis is carried out in a standard cylindrical approximation. The model is based on Maxwell's equations and Ohm's law and is thus capable of accounting for the interaction of large-scale modes with the conducting wall of the vacuum chamber. The method can be applied to existing tokamaks.
Neradilek, Moni B.; Polissar, Nayak L.; Einstein, Daniel R.; Glenny, Robb W.; Minard, Kevin R.; Carson, James P.; Jiao, Xiangmin; Jacob, Richard E.; Cox, Timothy C.; Postlethwait, Edward M.; Corley, Richard A.
2012-04-24
We examine a previously published branch-based approach to modeling airway diameters that is predicated on the assumption of self-consistency across all levels of the tree. We mathematically formulate this assumption, propose a method to test it and develop a more general model to be used when the assumption is violated. We discuss the effect of measurement error on the estimated models and propose methods that account for it. The methods are illustrated on data from MRI and CT images of silicone casts of two rats, two normal monkeys and one ozone-exposed monkey. Our results showed substantial departures from self-consistency in all five subjects. When departures from self-consistency exist we do not recommend using the self-consistency model, even as an approximation, as we have shown that it is likely to lead to an incorrect representation of the diameter geometry. Measurement error has an important impact on the estimated morphometry models and needs to be accounted for in the analysis.
Total error vs. measurement uncertainty: revolution or evolution?
Oosterhuis, Wytze P; Theodorsson, Elvar
2016-02-01
The first strategic EFLM conference "Defining analytical performance goals, 15 years after the Stockholm Conference" was held in the autumn of 2014 in Milan. It maintained the Stockholm 1999 hierarchy of performance goals but rearranged them and established five task and finish groups to work on topics related to analytical performance goals, including one on the "total error" theory. Jim Westgard recently wrote a comprehensive overview of performance goals and of the total error theory, critical of the results and intentions of the Milan 2014 conference. The "total error" theory originated by Jim Westgard and co-workers has a dominating influence on the theory and practice of clinical chemistry but is not accepted in other fields of metrology. The generally accepted uncertainty theory, however, suffers from complex mathematics and perceived impracticability in clinical chemistry. The pros and cons of the total error theory need to be debated, making way for methods that can incorporate all relevant causes of uncertainty when making medical diagnoses and monitoring treatment effects. This development should preferably proceed not as a revolution but as an evolution. PMID:26540227
Canonical Correlation Analysis that Incorporates Measurement and Sampling Error Considerations.
ERIC Educational Resources Information Center
Thompson, Bruce; Daniel, Larry
Multivariate methods are being used with increasing frequency in educational research because these methods control "experimentwise" error rate inflation, and because the methods best honor the nature of the reality to which the researcher wishes to generalize. This paper: explains the basic logic of canonical analysis; illustrates that canonical…
Errors of Measurement and Standard Setting in Mastery Testing.
ERIC Educational Resources Information Center
Kane, Michael; Wilson, Jennifer
This paper evaluates the magnitude of the total error in estimates of the difference between an examinee's domain score and the cutoff score. An observed score based on a random sample of items from the domain, and an estimated cutoff score derived from a judgmental standard setting procedure are assumed. The work of Brennan and Lockwood (1980) is…
ERIC Educational Resources Information Center
Shear, Benjamin R.; Zumbo, Bruno D.
2013-01-01
Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
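The inflation mechanism described above can be reproduced in a few lines: when an error-prone confounder is only partially controlled for, a correlated predictor with no true effect picks up the residual signal and is spuriously "detected". A minimal Monte Carlo sketch (all parameter values and the function name are illustrative, not from the article):

```python
import numpy as np

def type1_rate(n=200, reps=2000, rho=0.7, err_sd=1.0, seed=0):
    """Monte Carlo rejection rate for x2 (true coefficient zero) when
    the confounder x1 is observed with additive measurement error."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        x1 = rng.normal(size=n)
        # x2 is correlated with x1 but has no direct effect on y
        x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.normal(size=n)
        y = 1.0 * x1 + rng.normal(size=n)
        x1_obs = x1 + err_sd * rng.normal(size=n)  # error-prone covariate
        X = np.column_stack([np.ones(n), x1_obs, x2])
        beta, _, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        s2 = resid @ resid / (n - 3)
        cov = s2 * np.linalg.inv(X.T @ X)
        t2 = beta[2] / np.sqrt(cov[2, 2])
        rejections += abs(t2) > 1.96  # nominal 5% two-sided test
    return rejections / reps

print(type1_rate(err_sd=0.0))  # no measurement error: near the nominal 0.05
print(type1_rate(err_sd=1.0))  # error-prone x1: rate far above 0.05
```

With reliability 0.5 for the observed confounder, the rejection rate for the null predictor climbs far above the nominal level, which is the pattern the article documents.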
La Haye, R.J.
1997-02-01
The existing theoretical and experimental basis for predicting the levels of resonant static error field at different components m,n that stop plasma rotation and produce a locked mode is reviewed. For ITER ohmic discharges, the slow rotation of the very large plasma is predicted to incur a locked mode (and subsequent disastrous large magnetic islands) at a simultaneous weighted error field (Σ_{m=1..3} w_m1 B_rm1²)^{1/2}/B_T ≥ 1.9 × 10⁻⁵. Here the weights w_m1 are empirically determined from measurements on DIII-D to be w_11 = 0.2, w_21 = 1.0, and w_31 = 0.8, which indicate the relative importance of the different error field components. The locked mode could be greatly obviated by application of counter-injected neutral beams (which add fluid flow to the natural ohmic electron drift). The addition of 5 MW of 1 MeV beams at 45° injection would increase the error field limit by a factor of 5; 13 MW would produce a factor of 10 improvement. Co-injection beams would also be effective, but less so than counter-injection, as the co direction opposes the intrinsic rotation while the counter direction adds to it. A means for measuring individual PF and TF coil total axisymmetric field error to less than 1 part in 10,000 is described. This would allow alignment of coils to mm accuracy and, with correction coils, make possible the very low levels of error field needed.
(Sample) Size Matters: Defining Error in Planktic Foraminiferal Isotope Measurement
NASA Astrophysics Data System (ADS)
Lowery, C.; Fraass, A. J.
2015-12-01
Planktic foraminifera have been used as carriers of stable isotopic signals since the pioneering work of Urey and Emiliani. In those heady days, instrumental limitations required hundreds of individual foraminiferal tests to return a usable value. This had the fortunate side-effect of smoothing any seasonal to decadal changes within the planktic foram population, which generally turns over monthly, removing that potential noise from each sample. With the advent of more sensitive mass spectrometers, smaller sample sizes have become standard. This has been a tremendous advantage, allowing longer time series with the same investment of time and energy. Unfortunately, the use of smaller numbers of individuals to generate a data point has lessened the amount of time averaging in the isotopic analysis and decreased precision in paleoceanographic datasets. With fewer individuals per sample, the differences between individual specimens will result in larger variation, and therefore error, and less precise values for each sample. Unfortunately, most workers (the authors included) do not make a habit of reporting the error associated with their sample size. We have created an open-source model in R to quantify the effect of sample size under various realistic and highly modifiable parameters (calcification depth, diagenesis in a subset of the population, improper identification, vital effects, mass, etc.). For example, a sample in which only 1 in 10 specimens is diagenetically altered can be off by >0.3‰ δ18O VPDB, or ~1°C. Additionally, and perhaps more importantly, we show that under unrealistically ideal conditions (perfect preservation, etc.) it takes ~5 individuals from the mixed layer to achieve an error of less than 0.1‰. Including just the unavoidable vital effects inflates that number to ~10 individuals to achieve ~0.1‰. Combining these errors with the typical machine error inherent in mass spectrometers makes this a vital consideration moving forward.
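The core scaling behind the individuals-per-sample numbers above is the shrinkage of the standard error of the sample mean. The abstract's model is in R and far richer; this Python sketch assumes only a per-test scatter of 0.3‰ (an illustrative stand-in for vital effects and other per-individual variability, not a value from the abstract):

```python
import numpy as np

def isotope_sample_error(n_individuals, sd_per_test=0.3, reps=5000, seed=1):
    """Std. dev. of the sample-mean d18O across replicate picks of
    n_individuals forams, each test scattered by sd_per_test permil."""
    rng = np.random.default_rng(seed)
    means = rng.normal(0.0, sd_per_test, size=(reps, n_individuals)).mean(axis=1)
    return means.std()

for n in (1, 5, 10, 30):
    print(n, round(float(isotope_sample_error(n)), 3))
```

The error falls roughly as 1/sqrt(n): with a 0.3‰ per-test scatter, about ten individuals are needed to bring the sample error near 0.1‰, consistent in spirit with the abstract's figures.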
NASA Astrophysics Data System (ADS)
Wilson, M. D.; Durand, M.; Jung, H. C.; Alsdorf, D.
2015-04-01
The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface-water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water-surface elevations. In this paper, we aimed to (i) characterise and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water-surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a virtual mission for a ~260 km reach of the central Amazon (Solimões) River, using a hydraulic model to provide water-surface elevations according to SWOT spatio-temporal sampling to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water-surface elevation measurements for the Amazon main stem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths greater than 4 km for the Solimões and 7.5 km for Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-sectional averaging and 20 km reach lengths, results show Nash-Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6 and 19.1 % average overall error in discharge, respectively. We extend the results to other rivers worldwide and infer that SWOT-derived discharge estimates may be more accurate for rivers with larger channel widths (permitting a greater level of cross
ERIC Educational Resources Information Center
The Newsletter of the Comprehensive Center-Region VI, 1999
1999-01-01
Controversy surrounding the accountability movement is related to how the movement began in response to dissatisfaction with public schools. Opponents see it as one-sided, somewhat mean-spirited, and a threat to the professional status of teachers. Supporters argue that all other spheres of the workplace have accountability systems and that the…
ERIC Educational Resources Information Center
Lashway, Larry
1999-01-01
This issue reviews publications that provide a starting point for principals looking for a way through the accountability maze. Each publication views accountability differently, but collectively these readings argue that even in an era of state-mandated assessment, principals can pursue proactive strategies that serve students' needs. James A.…
An Empirical Study of the Relative Error Magnitude in Three Measures of Change.
ERIC Educational Resources Information Center
Williams, Richard H.; And Others
1984-01-01
This paper describes the procedures and results of two studies designed to yield empirical comparisons of the error magnitude in three change measures: the simple gain score, the residualized difference score, and the base free measure (Tucker et al). Residualized scores possessed smaller standard errors of measurement. (Author/BS)
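Two of the change measures compared above can be computed directly; `change_scores` is a hypothetical helper, and the residualized difference score is obtained by least-squares regression of posttest on pretest, which makes it uncorrelated with the pretest by construction:

```python
import numpy as np

def change_scores(pre, post):
    """Simple gain scores and residualized difference scores.
    The residualized score removes the part of post predictable from pre
    via least squares, so it is uncorrelated with the pretest."""
    pre = np.asarray(pre, float)
    post = np.asarray(post, float)
    gain = post - pre
    b = np.cov(pre, post, bias=True)[0, 1] / pre.var()
    a = post.mean() - b * pre.mean()
    residualized = post - (a + b * pre)
    return gain, residualized

# illustrative pre/post scores, not data from the study
pre = np.array([50., 55., 60., 65., 70.])
post = np.array([58., 60., 67., 70., 80.])
gain, resid = change_scores(pre, post)
```

The base-free measure of Tucker et al. additionally works with estimated true scores, which is not sketched here.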
Thomas, Edward V.; Stork, Christopher L.; Mattingly, John K.
2015-07-01
Inverse radiation transport focuses on identifying the configuration of an unknown radiation source given its observed radiation signatures. The inverse problem is traditionally solved by finding the set of transport model parameter values that minimizes a weighted sum of the squared differences by channel between the observed signature and the signature pre dicted by the hypothesized model parameters. The weights are inversely proportional to the sum of the variances of the measurement and model errors at a given channel. The traditional implicit (often inaccurate) assumption is that the errors (differences between the modeled and observed radiation signatures) are independent across channels. Here, an alternative method that accounts for correlated errors between channels is described and illustrated using an inverse problem based on the combination of gam ma and neutron multiplicity counting measurements.
Error analysis of rigid body posture measurement system based on circular feature points
NASA Astrophysics Data System (ADS)
Huo, Ju; Cui, Jishan; Yang, Ning
2015-02-01
For the problem of determining pose parameters with monocular vision, using coplanar quadrilateral feature points on the target, an improved two-stage iterative algorithm is proposed to optimize the rigid body posture measurement model. A monocular vision rigid body posture measurement system is designed; a unified method is used experimentally to express the measured coordinates of each feature point in a common coordinate system; and the sources of error in the rigid body posture measurement system are analyzed theoretically and through simulation experiments. Combining the simulated errors with an experimental analysis of pose measurement accuracy, the comprehensive error of the measurement system is given, which offers theoretical guidance for improving measurement precision.
Compensation method for the alignment angle error of a gear axis in profile deviation measurement
NASA Astrophysics Data System (ADS)
Fang, Suping; Liu, Yongsheng; Wang, Huiyi; Taguchi, Tetsuya; Takeda, Ryuhei
2013-05-01
In the precision measurement of involute helical gears, the alignment angle error of a gear axis, which was caused by the assembly error of a gear measuring machine, will affect the measurement accuracy of profile deviation. A model of the involute helical gear is established under the condition that the alignment angle error of the gear axis exists. Based on the measurement theory of profile deviation, without changing the initial measurement method and data process of the gear measuring machine, a compensation method is proposed for the alignment angle error of the gear axis that is included in profile deviation measurement results. Using this method, the alignment angle error of the gear axis can be compensated for precisely. Some experiments that compare the residual alignment angle error of a gear axis after compensation for the initial alignment angle error were performed to verify the accuracy and feasibility of this method. Experimental results show that the residual alignment angle error of a gear axis included in the profile deviation measurement results is decreased by more than 85% after compensation, and this compensation method significantly improves the measurement accuracy of the profile deviation of involute helical gear.
Analysis of measured data of human body based on error correcting frequency
NASA Astrophysics Data System (ADS)
Jin, Aiyan; Peipei, Gao; Shang, Xiaomei
2014-04-01
Anthropometry measures all parts of the human body surface; the measured data are the basis for analysis and study of the human body, for establishing and modifying garment sizes, and for the design and operation of online clothing stores. In this paper, several groups of measured data are collected, and the data errors are analyzed using error frequencies and the analysis-of-variance method from mathematical statistics. The paper also addresses the accuracy of the measured data and the difficulty of measuring particular parts of the human body, investigates the causes of data errors, and summarizes the key points for minimizing errors. By analyzing the measured data on the basis of error frequency, it provides reference material for the development of the garment industry.
Mohammed, Sandra M.; Şentürk, Damla; Dalrymple, Lorien S.; Nguyen, Danh V.
2012-01-01
Infection and cardiovascular disease are leading causes of hospitalization and death in older patients on dialysis. Our recent work found an increase in the relative incidence of cardiovascular outcomes during the ~ 30 days after infection-related hospitalizations using the case series model, which adjusts for measured and unmeasured baseline confounders. However, a major challenge in modeling/assessing the infection-cardiovascular risk hypothesis is that the exact time of infection, or more generally “exposure,” onsets cannot be ascertained based on hospitalization data. Only imprecise markers of the timing of infection onsets are available. Although there is a large literature on measurement error in the predictors in regression modeling, to date there is no work on measurement error on the timing of a time-varying exposure to our knowledge. Thus, we propose a new method, the measurement error case series (MECS) models, to account for measurement error in time-varying exposure onsets. We characterized the general nature of bias resulting from estimation that ignores measurement error and proposed a bias-corrected estimation for the MECS models. We examined in detail the accuracy of the proposed method to estimate the relative incidence. Hospitalization data from United States Renal Data System, which captures nearly all (> 99%) patients with end-stage renal disease in the U.S. over time, is used to illustrate the proposed method. The results suggest that the estimate of the cardiovascular incidence following the 30 days after infections, a period where acute effects of infection on vascular endothelium may be most pronounced, is substantially attenuated in the presence of infection onset measurement error. PMID:23650442
NASA Astrophysics Data System (ADS)
Karimi, P.; Bastiaanssen, W. G. M.; Molden, D.
2012-11-01
Coping with the issue of water scarcity and growing competition for water among different sectors requires proper water management strategies and decision processes. A pre-requisite is a clear understanding of the basin hydrological processes, manageable and unmanageable water flows, the interaction with land use and opportunities to mitigate the negative effects and increase the benefits of water depletion on society. Currently, water professionals do not have a common framework that links hydrological flows to user groups of water and their benefits. The absence of a standard hydrological and water management summary is causing confusion and wrong decisions. The non-availability of water flow data is one of the underpinning reasons for not having operational water accounting systems for river basins in place. In this paper we introduce Water Accounting Plus (WA+), which is a new framework designed to provide explicit spatial information on water depletion and net withdrawal processes in complex river basins. The influence of land use on the water cycle is described explicitly by defining land use groups with common characteristics. Analogous to financial accounting, WA+ presents four sheets including (i) a resource base sheet, (ii) a consumption sheet, (iii) a productivity sheet, and (iv) a withdrawal sheet. Every sheet encompasses a set of indicators that summarize the overall water resources situation. The impact of external (e.g. climate change) and internal influences (e.g. infrastructure building) can be estimated by studying the changes in these WA+ indicators. Satellite measurements can be used for 3 out of the 4 sheets, but this is not a precondition for implementing the WA+ framework. Data from hydrological models and water allocation models can also be used as inputs to WA+.
Detecting bit-flip errors in a logical qubit using stabilizer measurements
Ristè, D.; Poletto, S.; Huang, M.-Z.; Bruno, A.; Vesterinen, V.; Saira, O.-P.; DiCarlo, L.
2015-01-01
Quantum data are susceptible to decoherence induced by the environment and to errors in the hardware processing it. A future fault-tolerant quantum computer will use quantum error correction to actively protect against both. In the smallest error correction codes, the information in one logical qubit is encoded in a two-dimensional subspace of a larger Hilbert space of multiple physical qubits. For each code, a set of non-demolition multi-qubit measurements, termed stabilizers, can discretize and signal physical qubit errors without collapsing the encoded information. Here using a five-qubit superconducting processor, we realize the two parity measurements comprising the stabilizers of the three-qubit repetition code protecting one logical qubit from physical bit-flip errors. While increased physical qubit coherence times and shorter quantum error correction blocks are required to actively safeguard the quantum information, this demonstration is a critical step towards larger codes based on multiple parity measurements. PMID:25923318
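The classical logic of the two parity checks above can be sketched without any quantum machinery: each single bit-flip produces a distinct syndrome, so a lookup table identifies the flipped qubit. A toy sketch (the real experiment measures these parities non-destructively on a superconducting processor; this is only the decoding arithmetic):

```python
def measure_stabilizers(bits):
    """Classical analogue of the two parity checks (Z1Z2, Z2Z3)
    of the three-qubit repetition code."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def decode_bit_flip(syndrome):
    """Map a syndrome to the most likely single bit-flip location
    (None means no error detected)."""
    lookup = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}
    return lookup[syndrome]

# encode logical 0 as 000, flip the middle qubit, then correct
state = [0, 1, 0]
flip = decode_bit_flip(measure_stabilizers(state))
if flip is not None:
    state[flip] ^= 1  # apply the correction
```

The key property mirrored here is that the syndrome reveals which qubit flipped without revealing the encoded logical value, since all-zero and all-one codewords give the same syndromes under the same errors.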
Error analysis in the measurement of average power with application to switching controllers
NASA Technical Reports Server (NTRS)
Maisel, J. E.
1979-01-01
The behavior of the power measurement error due to the frequency responses of first order transfer functions between the input sinusoidal voltage, input sinusoidal current and the signal multiplier was studied. It was concluded that this measurement error can be minimized if the frequency responses of the first order transfer functions are identical.
Comparing Graphical and Verbal Representations of Measurement Error in Test Score Reports
ERIC Educational Resources Information Center
Zwick, Rebecca; Zapata-Rivera, Diego; Hegarty, Mary
2014-01-01
Research has shown that many educators do not understand the terminology or displays used in test score reports and that measurement error is a particularly challenging concept. We investigated graphical and verbal methods of representing measurement error associated with individual student scores. We created four alternative score reports, each…
ERIC Educational Resources Information Center
Kim, ChangHwan; Tamborini, Christopher R.
2012-01-01
Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…
Schöberl, Iris; Kortekaas, Kim; Schöberl, Franz F; Kotrschal, Kurt
2015-12-01
Dog heart rate (HR) is characterized by a respiratory sinus arrhythmia, and therefore makes an automatic algorithm for error correction of HR measurements hard to apply. Here, we present a new method of error correction for HR data collected with the Polar system, including (1) visual inspection of the data, (2) a standardized way to decide with the aid of an algorithm whether or not a value is an outlier (i.e., "error"), and (3) the subsequent removal of this error from the data set. We applied our new error correction method to the HR data of 24 dogs and compared the uncorrected and corrected data, as well as the algorithm-supported visual error correction (AVEC) with the Polar error correction. The results showed that fewer values were identified as errors after AVEC than after the Polar error correction (p < .001). After AVEC, the HR standard deviation and variability (HRV; i.e., RMSSD, pNN50, and SDNN) were significantly greater than after correction by the Polar tool (all p < .001). Furthermore, the HR data strings with deleted values seemed to be closer to the original data than were those with inserted means. We concluded that our method of error correction is more suitable for dog HR and HR variability than is the customized Polar error correction, especially because AVEC decreases the likelihood of Type I errors, preserves the natural variability in HR, and does not lead to a time shift in the data. PMID:25540125
A measurement methodology for dynamic angle of sight errors in hardware-in-the-loop simulation
NASA Astrophysics Data System (ADS)
Zhang, Wen-pan; Wu, Jun-hui; Gan, Lin; Zhao, Hong-peng; Liang, Wei-wei
2015-10-01
In order to precisely measure the dynamic angle of sight for hardware-in-the-loop simulation, a dynamic measurement methodology was established and a measurement system was built. The errors and drifts, such as synchronization delay, CCD measurement error and drift, laser spot error on the diffuse reflection plane, and optics axis drift of the laser, were measured and analyzed. First, by analyzing and measuring the synchronization time between the laser and the timing of the control data, an error control method was devised that lowered the synchronization delay to 21 μs. Then, the relationship between the CCD device and the laser spot position was calibrated precisely and fitted by two-dimensional surface fitting; CCD measurement error and drift were controlled below 0.26 mrad. Next, the angular resolution was calculated, and the laser spot error on the diffuse reflection plane was estimated to be 0.065 mrad. Finally, the optics axis drift of the laser was analyzed and measured and did not exceed 0.06 mrad. The measurement results indicate that the maximum of the errors and drifts of the measurement methodology is less than 0.275 mrad. The methodology can satisfy the measurement of dynamic angle of sight with higher precision and larger scale.
NASA Astrophysics Data System (ADS)
Liu, Chien-Hung; Jywe, Wen-Yuh; Lee, Hau-Wei
2004-09-01
A new spindle error measurement system has been developed in this paper. It employs a specially developed rotational fixture with a built-in laser diode and four batteries to replace the precision reference master ball or cylinder used in the traditional method. Two measuring devices with two position sensitive detectors (one is designed for the measurement of the compound X-axis and Y-axis errors and the other is designed with a lens for the measurement of the tilt angular errors) are fixed on the machine table to detect the laser point position from the laser diode in the rotational fixture. When the spindle rotates, the spindle error changes the direction of the laser beam. The laser beam is then divided into two separated beams by a beam splitter. The two separated beams are projected onto the two measuring devices and are detected by two position sensitive detectors, respectively. Thus, the compound motion errors and the tilt angular errors of the spindle can be obtained. Theoretical analysis and experimental tests are presented in this paper to separate the compound errors into two radial errors and tilt angular errors. This system is proposed as a new instrument and method for spindle metrology.
Lundquist, J. K.; Churchfield, M. J.; Lee, S.; Clifton, A.
2015-02-23
vertical velocity. By three rotor diameters downwind, DBS-based assessments of wake wind speed deficits based on the stream-wise velocity can be relied on even within the near wake within 1.0 s^{-1} (or 15% of the hub-height inflow wind speed), and the cross-stream velocity error is reduced to 8% while vertical velocity estimates are compromised. Furthermore, measurements of inhomogeneous flow such as wind turbine wakes are susceptible to these errors, and interpretations of field observations should account for this uncertainty.
Error-measure for anisotropic grid-adaptation in turbulence-resolving simulations
NASA Astrophysics Data System (ADS)
Toosi, Siavash; Larsson, Johan
2015-11-01
Grid-adaptation requires an error-measure that identifies where the grid should be refined. In the case of turbulence-resolving simulations (DES, LES, DNS), a simple error-measure is the small-scale resolved energy, which scales with both the modeled subgrid-stresses and the numerical truncation errors in many situations. Since this is a scalar measure, it does not carry any information on the anisotropy of the optimal grid-refinement. The purpose of this work is to introduce a new error-measure for turbulence-resolving simulations that is capable of predicting nearly-optimal anisotropic grids. Turbulent channel flow at Reτ ~ 300 is used to assess the performance of the proposed error-measure. The formulation is geometrically general, applicable to any type of unstructured grid.
Compensation method for the alignment angle error in pitch deviation measurement
NASA Astrophysics Data System (ADS)
Liu, Yongsheng; Fang, Suping; Wang, Huiyi; Taguchi, Tetsuya; Takeda, Ryohei
2016-05-01
When measuring the tooth flank of an involute helical gear by gear measuring center (GMC), the alignment angle error of a gear axis, which was caused by the assembly error and manufacturing error of the GMC, will affect the measurement accuracy of pitch deviation of the gear tooth flank. Based on the model of the involute helical gear and the tooth flank measurement theory, a method is proposed to compensate the alignment angle error that is included in the measurement results of pitch deviation, without changing the initial measurement method of the GMC. Simulation experiments are done to verify the compensation method and the results show that after compensation, the alignment angle error of the gear axis included in measurement results of pitch deviation declines significantly, more than 90% of the alignment angle errors are compensated, and the residual alignment angle errors in pitch deviation measurement results are less than 0.1 μm. It shows that the proposed method can improve the measurement accuracy of the GMC when measuring the pitch deviation of involute helical gear.
Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar
Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas
2008-06-24
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
State-independent error-disturbance trade-off for measurement operators
NASA Astrophysics Data System (ADS)
Zhou, S. S.; Wu, Shengjun; Chau, H. F.
2016-05-01
In general, classical measurement statistics of a quantum measurement is disturbed by performing an additional incompatible quantum measurement beforehand. Using this observation, we introduce a state-independent definition of disturbance by relating it to the distinguishability problem between two classical statistical distributions - one resulting from a single quantum measurement and the other from a succession of two quantum measurements. Interestingly, we find an error-disturbance trade-off relation for any measurements in two-dimensional Hilbert space and for measurements with mutually unbiased bases in any finite-dimensional Hilbert space. This relation shows that error should be reduced to zero in order to minimize the sum of error and disturbance. We conjecture that a similar trade-off relation with a slightly relaxed definition of error can be generalized to any measurements in an arbitrary finite-dimensional Hilbert space.
Sources of strain-measurement error in flag-based extensometry
Luecke, W.E.; French, J.D.
1996-06-01
This paper examines the sources of error in strain measurement using flag-based extensometry that uses either scanning laser or electrooptical extensometers. These errors fall into two groups: errors in measuring the true gauge length of the specimen, which arise from the method of attachment of the flags, and errors arising from unanticipated distortions of the specimen during testing. The sources of errors of the first type include gauge-length errors from nonparallel flags and uncertainties in the true attachment point of the flag. During the test, strain-measurement errors of the second type can arise from horizontal translation of non-parallel flags, flag rotation that is induced by slippage, and flag motion from bending of the gauge length. Proper care can minimize the effect of these potential errors, so that flag-based extensometry can give accurate strain measurement, if appropriate precautions are taken. Measurements on silicon nitride indicate that the strain measurements are accurate to better than 10%.
ERIC Educational Resources Information Center
Uellendahl, Gail; Stephens, Diana; Buono, Lisa; Lewis, Rolla
2009-01-01
The need for greater accountability in school counseling practice is widely accepted within the profession. However, there are obstacles to making accountability efforts common practice among all school counselors. The Support Personnel Accountability Report Card (SPARC) is a tool that can be used to encourage and support these efforts. In this…
Calibration for the errors resulting from aberration in long focal length measurement
NASA Astrophysics Data System (ADS)
Yao, Jiang; Luo, Jia; He, Fan; Bai, Jian; Wang, Kaiwei; Hou, Xiyun; Hou, Changlun
2014-09-01
In this paper, a high-accuracy calibration method for errors resulting from aberration in long focal length measurement is presented. Generally, the Gaussian equation is used for calculation without consideration of the errors caused by aberration. However, these errors are the key factor affecting the accuracy of the measurement system for a large-aperture, long-focal-length lens. We introduce an effective way to calibrate the errors, with a detailed analysis of long focal length measurement based on divergent light and Talbot interferometry. Aberration errors are simulated in Zemax. We then achieve auto-correction with the help of Visual C++ software, and the experimental results reveal that the relative accuracy is better than 0.01%. By comparing the modified values with experimental results obtained in knife-edge testing, the proposed method is shown to be highly effective and reliable.
Measurement error analysis of Brillouin lidar system using F-P etalon and ICCD
NASA Astrophysics Data System (ADS)
Yao, Yuan; Niu, Qunjie; Liang, Kun
2016-09-01
A Brillouin lidar system using a Fabry-Pérot (F-P) etalon and an intensified charge-coupled device (ICCD) is capable of real-time remote measurement of properties such as the temperature of seawater. The measurement accuracy is determined by two key parameters, the Brillouin frequency shift and the Brillouin linewidth. Three major errors, namely the laser frequency instability, the calibration error of the F-P etalon and the random shot noise, are discussed. Theoretical analysis combined with simulation results showed that the laser and F-P etalon cause about 4 MHz of error in both the Brillouin shift and linewidth, and that random noise brings more error to the linewidth than to the frequency shift. A comprehensive and comparative analysis of the overall errors under various conditions proved that a colder ocean (10 °C) is more accurately measured with the Brillouin linewidth, and a warmer ocean (30 °C) is better measured with the Brillouin shift.
Errors in anthropometric measurements in neonates and infants.
Harrison, D; Harker, H; Heese, H D; Mann, M D; Berelowitz, J
2001-05-01
The accuracy of methods used in Cape Town hospitals and clinics for the measurement of weight, length and age in neonates and infants became suspect during a survey of 12 local authority and 5 private sector clinics in 1994-1995 (Harrison et al. 1998). A descriptive prospective study was carried out to determine the accuracy of these methods in neonates at four maternity hospitals (two public and two private) and in infants at four child health clinics of the Cape Town City Council. The main outcome measures were an assessment of three currently used methods of measuring crown-heel length, with a measuring board, a mat and a tape measure; a comparison of weight differences when an infant is fully clothed, naked and in napkin only; and the differences in age estimated by calendar dates and by a specially designed electronic calculator. The results showed that the current methods used to measure infants in Cape Town vary widely from one institution to another. Many measurements are inaccurate, and there is a real need for uniformity and accuracy. This can only be achieved by an effective education programme to ensure that accurate measurements are used in monitoring the health of young children in Cape Town and elsewhere. PMID:11885471
Quantifying Error in Survey Measures of School and Classroom Environments
ERIC Educational Resources Information Center
Schweig, Jonathan David
2014-01-01
Developing indicators that reflect important aspects of school and classroom environments has become central in a nationwide effort to develop comprehensive programs that measure teacher quality and effectiveness. Formulating teacher evaluation policy necessitates accurate and reliable methods for measuring these environmental variables. This…
Method of error analysis for phase-measuring algorithms applied to photoelasticity.
Quiroga, J A; González-Cano, A
1998-07-10
We present a method of error analysis for phase-measuring algorithms applied to photoelasticity. We calculate the contributions to the measurement error of the different elements of a circular polariscope as perturbations of the Jones matrices associated with each element. The Jones matrix of the real polariscope can then be calculated as the sum of the nominal matrix and a series of contributions that depend on the errors associated with each element separately. We apply this method to the analysis of phase-measuring algorithms for the determination of isoclinics and isochromatics, including comparisons with real measurements. PMID:18285900
NASA Astrophysics Data System (ADS)
Karimi, P.; Bastiaanssen, W. G. M.; Molden, D.
2013-07-01
Coping with water scarcity and growing competition for water among different sectors requires proper water management strategies and decision processes. A pre-requisite is a clear understanding of the basin hydrological processes, manageable and unmanageable water flows, the interaction with land use and opportunities to mitigate the negative effects and increase the benefits of water depletion on society. Currently, water professionals do not have a common framework that links depletion to user groups of water and their benefits. The absence of a standard hydrological and water management summary is causing confusion and wrong decisions. The non-availability of water flow data is one of the underpinning reasons for not having operational water accounting systems for river basins in place. In this paper, we introduce Water Accounting Plus (WA+), which is a new framework designed to provide explicit spatial information on water depletion and net withdrawal processes in complex river basins. The influence of land use and landscape evapotranspiration on the water cycle is described explicitly by defining land use groups with common characteristics. WA+ presents four sheets: (i) a resource base sheet, (ii) an evapotranspiration sheet, (iii) a productivity sheet, and (iv) a withdrawal sheet. Every sheet encompasses a set of indicators that summarise the overall water resources situation. The impact of external influences (e.g., climate change) and internal influences (e.g., infrastructure building) can be estimated by studying the changes in these WA+ indicators. Satellite measurements can be used to acquire a vast amount of the required data but are not a precondition for implementing the WA+ framework. Data from hydrological models and water allocation models can also be used as inputs to WA+.
Tilt error in cryospheric surface radiation measurements at high latitudes: a model study
NASA Astrophysics Data System (ADS)
Bogren, Wiley Steven; Faulkner Burkhart, John; Kylling, Arve
2016-03-01
We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can, respectively, introduce up to 2.7, 8.1, and 13.5% error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo. Simulations including a cloud layer demonstrate decreasing tilt error with increasing cloud optical depth.
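The dominance of the direct-beam component described above follows from simple geometry: tilting the sensor changes the incidence angle of the beam, scaling the measured irradiance by the ratio of cosines. A minimal sketch (Python), assuming the hypothetical worst case in which the sensor tilts directly toward the sun and the diffuse component is ignored, which is why these numbers slightly exceed the 2.7-13.5% quoted above:

```python
import math

def direct_tilt_error(sza_deg, tilt_deg):
    """Worst-case relative error in measured direct-beam irradiance when a
    cosine-response sensor is tilted toward the sun by tilt_deg degrees.
    Pure-beam geometry only; diffuse light reduces the error in practice."""
    sza = math.radians(sza_deg)
    inc = math.radians(sza_deg - tilt_deg)  # incidence angle on the tilted sensor
    return math.cos(inc) / math.cos(sza) - 1.0

for tilt in (1, 3, 5):
    err = 100 * direct_tilt_error(60, tilt)
    print(f"SZA 60 deg, tilt {tilt} deg: {err:.1f}% error")
```

At a 60° solar zenith angle this pure-beam bound gives roughly 3.0, 8.9, and 14.7% for 1, 3, and 5° of tilt, consistent in magnitude with the spectrally integrated values reported above.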
NASA Astrophysics Data System (ADS)
Gu, Honggang; Zhang, Chuanwei; Jiang, Hao; Chen, Xiuguo; Li, Weiqi; Liu, Shiyuan
2015-06-01
Dual-rotating compensator Mueller matrix ellipsometry (DRC-MME) has been designed and applied as a powerful tool for the characterization of thin films and nanostructures. The compensators are indispensable optical components, and their performance significantly affects the precision and accuracy of the DRC-MME. Biplates made of birefringent crystals are commonly used as compensators in the DRC-MME, and their optical axes invariably have tilt errors due to imperfect fabrication and improper installation in practice. The axis tilt error between the rotation axis and the light beam leads to a continuous vibration in the retardance of the rotating biplate, which further results in significant measurement errors in the Mueller matrix. In this paper, we propose a simple but valid formula for the retardance calculation under arbitrary tilt angle and azimuth angle to analyze the axis tilt errors in biplates. We further study the relations between the measurement errors in the Mueller matrix and the biplate axis tilt through simulations and experiments. We find that the axis tilt errors mainly affect the cross-talk from linear polarization to circular polarization and vice versa. In addition, the measurement errors in the Mueller matrix increase superlinearly with the axis tilt errors in biplates, and the optimal retardance for reducing these errors is about 80°. This work can be expected to provide guidance for the selection, installation and commissioning of the biplate compensator in DRC-MME design.
Slide error measurement of a large-scale ultra-precision lathe
NASA Astrophysics Data System (ADS)
Lee, Jung Chul; Gao, Wei; Noh, Young Jin; Hwang, Joo Ho; Oh, Jeoung Seok; Park, Chun Hong
2010-08-01
This paper presents the measurement of the slide error of a large-scale ultra-precision lathe with an effective fabricating length of 2000 mm. A cylinder workpiece with a diameter of 320 mm and a length of 1500 mm was mounted on the spindle of the lathe with its rotational axis along the Z-direction. Two capacitive displacement probes with a measurement range of 100 μm were mounted on the slide of the lathe with its moving axis along the Z-direction. The displacement probes were placed on the two sides of the cylinder workpiece over the horizontal plane (XZ-plane). The cylinder workpiece, which was rotated by the spindle, was scanned by the displacement probes moved by the slide. The X-directional horizontal slide error can be accurately evaluated from the probe outputs by using a proposed rotating-reversal method, which separates the influences of the form error of the cylinder workpiece and the rotational error of the spindle. In addition to the out-of-straightness error component, the parallelism error component with respect to the spindle axis can also be evaluated. The out-of-straightness error component and the parallelism error component of the slide error were measured to be 3.3 μm and 1.68 arc-seconds, respectively, over a slide travel range of 1450.08 mm.
Mints, M.Ya.; Chinkov, V.N.
1995-09-01
Rational algorithms for measuring the harmonic coefficient in microprocessor instruments for measuring nonlinear distortions, based on digital processing of the codes of the instantaneous values of the signal being investigated, are described, and the errors of such instruments are derived.
Modal Correction Method For Dynamically Induced Errors In Wind-Tunnel Model Attitude Measurements
NASA Technical Reports Server (NTRS)
Buehrle, R. D.; Young, C. P., Jr.
1995-01-01
This paper describes a method for correcting the dynamically induced bias errors in wind tunnel model attitude measurements using measured modal properties of the model system. At NASA Langley Research Center, the predominant instrumentation used to measure model attitude is a servo-accelerometer device that senses the model attitude with respect to the local vertical. Under smooth wind tunnel operating conditions, this inertial device can measure the model attitude with an accuracy of 0.01 degree. During wind tunnel tests when the model is responding at high dynamic amplitudes, the inertial device also senses the centrifugal acceleration associated with model vibration. This centrifugal acceleration results in a bias error in the model attitude measurement. A study of the response of a cantilevered model system to a simulated dynamic environment shows significant bias error in the model attitude measurement can occur and is vibration mode and amplitude dependent. For each vibration mode contributing to the bias error, the error is estimated from the measured modal properties and tangential accelerations at the model attitude device. Linear superposition is used to combine the bias estimates for individual modes to determine the overall bias error as a function of time. The modal correction model predicts the bias error to a high degree of accuracy for the vibration modes characterized in the simulated dynamic environment.
NASA Astrophysics Data System (ADS)
Gómez de León, F. C.; Meroño Pérez, P. A.
2010-07-01
The traditional method for measuring the velocity and the angular vibration in the shaft of rotating machines using incremental encoders is based on counting the pulses at given time intervals. This method is generically called the time interval measurement system (TIMS). A variant of this method that we have developed in this work consists of measuring the corresponding time of each pulse from the encoder and sampling the signal by means of an A/D converter as if it were an analog signal, that is to say, in discrete time. For this reason, we have denominated this method the discrete time interval measurement system (DTIMS). This measurement system provides a substantial improvement in precision and frequency resolution compared with the traditional method of counting pulses. In addition, this method permits modification of the width of some pulses in order to obtain a mark-phase on every lap. This paper explains the theoretical fundamentals of the DTIMS and its application for measuring the angular vibrations of rotating machines. It also presents the required relationship between the sampling rate of the signal, the number of pulses of the encoder and the rotating velocity in order to obtain the required resolution and to delimit the methodological errors in the measurement.
Ambient Temperature Changes and the Impact to Time Measurement Error
NASA Astrophysics Data System (ADS)
Ogrizovic, V.; Gucevic, J.; Delcev, S.
2012-12-01
Measurements in geodetic astronomy are mainly performed outdoors at night, when the temperature often decreases very quickly. Time-keeping during a measuring session is provided by collecting UTC time ticks from a GPS receiver and transferring them to a laptop computer. An interrupt handler routine processes the received UTC impulses in real time and calculates the clock parameters. The characteristics of the computer's quartz clock are influenced by temperature changes of the environment. We exposed the laptop to different environmental temperature conditions and calculated the clock parameters for each environmental model. The results show that a laptop used for time-keeping in outdoor measurements should be kept in a stable temperature environment, at temperatures near 20 °C.
Exhaled Nitric Oxide: Sources of Error in Offline Measurement
LINN, WILLIAM S.; AVILA, MARISELA; GONG, HENRY
2007-01-01
Delayed offline measurement of exhaled nitric oxide (eNO), although useful in environmental and clinical research, is limited by the instability of stored breath samples. The authors characterized sources of instability with the goal of minimizing them. Breath and other air samples were stored under various conditions, and NO levels were measured repeatedly over 1–7 d. Concentration change rates varied positively with temperature and negatively with initial NO level, thus “stable” levels reflected a balance of NO-adding and NO-removing processes. Storage under refrigeration for a standardized period of time can optimize offline eNO measurement, although samples at room temperature are effectively stable for several hours. PMID:16268114
Error Sources in the ETA Energy Analyzer Measurement
Nexsen, W E
2004-12-13
At present the ETA beam energy as measured by the ETA energy analyzer and the DARHT spectrometer differ by approximately 12%. This discrepancy is due to two sources: an overestimate of the effective length of the ETA energy analyzer bending field, and data reduction methods that are not valid. The discrepancy can be eliminated if we return to the original process of measuring the angular deflection of the beam and use a value of 43.2 cm for the effective length of the axial field profile.
Improving surface energy balance closure by reducing errors in soil heat flux measurement
Technology Transfer Automated Retrieval System (TEKTRAN)
The flux plate method is the most commonly employed method for measuring soil heat flux (G) in surface energy balance studies. Although relatively simple to use, the flux plate method is susceptible to significant errors. Two of the most common errors are heat flow divergence around the plate and fa...
Measurement, Sampling, and Equating Errors in Large-Scale Assessments
ERIC Educational Resources Information Center
Wu, Margaret
2010-01-01
In large-scale assessments, such as state-wide testing programs, national sample-based assessments, and international comparative studies, there are many steps involved in the measurement and reporting of student achievement. There are always sources of inaccuracies in each of the steps. It is of interest to identify the source and magnitude of…
Some aspects of error influences in interferometric measurements of optical surface forms
NASA Astrophysics Data System (ADS)
Schulz, M.; Wiegmann, A.
2011-05-01
Interferometry is often used to measure the form of optical surfaces. While interferometry is generally expected to give high-accuracy results, a variety of error influences exist which have to be considered. Some typical error influences which are often underestimated are discussed in this paper. In flatness metrology, the main error influences are imperfections of the reference surfaces, the specimen support, or cavity influences. For non-flat surfaces such as aspheres or free-form surfaces, the influence of errors in the determination of the lateral coordinates becomes particularly important. Sub-aperture interferometry is subject to stitching errors, which can be reduced by Traceable Multi Sensor sub-aperture methods, in which the influence of the imaging system of the interferometer may dominate the error budget. Similar considerations apply to other types of interferometers.
Measurement error of surface-mounted fiber Bragg grating temperature sensor.
Yi, Liu; Zude, Zhou; Erlong, Zhang; Jun, Zhang; Yuegang, Tan; Mingyao, Liu
2014-06-01
Fiber Bragg grating (FBG) sensors are extensively used to measure surface temperatures. However, the temperature gradient effect of a surface-mounted FBG sensor is often overlooked. A surface-type temperature standard setup was prepared in this study to investigate the measurement errors of FBG temperature sensors. Experimental results show that the measurement error of a bare fiber sensor has an obvious linear relationship with surface temperature, with the largest error reaching 8.1 °C. Sensors packaged with heat-conduction grease generate smaller measurement errors than bare FBG sensors and commercial thermal resistors. Thus, high-quality packaging methods and proper modes of fixation can effectively improve the accuracy of FBG sensors in measuring surface temperatures. PMID:24985840
Measurement of size error in industrial CT system with Calotte cube
NASA Astrophysics Data System (ADS)
Wang, DaoDang; Chen, XiXi; Wang, FuMin; Shi, YuShu; Kong, Ming; Zhao, Jun
2015-02-01
A measurement method based on a calotte cube is proposed to realize high-precision calibration of size error in an industrial computed tomography (CT) system. Using the traceability of the calotte cube, the repeatability error, probing error and length measurement error of the industrial CT system were measured in order to increase the acceptance of CT as a metrological method. The main error factors, including material absorption, projection number and integration time, were studied in detail. Experimental results show that the proposed method provides a feasible way to measure the size error of an industrial CT system. Compared with measurement results obtained with an invar 27-sphere gauge, an accuracy on the order of microns is achieved with the calotte cube. Unlike the invar 27-sphere gauge, the calotte cube is made of metallic titanium, which can introduce a beam-hardening effect; the influence of material absorption and structural specificity on the measurement was therefore studied, providing a useful reference for the measurement of metallic samples.
Tilt Error in Cryospheric Surface Radiation Measurements at High Latitudes: A Model Study
NASA Astrophysics Data System (ADS)
Bogren, W.; Kylling, A.; Burkhart, J. F.
2015-12-01
We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 nm to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can, respectively, introduce up to 2.6, 7.7, and 12.8% error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo.
Bateson, Thomas F; Wright, J Michael
2010-08-01
Environmental epidemiologic studies are often hierarchical in nature when they estimate individuals' personal exposures using ambient metrics. Local samples are indirect surrogate measures of the true local pollutant concentrations that determine true personal exposures. These ambient metrics include classical-type nondifferential measurement error. The authors simulated subjects' true exposures and their corresponding surrogate exposures as the mean of local samples and assessed the amount of bias attributable to classical and Berkson measurement error on odds ratios, assuming that the logit of risk depends on true individual-level exposure. The authors calibrated surrogate exposures using scalar transformation functions based on observed within- and between-locality variances and compared regression-calibrated results with naive results using surrogate exposures. The authors further assessed the performance of regression calibration in the presence of Berkson-type error. Following calibration, bias due to classical-type measurement error, which produced as much as 50% attenuation in naive regression estimates, was eliminated. Berkson-type error attenuated logistic regression results by less than 1%. This regression calibration method reduces the effects of the classical measurement error that is typical of epidemiologic studies using multiple local samples as indirect surrogates for unobserved individual exposures. Berkson-type error did not alter the performance of regression calibration. This regression calibration method does not require a supplemental validation study to compute an attenuation factor. PMID:20573838
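The attenuation-and-calibration mechanism described above can be sketched in a simplified linear setting (the study itself works with logistic regression and odds ratios): a surrogate W = X + U with classical error attenuates the naive slope by the reliability ratio λ = σ²x/(σ²x + σ²u), and replacing W with the calibrated value E[X|W] recovers the true slope. All numerical values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
beta = 0.8                       # true exposure effect (illustrative)
sigma_x, sigma_u = 1.0, 1.0      # between-subject and measurement-error SDs

x = rng.normal(0, sigma_x, n)             # true personal exposure
w = x + rng.normal(0, sigma_u, n)         # surrogate with classical error
y = beta * x + rng.normal(0, 0.5, n)      # outcome depends on true exposure

naive = np.polyfit(w, y, 1)[0]            # attenuated slope, ~ beta * lambda

# Regression calibration: replace w by E[X | W] = mu + lambda * (w - mu),
# with lambda built from the (here, known) variance components.
lam = sigma_x**2 / (sigma_x**2 + sigma_u**2)
x_hat = w.mean() + lam * (w - w.mean())
calibrated = np.polyfit(x_hat, y, 1)[0]   # ~ beta, attenuation removed
```

Here λ = 0.5, so the naive slope comes out near 0.4 (a 50% attenuation, matching the magnitude quoted in the abstract) while the calibrated slope returns to about 0.8; in the paper's setting the variance components are estimated from the observed within- and between-locality variances rather than assumed known.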
Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error
NASA Astrophysics Data System (ADS)
Miller, Austin
In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.
Stray signal requirements for compact range reflectors based on RCS measurement errors
NASA Technical Reports Server (NTRS)
Lee, Teh-Hong; Burnside, Walter D.
1991-01-01
The authors present a performance criterion for compact range reflectors such that their edge diffracted stray signal levels meet a reasonable radar cross section (RCS) measurement error requirement. It is shown by example that one of the significant error sources is the diffracted fields emanating from the edges or junctions of the reflector. This measurement error is demonstrated by placing a diagonal square flat plate in the target zone and rotating it to appropriate angles. These angles are determined by bisecting the plane wave and stray signal directions. This results in a peak bistatic measurement of the edge diffracted stray signal. It is proposed that the diagonal flat plate be used to evaluate new reflector designs as well as existing systems. A reasonable stray signal performance level has been developed so that new reflector systems can be characterized in terms of an RCS measurement error requirement.
Image pre-filtering for measurement error reduction in digital image correlation
NASA Astrophysics Data System (ADS)
Zhou, Yihao; Sun, Chen; Song, Yuntao; Chen, Jubing
2015-02-01
In digital image correlation, the sub-pixel intensity interpolation causes a systematic error in the measured displacements. The error increases toward the high-frequency components of the speckle pattern. In practice, a captured image is usually corrupted by additive white noise. The noise introduces additional energy in the high frequencies and therefore raises the systematic error. Meanwhile, the noise also elevates the random error, which increases with the noise power. In order to reduce the systematic error and the random error of the measurements, we apply a pre-filtering to the images prior to the correlation so that the high-frequency contents are suppressed. Two spatial-domain filters (binomial and Gaussian) and two frequency-domain filters (Butterworth and Wiener) are tested on speckle images undergoing both simulated and real-world translations. By evaluating the errors of the various combinations of speckle patterns, interpolators, noise levels, and filter configurations, we come to the following conclusions. All four filters are able to reduce the systematic error. Meanwhile, the random error can also be reduced if the signal power is mainly distributed around DC. For high-frequency speckle patterns, the low-pass filters (binomial, Gaussian and Butterworth) slightly increase the random error, and the Butterworth filter produces the lowest random error among them. By using a Wiener filter with over-estimated noise power, the random error can be reduced, but the resultant systematic error is higher than that of the low-pass filters. In general, the Butterworth filter is recommended for error reduction due to its flexibility of passband selection and maximal preservation of the allowed frequencies. The binomial filter enables efficient implementation and thus becomes a good option if computational cost is a critical issue. When used together with pre-filtering, the B-spline interpolator produces lower systematic error than the bicubic interpolator and a similar level of random error.
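A minimal sketch of the spatial-domain pre-filtering step, using a separable 3x3 binomial kernel of the kind named above (the image size and noise level are illustrative assumptions, not the paper's test conditions):

```python
import numpy as np

def binomial_filter(img):
    """Separable 3x3 binomial low-pass filter ([1, 2, 1]/4 applied along each
    axis): suppresses high-frequency image content, and with it part of the
    additive noise, before the correlation step."""
    k = np.array([0.25, 0.5, 0.25])
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

rng = np.random.default_rng(1)
speckle = rng.random((64, 64))                    # synthetic speckle pattern
noisy = speckle + rng.normal(0, 0.05, speckle.shape)  # additive white noise
smooth = binomial_filter(noisy)
# Adjacent-pixel differences shrink after filtering, reflecting the
# suppression of high-frequency energy (signal and noise alike).
```

Both images would then be filtered identically before correlation; the trade-off discussed above is that the same low-pass action that removes noise also removes high-frequency speckle content that carries matching information.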
Li, Tao; Yuan, Gannan; Li, Wang
2016-01-01
The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition. PMID:26999130
Errors in measuring sagittal arch kinematics of the human foot with digital fluoroscopy.
Wearing, Scott C; Smeathers, James E; Yates, Bede; Sullivan, Patrick M; Urry, Stephen R; Dubois, Philip
2005-04-01
Although fluoroscopy has been used to evaluate motion of the foot during gait, the accuracy and precision of fluoroscopic measures of osseous structures of the foot have not been reported in the literature. This study reports on a series of experiments that quantify the magnitude and sources of error involved in digital fluoroscopic measurements of the medial longitudinal arch. The findings indicate that with a global distortion correction procedure, errors arising from image distortion can be reduced threefold, to 0.2 degrees for angular measurements and to 0.1 mm for linear measures. The limits of agreement for repeated angular measures of the calcaneus and first metatarsal were +/-0.5 degrees and +/-0.6 degrees, indicating that measurement error was primarily associated with the manual process of digitisation. While the magnitude of the residual error constitutes about +/-2.5% of the expected 20 degrees of movement of the calcaneus and first metatarsal, out-of-plane rotation may potentially contribute the greatest source of error in fluoroscopic measures of the foot. However, even at the extremes of angular displacement (15 degrees) reported for the calcaneus during running gait, the root mean square (RMS) error was only about 1 degree. Thus, errors associated with fluoroscopic imaging of the foot appear to be negligible when compared to those arising from skin movement artefact, which typically range between 1.5 and 4 mm (equating to errors of 2 degrees to 17 degrees for angular measures). Fluoroscopy, therefore, may be a useful technique for analysing the sagittal movement of the medial longitudinal arch during the contact phase of walking. PMID:15760749
On the errors in measuring the particle density by the light absorption method
Ochkin, V. N.
2015-04-15
The accuracy of absorption measurements of the density of particles in a given quantum state as a function of the light absorption coefficient is analyzed. Errors caused by the finite accuracy in measuring the intensity of the light passing through a medium in the presence of different types of noise in the recorded signal are considered. Optimal values of the absorption coefficient and the factors capable of multiplying errors when deviating from these values are determined.
Kim, Yangjin; Hibino, Kenichi; Sugita, Naohiko; Mitsuishi, Mamoru
2016-08-10
In this research, the susceptibility of the phase-shifting algorithms to the random intensity error is formulated and estimated. The susceptibility of the random intensity error of conventional windowed phase-shifting algorithms is discussed, and the 7N-6 phase-shifting algorithm is developed to minimize the random intensity error using the characteristic polynomial theory. Finally, the surface shape of the transparent wedge plate is measured using a wavelength-tuning Fizeau interferometer and the 7N-6 algorithm. The experimental results indicate that the surface shape measurement accuracy for the transparent plate is 2.5 nm. PMID:27534496
NASA Technical Reports Server (NTRS)
Houser, Donald R.; Oswald, Fred B.; Valco, Mark J.; Drago, Raymond J.; Lenski, Joseph W., Jr.
1994-01-01
Measured sound power data from eight different spur, single and double helical gear designs are compared with predictions of transmission error by the Load Distribution Program. The sound power data were taken from the recent Army-funded Advanced Rotorcraft Transmission project. Tests were conducted in the NASA gear noise rig. Both the test data and the transmission error predictions are presented for each harmonic of mesh frequency at several operating conditions. In general, the transmission error predictions compare favorably with the measured noise levels.
Pekmezci, Murat; Karaeminogulları, Oguz; Acaroglu, Emre; Yazıcı, Muharrem; Cil, Akın; Pijnenburg, Bas; Genç, Yasemin; Oner, Fethullah C.
2007-01-01
The Cobb method has been shown to be the most reliable technique, with a reasonable measurement error, for determining kyphosis in fresh fractures of young patients. However, measurement errors may be higher for elderly patients because it may be difficult to determine the landmarks due to osteopenia and degenerative changes. The aim of this study is to investigate the intrinsic error of different techniques used in the evaluation of local sagittal plane deformity caused by OVCF. Lateral X-rays of OVCF patients were randomly selected. The patient group was composed of 28 females and 7 males, and the mean age was 62.7 (55–75) years. The kyphosis angle and the vertebral body height were analyzed to reveal the severity of sagittal plane deformity. Kyphotic deformity was measured using four different techniques, and the vertebral body heights (VBH) were measured at three different points. The mean intra-observer agreement interval ranged from ±7.1 to ±9.3° for the kyphosis angle measurement techniques and from ±4.5 to ±6.5 mm for the VBH measurement techniques. The mean interobserver agreement interval ranged from ±8.2 to ±11.1° for the kyphosis angle and from ±4.5 to ±6.5 mm for the VBH measurement techniques. This study revealed that although the intra- and interobserver agreement intervals were similar for all techniques, they were still wider than expected. These wide measurement-error intervals should be taken into account when interpreting the results of correction of local sagittal plane deformities in OVCF patients after surgical procedures such as vertebral augmentation techniques. PMID:17912558
Mendez, Michelle A; Popkin, Barry M; Buckland, Genevieve; Schroder, Helmut; Amiano, Pilar; Barricarte, Aurelio; Huerta, José-María; Quirós, José R; Sánchez, María-José; González, Carlos A
2011-02-15
Misreporting characterized by the reporting of implausible energy intakes may undermine the valid estimation of diet-disease relations, but the methods to best identify and account for misreporting are unknown. The present study compared how alternate approaches affected associations between selected dietary factors and body mass index (BMI) by using data from the European Prospective Investigation into Cancer and Nutrition-Spain. A total of 24,332 women and 15,061 men 29-65 years of age recruited from 1992 to 1996 for whom measured height and weight and validated diet history data were available were included. Misreporters were identified on the basis of disparities between reported energy intakes and estimated requirements calculated using the original Goldberg method and 2 alternatives: one that substituted basal metabolic rate equations that are more valid at higher BMIs and another that used doubly labeled water-predicted total energy expenditure equations. Compared with results obtained using the original method, underreporting was considerably lower and overreporting higher with alternative methods, which were highly concordant. Accounting for misreporters with all methods yielded diet-BMI relations that were more consistent with expectations; alternative methods often strengthened associations. For example, among women, multivariable-adjusted differences in BMI for the highest versus lowest vegetable intake tertile (β = 0.37 (standard error, 0.07)) were neutral after adjusting with the original method (β = 0.01 (standard error, 0.07)) and negative using the predicted total energy expenditure method with stringent cutoffs (β = -0.15 (standard error, 0.07)). Alternative methods may yield more valid associations between diet and obesity-related outcomes. PMID:21242302
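The Goldberg-type screening described above can be sketched as follows; the BMR value and the cutoffs (1.1 and 2.4) are illustrative assumptions, not the study's exact equations or limits:

```python
# Illustrative sketch (not the study's exact procedure) of flagging energy-intake
# misreporting with a Goldberg-type cutoff: the ratio of reported energy intake
# (EI) to estimated basal metabolic rate (BMR) is compared with assumed bounds.

def goldberg_flag(ei_kcal, bmr_kcal, lower=1.1, upper=2.4):
    """Classify a reported intake as 'under', 'plausible', or 'over'."""
    ratio = ei_kcal / bmr_kcal
    if ratio < lower:
        return "under"
    if ratio > upper:
        return "over"
    return "plausible"

# Example: a reported 1200 kcal/day against an estimated BMR of 1400 kcal/day
print(goldberg_flag(1200, 1400))   # ratio ~0.86 -> flagged as underreporting
print(goldberg_flag(2500, 1400))   # ratio ~1.79 -> plausible
```

In practice the cutoffs depend on sample size, number of measurement days and the assumed physical activity level, which is why the paper compares alternative BMR and energy-expenditure equations.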
Error Analysis of Cine Phase Contrast MRI Velocity Measurements used for Strain Calculation
Jensen, Elisabeth R.; Morrow, Duane A.; Felmlee, Joel P.; Odegard, Gregory M.; Kaufman, Kenton R.
2014-01-01
Cine Phase Contrast (CPC) MRI offers unique insight into localized skeletal muscle behavior by providing the ability to quantify muscle strain distribution during cyclic motion. Muscle strain is obtained by temporally integrating and spatially differentiating CPC-encoded velocity. The aim of this study was to quantify measurement accuracy and precision and to describe error propagation into displacement and strain. Using an MRI-compatible jig to move a B-gel phantom within a 1.5 T MRI bore, CPC-encoded velocities were collected. The three orthogonal encoding gradients (through plane, frequency, and phase) were evaluated independently in post-processing. Two systematic error types were corrected: eddy current-induced bias and calibration-type error. Measurement accuracy and precision were quantified before and after removal of systematic error. Through-plane- and frequency-encoded data accuracy was within 0.4 mm/s after removal of systematic error – a 70% improvement over the raw data. Corrected phase-encoded data accuracy was within 1.3 mm/s. Measured random error was between 1 to 1.4 mm/s, which followed the theoretical prediction. Propagation of random measurement error into displacement and strain was found to depend on the number of tracked time segments, time segment duration, mesh size, and dimensional order. To verify this, theoretical predictions were compared to experimentally calculated displacement and strain error. For the parameters tested, experimental and theoretical results aligned well. Random strain error approximately halved with a two-fold mesh size increase, as predicted. Displacement and strain accuracy were within 2.6 mm and 3.3%, respectively. These results can be used to predict the accuracy and precision of displacement and strain in user-specific applications. PMID:25433567
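The dependence of integrated displacement error on the number of tracked time steps can be sketched with a toy simulation; the noise level and time step below are illustrative values, not the paper's:

```python
# Sketch of how random velocity-measurement error propagates into temporally
# integrated displacement: with n_steps independent noise samples of standard
# deviation sigma_v and time step dt, the displacement error standard deviation
# grows as sigma_v * dt * sqrt(n_steps). All numbers are illustrative.
import math
import random

random.seed(0)
sigma_v, dt, n_steps, n_trials = 1.2, 0.02, 100, 5000   # mm/s, s (assumed)

disp_errors = []
for _ in range(n_trials):
    # integrate pure velocity noise over n_steps to get one displacement error
    disp_errors.append(sum(random.gauss(0.0, sigma_v) for _ in range(n_steps)) * dt)

empirical = math.sqrt(sum(e * e for e in disp_errors) / n_trials)
predicted = sigma_v * dt * math.sqrt(n_steps)            # sqrt-of-time growth
print(round(predicted, 3))  # 0.24 mm; the empirical RMS lands close to this
```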
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-01
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative measurement errors on DEM construction and on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
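A minimal sketch of the multiplicative error model y = x(1 + ε), showing the defining property that the error spread scales with the true value; the 2% relative error is an assumed figure, not a quoted instrument specification:

```python
# Multiplicative error model sketch: y = x * (1 + eps). Unlike the additive
# model y = x + eps, the standard deviation of the measurements grows in
# proportion to the true value x. Values are illustrative only.
import random
import statistics

random.seed(1)
rel_sigma = 0.02  # assumed 2 % relative (multiplicative) error

def measure(true_value):
    return true_value * (1.0 + random.gauss(0.0, rel_sigma))

results = {}
for x in (10.0, 100.0, 1000.0):
    samples = [measure(x) for _ in range(50000)]
    results[x] = statistics.stdev(samples)
    print(x, round(results[x], 2))  # spread grows roughly in proportion to x
```

This scaling is why a DEM adjustment that assumes additive errors misweights large-elevation LiDAR returns.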
NASA Astrophysics Data System (ADS)
Yang, Liangen; Wang, Xuanze; Lv, Wei
2011-05-01
A displacement sensor with controlled measuring force, together with its error analysis and precision verification, is discussed in this paper. The displacement sensor consists of a high-resolution electric induction transducer and a voice coil motor (VCM). The measuring principles, the structure, the method for enlarging the measuring range, and the signal processing of the sensor are discussed. The main error sources are analyzed, including parallelism error and incline of the framework caused by unequal lengths of the leaf springs, rigidity of the measuring rods, shape error of the stylus, friction between the iron core and other parts, damping of the leaf springs, variation of voltage, linearity of the induction transducer, resolution and stability. A measuring system for surface topography with a large measuring range is constructed based on the displacement sensor and a 2D moving platform, and its measuring precision and stability are verified. The measuring force of the sensor during surface topography measurement can be controlled at the μN level and hardly changes. The system has been used in measurements of bearing balls, bullet marks, etc. It has a measuring range of up to 2 mm and nm-level precision.
Nystrom, E.A.; Oberg, K.A.; Rehmann, C.R.
2002-01-01
Acoustic Doppler current profilers (ADCPs) provide a promising method for measuring surface-water turbulence because they can provide data from a large spatial range in a relatively short time with relative ease. Some potential sources of errors in turbulence measurements made with ADCPs include inaccuracy of Doppler-shift measurements, poor temporal and spatial measurement resolution, and inaccuracy of multi-dimensional velocities resolved from one-dimensional velocities measured at separate locations. Results from laboratory measurements of mean velocity and turbulence statistics made with two pulse-coherent ADCPs in 0.87 meters of water are used to illustrate several inherent sources of error in ADCP turbulence measurements. Results show that processing algorithms and beam configurations have important effects on turbulence measurements. ADCPs can provide reasonable estimates of many turbulence parameters; however, the accuracy of turbulence measurements made with commercially available ADCPs is often poor in comparison to standard measurement techniques.
Measurement error in environmental epidemiology and the shape of exposure-response curves.
Rhomberg, Lorenz R; Chandalia, Juhi K; Long, Christopher M; Goodman, Julie E
2011-09-01
Both classical and Berkson exposure measurement errors as encountered in environmental epidemiology data can result in biases in fitted exposure-response relationships that are large enough to affect the interpretation and use of the apparent exposure-response shapes in risk assessment applications. A variety of sources of potential measurement error exist in the process of estimating individual exposures to environmental contaminants, and the authors review the evaluation in the literature of the magnitudes and patterns of exposure measurement errors that prevail in actual practice. It is well known among statisticians that random errors in the values of independent variables (such as exposure in exposure-response curves) may tend to bias regression results. For increasing curves, this effect tends to flatten and apparently linearize what is in truth a steeper and perhaps more curvilinear or even threshold-bearing relationship. The degree of bias is tied to the magnitude of the measurement error in the independent variables. It has been shown that the degree of bias known to apply to actual studies is sufficient to produce a false linear result, and that although nonparametric smoothing and other error-mitigating techniques may assist in identifying a threshold, they do not guarantee detection of a threshold. The consequences of this could be great, as it could lead to a misallocation of resources towards regulations that do not offer any benefit to public health. PMID:21823979
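The attenuation effect described above can be reproduced in a few lines; the threshold-shaped response and the error variance are illustrative assumptions:

```python
# Classical measurement error attenuates a fitted exposure-response slope:
# regressing the outcome on a noisy exposure w = x + u flattens the slope by
# roughly var(x) / (var(x) + var(u)), and smooths a true threshold response.
import random

random.seed(2)
n = 100000
xs = [random.uniform(0.0, 10.0) for _ in range(n)]
ys = [max(0.0, x - 5.0) for x in xs]            # true threshold-shaped response
ws = [x + random.gauss(0.0, 2.0) for x in xs]   # classical error, sd = 2 (assumed)

def ols_slope(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    var = sum((a - mu) ** 2 for a in u)
    return cov / var

true_slope = ols_slope(xs, ys)
noisy_slope = ols_slope(ws, ys)
# theoretical attenuation factor: var(x) / (var(x) + var(u))
attenuation = (100.0 / 12.0) / (100.0 / 12.0 + 4.0)
print(round(true_slope, 2), round(noisy_slope, 2), round(attenuation, 2))
```

The fitted slope on the noisy exposure shrinks by roughly the attenuation factor, which is how a threshold-bearing relationship can masquerade as a shallow linear one.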
Theoretical and Experimental Errors for In Situ Measurements of Plant Water Potential
Shackel, Kenneth A.
1984-01-01
Errors in psychrometrically determined values of leaf water potential caused by tissue resistance to water vapor exchange and by lack of thermal equilibrium were evaluated using commercial in situ psychrometers (Wescor Inc., Logan, UT) on leaves of Tradescantia virginiana (L.). Theoretical errors in the dewpoint method of operation for these sensors were demonstrated. After correction for these errors, in situ measurements of leaf water potential indicated substantial errors caused by tissue resistance to water vapor exchange (4 to 6% reduction in apparent water potential per second of cooling time used) resulting from humidity depletions in the psychrometer chamber during the Peltier condensation process. These errors were avoided by use of a modified procedure for dewpoint measurement. Large changes in apparent water potential were caused by leaf and psychrometer exposure to moderate levels of irradiance. These changes were correlated with relatively small shifts in psychrometer zero offsets (−0.6 to −1.0 megapascals per microvolt), indicating substantial errors caused by nonisothermal conditions between the leaf and the psychrometer. Explicit correction for these errors is not possible with the current psychrometer design. PMID:16663701
Error reduction by combining strapdown inertial measurement units in a baseball stitch
NASA Astrophysics Data System (ADS)
Tracy, Leah
A poor musical performance is rarely due to an inferior instrument. When a device is underperforming, the temptation is to find a better device or a new technology to achieve performance objectives; however, another solution may be improving how existing technology is used through a better understanding of device characteristics, i.e., learning to play the instrument better. This thesis explores improving position and attitude estimates of inertial navigation systems (INS) through an understanding of inertial sensor errors, manipulation of inertial measurement units (IMUs) to reduce that error, and multisensor fusion of multiple IMUs to reduce error in a GPS-denied environment.
Error Reduction Methods for Integrated-path Differential-absorption Lidar Measurements
NASA Technical Reports Server (NTRS)
Chen, Jeffrey R.; Numata, Kenji; Wu, Stewart T.
2012-01-01
We report new modeling and error reduction methods for differential-absorption optical-depth (DAOD) measurements of atmospheric constituents using direct-detection integrated-path differential-absorption lidars. Errors from laser frequency noise are quantified in terms of the line center fluctuation and spectral line shape of the laser pulses, revealing relationships verified experimentally. A significant DAOD bias is removed by introducing a correction factor. Errors from surface height and reflectance variations can be reduced to tolerable levels by incorporating altimetry knowledge and "log after averaging", or by pointing the laser and receiver to a fixed surface spot during each wavelength cycle to shorten the time of "averaging before log".
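The advantage of "log after averaging" over "averaging before log" follows from Jensen's inequality and can be checked numerically; the noise level here is an assumption for illustration:

```python
# For noisy received power P = P0 * (1 + eps), the mean of ln(P) is biased low
# by roughly sigma**2 / 2 (Jensen's inequality), while the log of the mean
# power is nearly unbiased -- the motivation for "log after averaging" in
# differential-absorption optical-depth retrievals. Values are illustrative.
import math
import random

random.seed(3)
p0, sigma, n = 1.0, 0.1, 200000                  # assumed 10 % intensity noise
powers = [p0 * (1.0 + random.gauss(0.0, sigma)) for _ in range(n)]

avg_of_logs = sum(math.log(p) for p in powers) / n   # "averaging before log"
log_of_avg = math.log(sum(powers) / n)               # "log after averaging"

print(avg_of_logs < log_of_avg)   # True: the log-first average is biased low
```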
NASA Astrophysics Data System (ADS)
Chung, Ting-Yi; Huang, Szu-Jung; Fu, Huang-Wen; Chang, Ho-Ping; Chang, Cheng-Hsiang; Hwang, Ching-Shiang
2016-08-01
The effect of an APPLE II-type elliptically polarized undulator (EPU) on the beam dynamics was investigated using active and passive methods. To reduce the tune shift and improve the injection efficiency, dynamic multipole errors were compensated using L-shaped iron shims, which resulted in stable top-up operation for a minimum gap. The skew quadrupole error was compensated using a multipole corrector located downstream of the EPU to minimize betatron coupling, ensuring enhanced synchrotron radiation brightness. The investigation methods, a numerical simulation algorithm, a multipole error correction method, and the beam-based measurement results are discussed.
Burr, T; Croft, S; Krieger, T; Martin, K; Norman, C; Walsh, S
2016-02-01
One example of top-down uncertainty quantification (UQ) involves comparing two or more measurements on each of multiple items. One example of bottom-up UQ expresses a measurement result as a function of one or more input variables that have associated errors, such as a measured count rate, which individually (or collectively) can be evaluated for impact on the uncertainty in the resulting measured value. In practice, it is often found that top-down UQ exhibits larger error variances than bottom-up UQ, because some error sources are present in the fielded assay methods used in top-down UQ that are not present (or not recognized) in the assay studies used in bottom-up UQ. One would like better consistency between the two approaches in order to claim understanding of the measurement process. The purpose of this paper is to refine bottom-up uncertainty estimation by using calibration information so that if there are no unknown error sources, the refined bottom-up uncertainty estimate will agree with the top-down uncertainty estimate to within a specified tolerance. Then, in practice, if the top-down uncertainty estimate is larger than the refined bottom-up uncertainty estimate by more than the specified tolerance, there must be omitted sources of error beyond those predicted from calibration uncertainty. The paper develops a refined bottom-up uncertainty approach for four cases of simple linear calibration: (1) inverse regression with negligible error in predictors, (2) inverse regression with non-negligible error in predictors, (3) classical regression followed by inversion with negligible error in predictors, and (4) classical regression followed by inversion with non-negligible errors in predictors. Our illustrations are of general interest, but are drawn from our experience with nuclear material assay by non-destructive assay. The main example we use is gamma spectroscopy that applies the enrichment meter principle. Previous papers that ignore error in predictors
Effect of patient positions on measurement errors of the knee-joint space on radiographs
NASA Astrophysics Data System (ADS)
Gilewska, Grazyna
2001-08-01
Osteoarthritis (OA) is one of the most important health problems these days and one of the most frequent causes of pain and disability in middle-aged and old people. Nowadays the radiograph is the most economic and available tool to evaluate changes in OA. Errors in the performance of knee-joint radiographs are the basic problem in their evaluation for clinical research. The purpose of evaluating such radiographs in my study was to measure the knee-joint space on several radiographs performed at defined intervals. An attempt at evaluating errors caused by the radiologist or the patient is presented in this study. These errors resulted mainly either from incorrect conditions of performance or from the patient's fault. Once we have information about the size of the errors, we will be able to assess which of these elements have the greatest influence on the accuracy and repeatability of measurements of the knee-joint space, and consequently to minimize their sources.
Interobserver error involved in independent attempts to measure cusp base areas of Pan M1s.
Bailey, Shara E; Pilbrow, Varsha C; Wood, Bernard A
2004-10-01
Cusp base areas measured from digitized images increase the amount of detailed quantitative information one can collect from post-canine crown morphology. Although this method is gaining wide usage for taxonomic analyses of extant and extinct hominoids, the techniques for digitizing images and taking measurements differ between researchers. The aim of this study was to investigate interobserver error in order to help assess the reliability of cusp base area measurement within extant and extinct hominoid taxa. Two of the authors measured individual cusp base areas and total cusp base area of 23 maxillary first molars (M(1)) of Pan. From these, relative cusp base areas were calculated. No statistically significant interobserver differences were found for either absolute or relative cusp base areas. On average the hypocone and paracone showed the least interobserver error (< 1%) whereas the protocone and metacone showed the most (2.6-4.5%). We suggest that the larger measurement error in the metacone/protocone is due primarily to either weakly defined fissure patterns and/or the presence of accessory occlusal features. Overall, levels of interobserver error are similar to those found for intraobserver error. The results of our study suggest that if certain prescribed standards are employed then cusp and crown base areas measured by different individuals can be pooled into a single database. PMID:15447691
Determination of error measurement by means of the basic magnetization curve
NASA Astrophysics Data System (ADS)
Lankin, M. V.; Lankin, A. M.
2016-04-01
The article describes the implementation of a methodology for fault determination by means of the basic magnetization curve of electric cutting machines. The basic magnetization curve, as an integrated operating characteristic, allows one to identify a fault type. In the measurement process, the calculation of the error of the basic magnetization curve plays a major role, since inaccuracies in a particular characteristic can have a deleterious effect.
Muralikrishnan, B; Blackburn, C; Sawyer, D; Phillips, S; Bridges, R
2010-01-01
In this paper we describe a method to estimate scale errors in the horizontal angle encoder of a laser tracker. The method does not require expensive instrumentation such as a rotary stage, or even a calibrated artifact. An uncalibrated but stable length is realized between two targets mounted on stands that are at tracker height. The tracker measures the distance between these two targets from different azimuthal positions (say, in intervals of 20° over 360°). Each target is measured in both front face and back face. Low-order harmonic scale errors can be estimated from these data and may then be used to correct the encoder's error map to improve the tracker's angle measurement accuracy. We have demonstrated this for the second-order harmonic. It is important to compensate for even-order harmonics because their influence cannot be removed by averaging front face and back face measurements, whereas odd orders can be removed by averaging. We tested six trackers from three different manufacturers; two of those trackers are newer models introduced at the time of writing. For older trackers from two manufacturers, the length errors in a 7.75 m horizontal length placed 7 m away from a tracker were of the order of ±65 μm before correcting the error map. They reduced to less than ±25 μm after correcting the error map for second-order scale errors. Newer trackers from the same manufacturers did not show this error, and neither did an older tracker from a third manufacturer. PMID:27134789
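The harmonic estimation idea can be sketched as a discrete Fourier fit to length errors sampled at evenly spaced azimuths; the amplitudes below are synthetic, not measured values:

```python
# Extracting a second-order harmonic a*cos(2t) + b*sin(2t) from errors sampled
# at evenly spaced azimuth positions (every 20 degrees over 360 degrees, as in
# the abstract). With even spacing, the coefficients follow from plain discrete
# Fourier sums. The amplitudes are synthetic, assumed for illustration.
import math

positions = [math.radians(20 * k) for k in range(18)]        # 18 azimuths
a_true, b_true = 55e-6, -30e-6                               # assumed, metres
errors = [a_true * math.cos(2 * t) + b_true * math.sin(2 * t) for t in positions]

n = len(positions)
a_est = 2.0 / n * sum(e * math.cos(2 * t) for e, t in zip(errors, positions))
b_est = 2.0 / n * sum(e * math.sin(2 * t) for e, t in zip(errors, positions))

print(round(a_est * 1e6, 1), round(b_est * 1e6, 1))  # recovers 55.0 -30.0 (µm)
```

The recovered coefficients would then be folded into the encoder's error map; odd-order terms need no such correction because front-face/back-face averaging cancels them.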
Measurement of electromagnetic tracking error in a navigated breast surgery setup
NASA Astrophysics Data System (ADS)
Harish, Vinyas; Baksh, Aidan; Ungi, Tamas; Lasso, Andras; Baum, Zachary; Gauvin, Gabrielle; Engel, Jay; Rudan, John; Fichtinger, Gabor
2016-03-01
PURPOSE: The measurement of tracking error is crucial to ensure the safety and feasibility of electromagnetically tracked, image-guided procedures. Measurement should occur in a clinical environment because electromagnetic field distortion depends on positioning relative to the field generator and metal objects. However, we could not find an accessible and open-source system for calibration, error measurement, and visualization. We developed such a system and tested it in a navigated breast surgery setup. METHODS: A pointer tool was designed for concurrent electromagnetic and optical tracking. Software modules were developed for automatic calibration of the measurement system, real-time error visualization, and analysis. The system was taken to an operating room to test for field distortion in a navigated breast surgery setup. Positional and rotational electromagnetic tracking errors were then calculated using optical tracking as a ground truth. RESULTS: Our system is quick to set up and can be rapidly deployed. The process from calibration to visualization also only takes a few minutes. Field distortion was measured in the presence of various surgical equipment. Positional and rotational error in a clean field was approximately 0.90 mm and 0.31°. The presence of a surgical table, an electrosurgical cautery, and anesthesia machine increased the error by up to a few tenths of a millimeter and a tenth of a degree. CONCLUSION: In a navigated breast surgery setup, measurement and visualization of tracking error defines a safe working area in the presence of surgical equipment. Our system is available as an extension for the open-source 3D Slicer platform.
Evaluation of TRMM Ground-Validation Radar-Rain Errors Using Rain Gauge Measurements
NASA Technical Reports Server (NTRS)
Wang, Jianxin; Wolff, David B.
2009-01-01
Ground-validation (GV) radar-rain products are often utilized for validation of the Tropical Rainfall Measuring Mission (TRMM) space-based rain estimates, and hence, quantitative evaluation of the GV radar-rain product error characteristics is vital. This study uses quality-controlled gauge data to compare with TRMM GV radar rain rates in an effort to provide such error characteristics. The results show that significant differences of concurrent radar-gauge rain rates exist at various time scales ranging from 5 min to 1 day, despite lower overall long-term bias. However, the differences between the radar area-averaged rain rates and gauge point rain rates cannot be explained as due to radar error only. The error variance separation method is adapted to partition the variance of radar-gauge differences into the gauge area-point error variance and radar rain estimation error variance. The results provide relatively reliable quantitative uncertainty evaluation of TRMM GV radar rain estimates at various time scales, and are helpful to better understand the differences between measured radar and gauge rain rates. It is envisaged that this study will contribute to better utilization of GV radar rain products to validate versatile space-based rain estimates from TRMM, as well as the proposed Global Precipitation Measurement, and other satellites.
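A toy version of the error variance separation idea, assuming independent radar and gauge area-point errors with the area-point variance known from the gauge network; all numbers are synthetic:

```python
# Error variance separation sketch: if radar error and the gauge area-point
# sampling error are independent, Var(radar - gauge) = Var(e_radar) +
# Var(e_area_point), so the radar error variance can be recovered by
# subtracting an independently estimated area-point variance.
import random
import statistics

random.seed(4)
n = 100000
true_rain = [random.expovariate(1.0 / 5.0) for _ in range(n)]   # mm/h, synthetic
radar = [r + random.gauss(0.0, 1.5) for r in true_rain]          # radar error
gauge = [r + random.gauss(0.0, 1.0) for r in true_rain]          # area-point error

var_diff = statistics.variance([a - b for a, b in zip(radar, gauge)])
var_area_point = 1.0 ** 2            # assumed known from the gauge network
var_radar_est = var_diff - var_area_point
print(round(var_radar_est, 2))       # close to the simulated 1.5**2 = 2.25
```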
Estimation of bias errors in angle-of-arrival measurements using platform motion
NASA Astrophysics Data System (ADS)
Grindlay, A.
1981-08-01
An algorithm has been developed to estimate the bias errors in angle-of-arrival measurements made by electromagnetic detection devices on-board a pitching and rolling platform. The algorithm assumes that continuous exact measurements of the platform's roll and pitch conditions are available. When the roll and pitch conditions are used to transform deck-plane angular measurements of a nearly fixed target's position to a stabilized coordinate system, the resulting stabilized coordinates (azimuth and elevation) should not vary with changes in the roll and pitch conditions. If changes do occur they are a result of bias errors in the measurement system and the algorithm which has been developed uses these changes to estimate the sense and magnitude of angular bias errors.
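A sketch of the underlying geometry, with assumed axis and sign conventions: a fixed bias in the deck-plane angles makes the stabilized coordinates vary with platform motion, and that variation is the signal the estimation algorithm exploits.

```python
# Deck-plane (azimuth, elevation) -> stabilized coordinates via roll and pitch
# rotations. With zero bias the stabilized output of a fixed target is constant
# as the platform moves; a deck-plane bias leaks platform motion into the
# output. Axis order and sign conventions here are assumptions for illustration.
import math

az0, el0 = math.radians(30.0), math.radians(10.0)   # true stabilized direction

def to_vec(az, el):
    return (math.cos(el) * math.cos(az), math.cos(el) * math.sin(az), math.sin(el))

def rot_x(v, a):  # roll about the x axis
    x, y, z = v
    return (x, y * math.cos(a) - z * math.sin(a), y * math.sin(a) + z * math.cos(a))

def rot_y(v, a):  # pitch about the y axis
    x, y, z = v
    return (x * math.cos(a) + z * math.sin(a), y, -x * math.sin(a) + z * math.cos(a))

def stabilize(az, el, roll, pitch):
    # deck-plane angles -> stabilized (azimuth, elevation)
    x, y, z = rot_y(rot_x(to_vec(az, el), roll), pitch)
    return math.atan2(y, x), math.asin(z)

def deck_angles(roll, pitch):
    # deck-plane angles the sensor would report for the fixed target (inverse map)
    x, y, z = rot_x(rot_y(to_vec(az0, el0), -pitch), -roll)
    return math.atan2(y, x), math.asin(z)

bias = math.radians(0.5)   # assumed azimuth bias in the deck-plane measurement
outputs = []
for roll_deg in (-10.0, 0.0, 10.0):
    az_d, el_d = deck_angles(math.radians(roll_deg), math.radians(5.0))
    outputs.append(stabilize(az_d + bias, el_d, math.radians(roll_deg), math.radians(5.0)))

# Without the bias the three stabilized outputs would coincide exactly; with it
# they spread as roll changes, revealing the sense and magnitude of the bias.
spread = max(o[1] for o in outputs) - min(o[1] for o in outputs)
print(spread > 0.0)
```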
NASA Astrophysics Data System (ADS)
Burnecki, Krzysztof; Kepten, Eldad; Garini, Yuval; Sikora, Grzegorz; Weron, Aleksander
2015-06-01
Accurately characterizing the anomalous diffusion of a tracer particle has become a central issue in biophysics. However, measurement errors raise difficulty in the characterization of single trajectories, which is usually performed through the time-averaged mean square displacement (TAMSD). In this paper, we study a fractionally integrated moving average (FIMA) process as an appropriate model for anomalous diffusion data with measurement errors. We compare FIMA and traditional TAMSD estimators for the anomalous diffusion exponent. The ability of the FIMA framework to characterize dynamics in a wide range of anomalous exponents and noise levels through the simulation of a toy model (fractional Brownian motion disturbed by Gaussian white noise) is discussed. Comparison to the TAMSD technique shows that FIMA estimation is superior in many scenarios. This is expected to enable new measurement regimes for single particle tracking (SPT) experiments even in the presence of high measurement errors. PMID:26065707
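The TAMSD statistic at the core of the comparison above, and the way additive measurement noise distorts it, can be sketched with ordinary Brownian motion standing in for the paper's fractional Brownian motion toy model:

```python
# Time-averaged mean square displacement: TAMSD(lag) = mean over t of
# (x(t+lag) - x(t))**2. Additive measurement noise inflates every lag by a
# constant ~2 * noise_sd**2, swamping the short-lag behaviour from which the
# anomalous exponent is usually read off. Brownian motion used for simplicity.
import random

random.seed(5)
n, noise_sd = 20000, 2.0
x, traj = 0.0, []
for _ in range(n):
    x += random.gauss(0.0, 1.0)        # unit-variance Brownian increments
    traj.append(x)
noisy = [p + random.gauss(0.0, noise_sd) for p in traj]

def tamsd(series, lag):
    m = len(series) - lag
    return sum((series[t + lag] - series[t]) ** 2 for t in range(m)) / m

for lag in (1, 10, 100):
    # clean TAMSD grows ~ lag; noisy TAMSD is offset by ~2 * noise_sd**2 = 8
    print(lag, round(tamsd(traj, lag), 1), round(tamsd(noisy, lag), 1))
```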
Burnecki, Krzysztof; Kepten, Eldad; Garini, Yuval; Sikora, Grzegorz; Weron, Aleksander
2015-01-01
Accurately characterizing the anomalous diffusion of a tracer particle has become a central issue in biophysics. However, measurement errors raise difficulty in the characterization of single trajectories, which is usually performed through the time-averaged mean square displacement (TAMSD). In this paper, we study a fractionally integrated moving average (FIMA) process as an appropriate model for anomalous diffusion data with measurement errors. We compare FIMA and traditional TAMSD estimators for the anomalous diffusion exponent. The ability of the FIMA framework to characterize dynamics in a wide range of anomalous exponents and noise levels through the simulation of a toy model (fractional Brownian motion disturbed by Gaussian white noise) is discussed. Comparison to the TAMSD technique, shows that FIMA estimation is superior in many scenarios. This is expected to enable new measurement regimes for single particle tracking (SPT) experiments even in the presence of high measurement errors. PMID:26065707
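The TAMSD baseline that FIMA is compared against can be sketched as follows. This toy uses ordinary Brownian motion (the H = 1/2 special case of the paper's toy model) plus Gaussian white noise, with all parameters chosen for illustration; it shows how measurement noise biases the naive log-log exponent fit toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)

def tamsd(x, lags):
    """Time-averaged mean square displacement of a 1-D trajectory."""
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

def fit_exponent(x, lags):
    """Slope of log TAMSD vs. log lag = naive anomalous-exponent estimate."""
    slope, _ = np.polyfit(np.log(lags), np.log(tamsd(x, lags)), 1)
    return slope

n = 20_000
clean = np.cumsum(rng.normal(0.0, 1.0, n))   # Brownian motion, true exponent = 1
noisy = clean + rng.normal(0.0, 5.0, n)      # strong Gaussian measurement noise

lags = np.arange(1, 11)
a_clean = fit_exponent(clean, lags)          # close to 1
a_noisy = fit_exponent(noisy, lags)          # strongly biased downward
```

The noise adds a lag-independent offset 2σ² to the TAMSD, flattening its log-log slope at short lags; this is the regime where the abstract argues a model-based (FIMA) estimator outperforms the raw TAMSD fit.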
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2013-01-01
A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively deteriorated and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.
Estimation of bias errors in measured airplane responses using maximum likelihood method
NASA Technical Reports Server (NTRS)
Klein, Vladislav; Morgan, Dan R.
1987-01-01
A maximum likelihood method is used for estimation of unknown bias errors in measured airplane responses. The mathematical model of an airplane is represented by six-degrees-of-freedom kinematic equations. In these equations the input variables are replaced by their measured values, which are assumed to be without random errors. The resulting algorithm is verified with a simulation and flight test data. The maximum likelihood estimates from in-flight measured data are compared with those obtained by using a nonlinear fixed-interval smoother and an extended Kalman filter.
Pollack, A. Z.; Perkins, N. J.; Mumford, S. L.; Ye, A.; Schisterman, E. F.
2013-01-01
Utilizing multiple biomarkers is increasingly common in epidemiology. However, the combined impact of correlated exposure measurement error, unmeasured confounding, interaction, and limits of detection (LODs) on inference for multiple biomarkers is unknown. We conducted data-driven simulations evaluating bias from correlated measurement error with varying reliability coefficients (R), odds ratios (ORs), levels of correlation between exposures and error, LODs, and interactions. Blood cadmium and lead levels in relation to anovulation served as the motivating example, based on findings from the BioCycle Study (2005–2007). For most scenarios, main-effect estimates for cadmium and lead with increasing levels of positively correlated measurement error created increasing downward or upward bias for OR > 1.00 and OR < 1.00, respectively, that was also a function of effect size. Some scenarios showed bias for cadmium away from the null. Results subject to LODs were similar. Bias for main and interaction effects ranged from −130% to 36% and from −144% to 84%, respectively. A closed-form continuous outcome case solution provides a useful tool for estimating the bias in logistic regression. Investigators should consider how measurement error and LODs may bias findings when examining biomarkers measured in the same medium, prepared with the same process, or analyzed using the same method. PMID:23221725
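The direction of the bias described above has a familiar continuous-outcome analogue: classical measurement error attenuates a regression slope by the reliability coefficient R, which is the intuition behind the closed-form continuous-outcome solution the abstract mentions. A small simulation, with all parameters assumed for illustration (not the BioCycle analysis):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
beta = 1.0

x = rng.normal(0.0, 1.0, n)             # true exposure, variance 1
u = rng.normal(0.0, 1.0, n)             # classical measurement error, variance 1
w = x + u                               # observed exposure
y = beta * x + rng.normal(0.0, 1.0, n)  # continuous outcome

# OLS slope of y on the error-prone w is attenuated by the reliability R:
slope = np.cov(w, y)[0, 1] / np.var(w)
reliability = np.var(x) / np.var(w)     # sigma_x^2 / (sigma_x^2 + sigma_u^2)
```

Here R = 0.5, so the observed slope is roughly half the true effect; with correlated errors across multiple biomarkers (the paper's setting), the bias can also point away from the null.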
NASA Astrophysics Data System (ADS)
Shedekar, Vinayak S.; King, Kevin W.; Fausey, Norman R.; Soboyejo, Alfred B. O.; Harmel, R. Daren; Brown, Larry C.
2016-09-01
Three different models of tipping bucket rain gauges (TBRs), viz. HS-TB3 (Hydrological Services Pty Ltd.), ISCO-674 (Isco, Inc.) and TR-525 (Texas Electronics, Inc.), were calibrated in the lab to quantify measurement errors across a range of rainfall intensities (5 mm·h⁻¹ to 250 mm·h⁻¹) and three different volumetric settings. Instantaneous and cumulative values of simulated rainfall were recorded at 1, 2, 5, 10 and 20-min intervals. All three TBR models showed a substantial deviation (α = 0.05) in measurements from actual rainfall depths, with increasing underestimation errors at greater rainfall intensities. Simple linear regression equations were developed for each TBR to correct the TBR readings based on measured intensities (R² > 0.98). Additionally, two dynamic calibration techniques, viz. a quadratic model (R² > 0.7) and a T vs. 1/Q model (R² > 0.98), were tested and found to be useful in situations when the volumetric settings of TBRs are unknown. The correction models were successfully applied to correct field-collected rainfall data from the respective TBR models. The calibration parameters of the correction models were found to be highly sensitive to changes in the volumetric calibration of TBRs. Overall, the HS-TB3 model (with a better protected tipping bucket mechanism, and consistent measurement errors across a range of rainfall intensities) was found to be the most reliable and consistent for rainfall measurements, followed by the ISCO-674 (with susceptibility to clogging and relatively smaller measurement errors across a range of rainfall intensities) and the TR-525 (with high susceptibility to clogging and frequent changes in volumetric calibration, and highly intensity-dependent measurement errors). The study demonstrated that corrections based on dynamic and volumetric calibration can only help minimize, but not completely eliminate, the measurement errors. The findings from this study will be useful for correcting field data from TBRs; and may have major
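The intensity-based linear correction described above can be sketched as follows, using made-up calibration pairs (not the paper's data) in which underestimation grows with intensity, as the study observed:

```python
import numpy as np

# Hypothetical calibration data: actual rainfall intensity (mm/h) vs. the
# TBR reading (mm/h), with underestimation growing at higher intensities.
actual = np.array([5.0, 25.0, 50.0, 100.0, 150.0, 250.0])
tbr    = np.array([5.0, 24.5, 48.0,  93.0, 136.0, 220.0])

# Fit a simple linear correction: actual ~= a * reading + b
a, b = np.polyfit(tbr, actual, 1)

def correct(reading):
    """Apply the regression-based correction to a raw TBR reading."""
    return a * reading + b
```

The fitted slope exceeds 1, reflecting the systematic undercatch; as the abstract notes, such corrections reduce but do not eliminate the measurement error.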
Systematic continuum errors in the Lyα forest and the measured temperature-density relation
Lee, Khee-Gan
2012-07-10
Continuum fitting uncertainties are a major source of error in estimates of the temperature-density relation (usually parameterized as a power law, T ∝ Δ^(γ−1)) of the intergalactic medium through the flux probability distribution function (PDF) of the Lyα forest. Using a simple order-of-magnitude calculation, we show that few-percent-level systematic errors in the placement of the quasar continuum due to, e.g., a uniform low-absorption Gunn-Peterson component could lead to errors in γ of the order of unity. This is quantified further using a simple semi-analytic model of the Lyα forest flux PDF. We find that under(over)estimates in the continuum level can lead to a lower (higher) measured value of γ. By fitting models to mock data realizations generated with current observational errors, we find that continuum errors can cause a systematic bias in the estimated temperature-density relation of δ(γ) ≈ −0.1, while the error is increased to σγ ≈ 0.2 compared to σγ ≈ 0.1 in the absence of continuum errors.
Discontinuity, bubbles, and translucence: major error factors in food color measurement
NASA Astrophysics Data System (ADS)
MacDougall, Douglas B.
2002-06-01
Four samples of breakfast cereals exhibiting discontinuity, two samples of baked goods with bubbles and two translucent drinks were measured to show the degree of differences that exist between their colors measured in CIELAB and their visual equivalence to the nearest NCS atlas color. Presentation variables and the contribution of light scatter to the size of the errors were examined.
NASA Astrophysics Data System (ADS)
Ahn, Charlene Sonja
Quantum mechanical applications range from quantum computers to quantum key distribution to teleportation. In these applications, quantum error correction is extremely important for protecting quantum states against decoherence. Here I present two main results regarding quantum error correction protocols. The first main topic I address is the development of continuous-time quantum error correction protocols via combination with techniques from quantum control. These protocols rely on weak measurement and Hamiltonian feedback instead of the projective measurements and unitary gates usually assumed by canonical quantum error correction. I show that a subclass of these protocols can be understood as a quantum feedback protocol, and analytically analyze the general case using the stabilizer formalism; I show that in this case perfect feedback can perfectly protect a stabilizer subspace. I also show through numerical simulations that another subclass of these protocols does better than canonical quantum error correction when the time between corrections is limited. The second main topic is development of improved overhead results for fault-tolerant computation. In particular, through analysis of topological quantum error correcting codes, it will be shown that the required blowup in depth of a noisy circuit performing a fault-tolerant computation can be reduced to a factor of O(log log L), an improvement over previous results. Showing this requires investigation into a local method of performing fault-tolerant correction on a topological code of arbitrary dimension.
NASA Technical Reports Server (NTRS)
Tuttle, M. E.; Brinson, H. F.
1986-01-01
The impact of slight errors in measured viscoelastic parameters on subsequent long-term viscoelastic predictions is numerically evaluated using the Schapery nonlinear viscoelastic model. Of the seven Schapery parameters, the results indicated that long-term predictions were most sensitive to errors in the power-law parameter n. Although errors in the other parameters were significant as well, errors in n dominated all other factors at long times. The process of selecting an appropriate short-term test cycle so as to ensure an accurate long-term prediction was considered, and a short-term test cycle was selected using material properties typical for T300/5208 graphite-epoxy at 149 C. The process of selection is described, and its individual steps are itemized.
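The dominance of errors in n at long times follows directly from the power-law creep form D(t) = D0 + D1·t^n that underlies the Schapery model: a relative error in n multiplies the transient term by t^(Δn), which grows without bound in t, whereas an error in D1 scales it by a fixed factor. A quick check with assumed parameter values (not T300/5208 properties):

```python
import numpy as np

def creep(t, D0, D1, n):
    """Power-law creep compliance (the transient form used in Schapery's model)."""
    return D0 + D1 * t ** n

D0, D1, n = 1.0, 0.1, 0.2
t_long = 1.0e6  # long-term prediction time, arbitrary units

base = creep(t_long, D0, D1, n)
err_D1 = abs(creep(t_long, D0, 1.05 * D1, n) - base) / base  # 5% error in D1
err_n  = abs(creep(t_long, D0, D1, 1.05 * n) - base) / base  # 5% error in n
```

At t = 10⁶ the same 5% relative error produces roughly three times the prediction error when placed on n rather than on D1, and the gap widens as t increases.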
Estimating smooth distribution function in the presence of heteroscedastic measurement errors
Wang, Xiao-Feng; Fan, Zhaozhi; Wang, Bin
2009-01-01
Measurement error occurs in many biomedical fields. The challenges arise when errors are heteroscedastic since we literally have only one observation for each error distribution. This paper concerns the estimation of smooth distribution function when data are contaminated with heteroscedastic errors. We study two types of methods to recover the unknown distribution function: a Fourier-type deconvolution method and a simulation extrapolation (SIMEX) method. The asymptotics of the two estimators are explored and the asymptotic pointwise confidence bands of the SIMEX estimator are obtained. The finite sample performances of the two estimators are evaluated through a simulation study. Finally, we illustrate the methods with medical rehabilitation data from a neuro-muscular electrical stimulation experiment. PMID:20160998
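The SIMEX idea can be sketched with a simpler target than the full distribution function: add extra noise at increasing multiples λ of each observation's (known) heteroscedastic error variance, track a naive estimate, and extrapolate back to λ = −1, the error-free case. A toy version for a variance estimate, with all parameters assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

x = rng.normal(0.0, 2.0, n)            # true values, variance 4
sigma_u = rng.uniform(0.5, 1.5, n)     # heteroscedastic error SDs (assumed known)
w = x + rng.normal(0.0, sigma_u)       # contaminated observations

# SIMEX: the naive variance at noise level lam is sigma_x^2 + (1 + lam) * E[sigma_u^2],
# i.e. linear in lam, so a straight-line extrapolation to lam = -1 recovers sigma_x^2.
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
naive = []
for lam in lams:
    w_sim = w + rng.normal(0.0, np.sqrt(lam) * sigma_u)
    naive.append(np.var(w_sim))
coef = np.polyfit(lams, naive, 1)
simex_var = np.polyval(coef, -1.0)     # close to the true variance of 4
```

The naive estimate at λ = 0 is inflated by the average error variance, while the extrapolated value is approximately unbiased; the paper applies the same remeasurement logic to the smooth distribution function itself.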
Covariate measurement error correction methods in mediation analysis with failure time data.
Zhao, Shanshan; Prentice, Ross L
2014-12-01
Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. PMID:25139469
Wu, Yan; Shannon, Mark A.
2006-04-15
The dependence of the contact potential difference (CPD) reading on the ac driving amplitude in scanning Kelvin probe microscope (SKPM) hinders researchers from quantifying true material properties. We show theoretically and demonstrate experimentally that an ac driving amplitude dependence in the SKPM measurement can come from a systematic error, and it is common for all tip sample systems as long as there is a nonzero tracking error in the feedback control loop of the instrument. We further propose a methodology to detect and to correct the ac driving amplitude dependent systematic error in SKPM measurements. The true contact potential difference can be found by applying a linear regression to the measured CPD versus one over ac driving amplitude data. Two scenarios are studied: (a) when the surface being scanned by SKPM is not semiconducting and there is an ac driving amplitude dependent systematic error; (b) when a semiconductor surface is probed and asymmetric band bending occurs when the systematic error is present. Experiments are conducted using a commercial SKPM and CPD measurement results of two systems: platinum-iridium/gap/gold and platinum-iridium/gap/thermal oxide/silicon are discussed.
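The proposed regression of measured CPD against 1/V_ac can be sketched as follows. The readings below are fabricated to follow an exactly linear 1/V_ac dependence (true CPD 0.5 V, tracking-error coefficient 0.2 V²), so the intercept recovers the assumed true value:

```python
import numpy as np

# Hypothetical SKPM readings: measured CPD drifts with 1/V_ac because of the
# tracking-error-induced systematic error described in the abstract.
v_ac     = np.array([0.5, 1.0, 2.0, 4.0, 8.0])          # ac drive amplitudes (V)
cpd_meas = np.array([0.90, 0.70, 0.60, 0.55, 0.525])    # measured CPD (V)

# Regress measured CPD on 1/V_ac; the intercept (the limit of infinite drive
# amplitude) is the amplitude-independent, i.e. true, CPD.
slope, intercept = np.polyfit(1.0 / v_ac, cpd_meas, 1)
```

Here the intercept returns the assumed true CPD of 0.5 V and the slope quantifies the amplitude-dependent systematic error.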
Exact sampling of the unobserved covariates in Bayesian spline models for measurement error problems
Carroll, Raymond J.
2015-01-01
In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem, and usually Gibbs sampling is not possible. This forces the practitioner to use a Metropolis–Hastings step, which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show that for truncated polynomial spline or B-spline models of degree equal to one, the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate up to 62% and 54% increases in mean integrated squared error efficiency when compared to existing alternatives while using truncated polynomial splines and B-splines, respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating the proposed method is a particularly valuable tool for challenging applications that present high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work. PMID:27418743
NASA Technical Reports Server (NTRS)
Parrott, T. L.; Smith, C. D.
1977-01-01
The effect of random and systematic errors associated with the measurement of normal incidence acoustic impedance in a zero-mean-flow environment was investigated by the transmission line method. The influence of random measurement errors in the reflection coefficients and pressure minima positions was investigated by computing fractional standard deviations of the normalized impedance. Both the standard techniques of random process theory and a simplified technique were used. Over a wavelength range of 68 to 10 cm random measurement errors in the reflection coefficients and pressure minima positions could be described adequately by normal probability distributions with standard deviations of 0.001 and 0.0098 cm, respectively. An error propagation technique based on the observed concentration of the probability density functions was found to give essentially the same results but with a computation time of about 1 percent of that required for the standard technique. The results suggest that careful experimental design reduces the effect of random measurement errors to insignificant levels for moderate ranges of test specimen impedance component magnitudes. Most of the observed random scatter can be attributed to lack of control by the mounting arrangement over mechanical boundary conditions of the test sample.
Measurement error models in chemical mass balance analysis of air quality data
NASA Astrophysics Data System (ADS)
Christensen, William F.; Gunst, Richard F.
The chemical mass balance (CMB) equations have been used to apportion observed pollutant concentrations to their various pollution sources. Typical analyses incorporate estimated pollution source profiles, estimated source profile error variances, and error variances associated with the ambient measurement process. Often the CMB model is fit to the data using an iteratively re-weighted least-squares algorithm to obtain the effective variance solution. We consider the chemical mass balance model within the framework of the statistical measurement error model (e.g., Fuller, W.A., Measurement Error Models, Wiley, New York, 1987), and we illustrate that the models assumed by each of the approaches to the CMB equations are in fact special cases of a general measurement error model. We compare alternative source contribution estimators with the commonly used effective variance estimator when standard assumptions are valid and when such assumptions are violated. Four approaches for source contribution estimation and inference are compared using computer simulation: weighted least squares (with standard errors adjusted for source profile error), the effective variance approach of Watson et al. (Atmos. Environ., 18, 1984, 1347), the Britt and Luecke (Technometrics, 15, 1973, 233) approach, and a method-of-moments approach given in Fuller (1987, p. 193). For the scenarios we consider, the simplistic weighted least-squares approach performs as well as the more widely used effective variance solution in most cases, and is slightly superior to the effective variance solution when source profile variability is large. The four estimation approaches are illustrated using real PM2.5 data from Fresno, and the conclusions drawn from the computer simulation are validated.
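A weighted least-squares solution of the CMB equations (the first of the four approaches, shown here without the source-profile-error adjustment to the standard errors) can be sketched as follows, with a made-up profile matrix and error structure:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical source profiles: rows = chemical species mass fractions,
# columns = pollution sources.
F = np.array([[0.60, 0.05],
              [0.25, 0.30],
              [0.10, 0.40],
              [0.05, 0.25]])
s_true = np.array([8.0, 3.0])             # true source contributions
sigma = np.array([0.2, 0.1, 0.1, 0.05])   # ambient measurement SDs per species

# Observed ambient concentrations: c = F s + measurement error.
c = F @ s_true + rng.normal(0.0, sigma)

# Weighted least squares: scale each species equation by 1/sigma, then solve.
Fw = F / sigma[:, None]
cw = c / sigma
s_hat, *_ = np.linalg.lstsq(Fw, cw, rcond=None)
```

The effective variance solution iterates a similar weighted fit with weights that also absorb the source-profile error variances; in the scenarios the paper simulates, the simple WLS fit above performs comparably.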
Study on position error of fiber positioning measurement system for LAMOST
NASA Astrophysics Data System (ADS)
Jin, Yi; Zhai, Chao; Xing, Xiaozheng; Teng, Yong; Hu, Hongzhuan
2006-06-01
The measuring precision of the measurement system applied to the optical fiber positioning system for LAMOST is investigated. In the fiber positioning system, the geometrical coordinates of the fibers must be measured to verify the precision of fiber positioning, which is one of the most pivotal problems. The measurement system consists of an area CCD sensor, an image acquisition card, a lens, and a computer. Temperature, vibration, lens aberration, and the CCD itself can all cause measurement error. Because fiber positioning is a dynamic process in which the fibers rotate, additional error is introduced. This paper focuses on analyzing how the different states of the fibers influence measuring precision. The fibers are glued so that their relative positions remain fixed while they rotate around a common point, and the distances between fibers are measured under different experimental conditions; the influence of the fibers' state is then obtained from the change in these distances. The contributions of the different factors to the position error are analyzed theoretically and experimentally. The position error can be decreased by changing the lens aperture setting and polishing the fibers.
Systematic errors in the measurement of emissivity caused by directional effects.
Kribus, Abraham; Vishnevetsky, Irna; Rotenberg, Eyal; Yakir, Dan
2003-04-01
Accurate knowledge of surface emissivity is essential for applications in remote sensing (remote temperature measurement), radiative transport, and modeling of environmental energy balances. Direct measurements of surface emissivity are difficult when there is considerable background radiation at the same wavelength as the emitted radiation. This occurs, for example, when objects at temperatures near room temperature are measured in a terrestrial environment by use of the infrared 8-14-μm band. This problem is usually treated by assumption of a perfectly diffuse surface or of diffuse background radiation. However, real surfaces and actual background radiation are not diffuse; therefore there will be a systematic measurement error. It is demonstrated that, in some cases, the deviations from a diffuse behavior lead to large errors in the measured emissivity. Past measurements made with simplifying assumptions should therefore be reevaluated and corrected. Recommendations are presented for improving experimental procedures in emissivity measurement. PMID:12683764
Martin, D.L.
1992-01-01
Water-leaving radiances and phytoplankton pigment concentrations are calculated from Coastal Zone Color Scanner (CZCS) total radiance measurements by separating atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. Multiple scattering interactions between Rayleigh and aerosol components together with other meteorologically-moderated radiances cause systematic errors in calculated water-leaving radiances and produce errors in retrieved phytoplankton pigment concentrations. This thesis developed techniques which minimize the effects of these systematic errors in Level IIA CZCS imagery. Results of previous radiative transfer modeling by Gordon and Castano are extended to predict the pixel-specific magnitude of systematic errors caused by Rayleigh-aerosol multiple scattering interactions. CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere are simulated mathematically and radiance-retrieval errors are calculated for a range of aerosol optical depths. Pixels which exceed an error threshold in the simulated CZCS image are rejected in a corresponding actual image. Meteorological phenomena also cause artifactual errors in CZCS-derived phytoplankton pigment concentration imagery. Unless data contaminated with these effects are masked and excluded from analysis, they will be interpreted as containing valid biological information and will contribute significantly to erroneous estimates of phytoplankton temporal and spatial variability. A method is developed which minimizes these errors through a sequence of quality-control procedures including the calculation of variable cloud-threshold radiances, the computation of the extent of electronic overshoot from bright reflectors, and the imposition of a buffer zone around clouds to exclude contaminated data.
Baxter, Lisa K.; Chang, Howard H.
2014-01-01
Background: Using multipollutant models to understand combined health effects of exposure to multiple pollutants is becoming more common. However, complex relationships between pollutants and differing degrees of exposure error across pollutants can make health effect estimates from multipollutant models difficult to interpret. Objectives: We aimed to quantify relationships between multiple pollutants and their associated exposure errors across metrics of exposure and to use empirical values to evaluate potential attenuation of coefficients in epidemiologic models. Methods: We used three daily exposure metrics (central-site measurements, air quality model estimates, and population exposure model estimates) for 193 ZIP codes in the Atlanta, Georgia, metropolitan area from 1999 through 2002 for PM2.5 and its components (EC and SO4), as well as O3, CO, and NOx, to construct three types of exposure error: δspatial (comparing air quality model estimates to central-site measurements), δpopulation (comparing population exposure model estimates to air quality model estimates), and δtotal (comparing population exposure model estimates to central-site measurements). We compared exposure metrics and exposure errors within and across pollutants and derived attenuation factors (ratio of observed to true coefficient for pollutant of interest) for single- and bipollutant model coefficients. Results: Pollutant concentrations and their exposure errors were moderately to highly correlated (typically, > 0.5), especially for CO, NOx, and EC (i.e., “local” pollutants); correlations differed across exposure metrics and types of exposure error. Spatial variability was evident, with variance of exposure error for local pollutants ranging from 0.25 to 0.83 for δspatial and δtotal. The attenuation of model coefficients in single- and bipollutant epidemiologic models relative to the true value differed across types of exposure error, pollutants, and space. Conclusions: Under a
Impact of Measurement Error on Testing Genetic Association with Quantitative Traits
Liao, Jiemin; Li, Xiang; Wong, Tien-Yin; Wang, Jie Jin; Khor, Chiea Chuen; Tai, E. Shyong; Aung, Tin; Teo, Yik-Ying; Cheng, Ching-Yu
2014-01-01
Measurement error of a phenotypic trait reduces the power to detect genetic associations. We examined the impact of sample size, allele frequency and effect size in presence of measurement error for quantitative traits. The statistical power to detect genetic association with phenotype mean and variability was investigated analytically. The non-centrality parameter for a non-central F distribution was derived and verified using computer simulations. We obtained equivalent formulas for the cost of phenotype measurement error. Effects of differences in measurements were examined in a genome-wide association study (GWAS) of two grading scales for cataract and a replication study of genetic variants influencing blood pressure. The mean absolute difference between the analytic power and simulation power for comparison of phenotypic means and variances was less than 0.005, and the absolute difference did not exceed 0.02. To maintain the same power, a one standard deviation (SD) in measurement error of a standard normal distributed trait required a one-fold increase in sample size for comparison of means, and a three-fold increase in sample size for comparison of variances. GWAS results revealed almost no overlap in the significant SNPs (p<10−5) for the two cataract grading scales while replication results in genetic variants of blood pressure displayed no significant differences between averaged blood pressure measurements and single blood pressure measurements. We have developed a framework for researchers to quantify power in the presence of measurement error, which will be applicable to studies of phenotypes in which the measurement is highly variable. PMID:24475218
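The quoted one-fold (i.e., two-fold total) sample-size increase for comparing means follows from the standard two-sample normal-approximation formula, in which the required n scales with the trait variance; adding measurement error of one SD to a unit-variance trait doubles that variance. A sketch using the textbook formula (not the paper's non-centrality derivation):

```python
import numpy as np

def n_required(delta, sd, power_z=0.84, alpha_z=1.96):
    """Per-group n for a two-sample z-test detecting mean difference delta
    at two-sided alpha = 0.05 with 80% power (z-values assumed)."""
    return 2 * ((alpha_z + power_z) * sd / delta) ** 2

delta = 0.2
n_clean = n_required(delta, 1.0)                  # trait SD = 1, no error
n_noisy = n_required(delta, np.sqrt(1.0 + 1.0))   # plus 1 SD of measurement error
ratio = n_noisy / n_clean                         # doubles: a one-fold increase
```

The variance comparison in the paper is more sensitive still, with a three-fold increase, because measurement error inflates the fourth-moment terms entering the variance test.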
Correction of Abbe error in involute gear measurement using a laser interferometric system
NASA Astrophysics Data System (ADS)
Lin, Hu; Xue, Zi; Yang, Guoliang
2015-10-01
For correction of the Abbe error in involute gear measurement, a laser interferometric measuring system is applied. In this system, the laser beam is split into two paths: one path is arranged tangent to the base circle of the gear for profile measurement, and the other is arranged parallel to the gear axis for helix measurement. Two cube-corner reflectors are attached at the end of the probe stylus close to the tip. This approach minimizes the length offset between the probe tip and the reference scale and thereby reduces the Abbe error. On the other hand, bending of the stylus introduces a laser measuring error; the mathematical relationship between the amount of bending and the probe deflection is derived. To determine the parameters of this relationship, two sizes of stylus are used in experiments carried out over a probe-deflection range of ±0.8 mm. The results show that the amount of stylus bending is linear in the probe deflection, and that the laser measuring error caused by stylus bending is smaller than 0.3 μm after correction.
Santin-Janin, Hugues; Hugueny, Bernard; Aubry, Philippe; Fouchet, David; Gimenez, Olivier; Pontier, Dominique
2014-01-01
Background Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., density-dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies, although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony, as well as to underestimating the extinction risk of a metapopulation. Methodology/Principal findings The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we presented a state-space modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplified our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compared our results to those of a standard approach neglecting sampling variance. We showed that ignoring sampling variance can mask a synchrony pattern whatever its true value, and that the common practice of averaging few replicates of population size estimates performed poorly at reducing the bias of the classical estimator of the synchrony strength. Conclusion/Significance The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provided a user-friendly R program and a tutorial example to encourage further studies aiming at quantifying the strength of population synchrony to account for uncertainty in
NASA Astrophysics Data System (ADS)
Tinkham, W. T.; Hoffman, C. M.; Falkowski, M. J.; Smith, A. M.; Link, T. E.; Marshall, H.
2011-12-01
Light Detection and Ranging (LiDAR) has become one of the most effective and reliable means of characterizing surface topography and vegetation structure. Most LiDAR-derived estimates such as vegetation height, snow depth, and floodplain boundaries rely on the accurate creation of digital terrain models (DTM). As a result of the importance of an accurate DTM in using LiDAR data to estimate snow depth, it is necessary to understand the variables that influence the DTM accuracy in order to assess snow depth error. A series of 4 x 4 m plots that were surveyed at 0.5 m spacing in a semi-arid catchment were used for training the Random Forests algorithm along with a series of 35 variables in order to spatially predict vertical error within a LiDAR derived DTM. The final model was utilized to predict the combined error resulting from snow volume and snow water equivalent estimates derived from a snow-free LiDAR DTM and a snow-on LiDAR acquisition of the same site. The methodology allows for a statistical quantification of the spatially-distributed error patterns that are incorporated into the estimation of snow volume and snow water equivalents from LiDAR.
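Once a per-cell vertical error estimate is available (e.g. from the Random Forests prediction), it can be propagated into snow depth and snow volume uncertainty. A hedged numpy sketch with illustrative error values and an assumed 1 m grid, not the study's actual numbers:

```python
import numpy as np

# Hypothetical per-cell vertical error estimates (m), e.g. as predicted by a
# Random Forests model from terrain and vegetation variables
err_snow_free_dtm = np.array([0.05, 0.08, 0.12])  # snow-free DTM error
err_snow_on = np.array([0.04, 0.04, 0.04])        # snow-on acquisition error

# Snow depth = snow-on surface minus snow-free DTM, so per-cell depth error
# adds in quadrature when the two error sources are independent
snow_depth_err = np.hypot(err_snow_free_dtm, err_snow_on)

cell_area = 1.0  # m^2, assumed grid resolution
# 1-sigma uncertainty of total snow volume, treating cells as independent
volume_err = np.sqrt(np.sum((snow_depth_err * cell_area) ** 2))
```

If the per-cell errors are spatially correlated, the independent-cell assumption understates the volume uncertainty.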
Liu, Shi Qiang; Zhu, Rong
2016-01-01
Error compensation of micromachined inertial measurement units (MIMU) is essential in practical applications. This paper presents a new compensation method using neural-network-based identification for MIMU, which solves the universal problems of cross-coupling, misalignment, eccentricity, and other deterministic errors in a three-dimensional integrated system. By using a neural network to model the complex multivariate and nonlinear coupling system, the errors can be readily compensated through a comprehensive calibration. This paper also presents a thermal-gas MIMU based on thermal expansion, which measures three-axis angular rates and three-axis accelerations using only three thermal-gas inertial sensors, each of which measures one-axis angular rate and one-axis acceleration simultaneously in one chip. The developed MIMU (100 × 100 × 100 mm3) possesses the advantages of simple structure, high shock resistance, and large measuring ranges (three-axis angular rates of ±4000°/s and three-axis accelerations of ±10 g) compared with conventional MIMU, because a gas medium is used instead of a mechanical proof mass as the key moving and sensing element. However, the gas MIMU suffers from cross-coupling effects, which corrupt system accuracy. The proposed compensation method is therefore applied to compensate the system errors of the MIMU. Experiments validate the effectiveness of the compensation: the measurement errors of the three-axis angular rates and three-axis accelerations are reduced to less than 1% and 3% of the uncompensated errors in the rotation range of ±600°/s and the acceleration range of ±1 g, respectively. PMID:26840314
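As a simplified stand-in for the neural-network identification, the deterministic part of cross-coupling, misalignment, and bias can already be captured by an affine least-squares calibration; the network generalizes this to nonlinear couplings. A numpy sketch with synthetic, purely illustrative sensor data:

```python
import numpy as np

rng = np.random.default_rng(5)

# True stimulus applied on a rate table: [wx, wy, wz, ax, ay, az] (normalized)
stimulus = rng.uniform(-1, 1, size=(200, 6))

# Hypothetical sensor with cross-coupling, misalignment, and bias
coupling = np.eye(6) + 0.05 * rng.standard_normal((6, 6))
bias = 0.1 * rng.standard_normal(6)
raw = stimulus @ coupling.T + bias + 0.001 * rng.standard_normal((200, 6))

# Comprehensive calibration: fit an affine compensation model by least squares
A = np.hstack([raw, np.ones((200, 1))])
comp, *_ = np.linalg.lstsq(A, stimulus, rcond=None)
compensated = A @ comp

rms_before = np.sqrt(np.mean((raw - stimulus) ** 2))
rms_after = np.sqrt(np.mean((compensated - stimulus) ** 2))
```

The residual after compensation is limited only by the (small) random noise term, mirroring how calibration removes the deterministic error budget.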
NASA Astrophysics Data System (ADS)
Ma, Xiushui; Fei, Yetai; Wang, Hongtao; Ying, Zhongyang; Li, Guang
2006-11-01
Modern manufacturing places increasingly high demands on the speed and accuracy of coordinate measuring machines (CMMs). Measuring speed has become one of the key factors in evaluating the performance of CMMs, and in high-speed measurement dynamic error has a greater influence on accuracy. This paper measures the dynamic error of a CMM's measuring system at different measuring positions and speeds using a dual-frequency laser interferometer. Based on the measured data, a model of the synthetic dynamic error is established using a dual linear regression method. Compared with the measured data, the relative error of the model is between 15% and 20%, and the regression equation is significant at the α = 0.01 level according to an F-test. Based on this model of synthetic dynamic errors under different measuring positions and speeds, the dynamic error of the CMM measuring system is corrected and reduced.
Nano-metrology: The art of measuring X-ray mirrors with slope errors <100 nrad.
Alcock, Simon G; Nistea, Ioana; Sawhney, Kawal
2016-05-01
We present a comprehensive investigation of the systematic and random errors of the nano-metrology instruments used to characterize synchrotron X-ray optics at Diamond Light Source. With experimental skill and careful analysis, we show that these instruments used in combination are capable of measuring state-of-the-art X-ray mirrors. Examples are provided of how Diamond metrology data have helped to achieve slope errors of <100 nrad for optical systems installed on synchrotron beamlines, including: iterative correction of substrates using ion beam figuring and optimal clamping of monochromator grating blanks in their holders. Simulations demonstrate how random noise from the Diamond-NOM's autocollimator adds into the overall measured value of the mirror's slope error, and thus predict how many averaged scans are required to accurately characterize different grades of mirror. PMID:27250374
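The scan-averaging argument follows from the 1/√N reduction of independent random noise: the number of scans needed to push the random contribution below a target follows directly. A short sketch (noise figures illustrative, not the Diamond-NOM's actual specification):

```python
import math

# Random (per-scan) noise of the autocollimator, in nrad RMS slope error
sigma_scan = 50.0    # assumed single-scan noise floor
target_noise = 10.0  # desired noise contribution after averaging

# Averaging N independent scans reduces random noise by sqrt(N),
# so N must satisfy sigma_scan / sqrt(N) <= target_noise
n_scans = math.ceil((sigma_scan / target_noise) ** 2)
print(n_scans)  # -> 25
```

For state-of-the-art mirrors (<100 nrad slope error) the random noise must be driven well below the figure being measured, hence the need for many averaged scans.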
Observation of spectrum effect on the measurement of intrinsic error field on EAST
NASA Astrophysics Data System (ADS)
Wang, Hui-Hui; Sun, You-Wen; Qian, Jin-Ping; Shi, Tong-Hui; Shen, Biao; Gu, Shuai; Liu, Yue-Qiang; Guo, Wen-Feng; Chu, Nan; He, Kai-Yang; Jia, Man-Ni; Chen, Da-Long; Xue, Min-Min; Ren, Jie; Wang, Yong; Sheng, Zhi-Cai; Xiao, Bing-Jia; Luo, Zheng-Ping; Liu, Yong; Liu, Hai-Qing; Zhao, Hai-Lin; Zeng, Long; Gong, Xian-Zu; Liang, Yun-Feng; Wan, Bao-Nian; The EAST Team
2016-06-01
Intrinsic error field on EAST is measured using the ‘compass scan’ technique with different n = 1 magnetic perturbation coil configurations in ohmically heated discharges. The intrinsic error field measured using a non-resonant dominated spectrum with even connection of the upper and lower resonant magnetic perturbation coils is of the order {{b}r2,1}/{{B}\\text{T}}≃ {{10}-5} and the toroidal phase of intrinsic error field is around {{60}{^\\circ}} . A clear difference between the results using the two coil configurations, resonant and non-resonant dominated spectra, is observed. The ‘resonant’ and ‘non-resonant’ terminology is based on vacuum modeling. The penetration thresholds of the non-resonant dominated cases are much smaller than that of the resonant cases. The difference of penetration thresholds between the resonant and non-resonant cases is reduced by plasma response modeling using the MARS-F code.
Barshan, Billur
2008-01-01
An objective error criterion is proposed for evaluating the accuracy of maps of unknown environments acquired by making range measurements with different sensing modalities and processing them with different techniques. The criterion can also be used for the assessment of goodness of fit of curves or shapes fitted to map points. A demonstrative example from ultrasonic mapping is given based on experimentally acquired time-of-flight measurements and compared with a very accurate laser map, considered as absolute reference. The results of the proposed criterion are compared with the Hausdorff metric and the median error criterion results. The error criterion is sufficiently general and flexible that it can be applied to discrete point maps acquired with other mapping techniques and sensing modalities as well.
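For comparison metrics of this kind, the Hausdorff distance and the median nearest-neighbour error between an acquired map and a reference map can be computed directly from the point sets; a small numpy sketch with toy points (not the paper's experimental data):

```python
import numpy as np

def directed_hausdorff(a, b):
    """Directed Hausdorff distance from point set a to point set b."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1).max()

def median_error(a, b):
    """Median of nearest-neighbour distances from map points a to reference b."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return np.median(d.min(axis=1))

ref = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])    # laser reference map
sonar = np.array([[0.0, 0.1], [1.0, 0.1], [2.0, 0.5]])  # ultrasonic map points

h = max(directed_hausdorff(sonar, ref), directed_hausdorff(ref, sonar))
m = median_error(sonar, ref)
print(h, m)  # -> 0.5 0.1
```

The Hausdorff metric is governed by the single worst outlier (0.5 here), while the median error reflects typical accuracy, which is why the two can rank maps differently.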
An error compensation method of laser displacement sensor in the inclined surface measurement
NASA Astrophysics Data System (ADS)
Li, Feng; Xiong, Zhongxing; Li, Bin
2015-10-01
The laser triangulation displacement sensor is an important non-contact displacement measurement tool that has been widely used in the field of freeform surface measurement. However, the measurement accuracy of such optical sensors is strongly influenced by the geometrical shape and surface properties of the inspected parts. This study presents an error compensation method for the measurement of inclined surfaces using a 1D laser displacement sensor. The effect of the incident angle on the measurement results was investigated by analyzing the laser spot projected on the inclined surface. Both the shape and the light intensity distribution of the spot are influenced by the incident angle, which leads to measurement error. Because the spot size differs at different measurement positions according to Gaussian beam propagation laws, the light spot projected on the inclined surface is approximately an ellipse; note that this ellipse is not fully symmetrical, since the spot size of a Gaussian beam varies along the propagation axis. By analyzing how the spot shape changes, an error compensation model can be established. The method was verified through the measurement of a ceramic plane mounted on a high-accuracy 5-axis Mikron UCP 800 Duro milling center. The results show that the method is effective in increasing measurement accuracy.
Design considerations for case series models with exposure onset measurement error
Mohammed, Sandra M.; Dalrymple, Lorien S.; Şentürk, Damla; Nguyen, Danh V.
2014-01-01
Summary The case series model allows for estimation of the relative incidence of events, such as cardiovascular events, within a pre-specified time window after an exposure, such as an infection. The method requires only cases (individuals with events) and controls for all fixed/time-invariant confounders. The measurement error case series model extends the original case series model to handle imperfect data, where the timing of an infection (exposure) is not known precisely. In this work, we propose a method for power/sample size determination for the measurement error case series model. Extensive simulation studies are used to assess the accuracy of the proposed sample size formulas. We also examine the magnitude of the relative loss of power due to exposure onset measurement error, compared to the ideal situation where the time of exposure is measured precisely. To facilitate the design of case series studies, we provide publicly available web-based tools for determining power/sample size for both the measurement error case series model as well as the standard case series model. PMID:22911898
Cole, Stephen R.; Jacobson, Lisa P.; Tien, Phyllis C.; Kingsley, Lawrence; Chmiel, Joan S.; Anastos, Kathryn
2010-01-01
To estimate the net effect of imperfectly measured highly active antiretroviral therapy on incident acquired immunodeficiency syndrome or death, the authors combined inverse probability-of-treatment-and-censoring weighted estimation of a marginal structural Cox model with regression-calibration methods. Between 1995 and 2007, 950 human immunodeficiency virus–positive men and women were followed in 2 US cohort studies. During 4,054 person-years, 374 initiated highly active antiretroviral therapy, 211 developed acquired immunodeficiency syndrome or died, and 173 dropped out. Accounting for measured confounders and determinants of dropout, the weighted hazard ratio for acquired immunodeficiency syndrome or death comparing use of highly active antiretroviral therapy in the prior 2 years with no therapy was 0.36 (95% confidence limits: 0.21, 0.61). This association was relatively constant over follow-up (P = 0.19) and stronger than crude or adjusted hazard ratios of 0.75 and 0.95, respectively. Accounting for measurement error in reported exposure using external validation data on 331 men and women provided a hazard ratio of 0.17, with bias shifted from the hazard ratio to the estimate of precision as seen by the 2.5-fold wider confidence limits (95% confidence limits: 0.06, 0.43). Marginal structural measurement-error models can simultaneously account for 3 major sources of bias in epidemiologic research: validated exposure measurement error, measured selection bias, and measured time-fixed and time-varying confounding. PMID:19934191
Wide-aperture laser beam measurement using transmission diffuser: errors modeling
NASA Astrophysics Data System (ADS)
Matsak, Ivan S.
2015-06-01
Instrumental errors in measuring the diameter of a wide-aperture laser beam were modeled in order to design a measurement setup and justify its metrological characteristics. The modeled setup is based on a CCD camera and a transmission diffuser. This method is appropriate for precision measurement of large laser beam widths from 10 mm up to 1000 mm; such beams cannot be measured with methods based on a slit, pinhole, knife edge, or direct CCD camera imaging. The method is suitable for continuous and pulsed laser irradiation. However, the transmission diffuser method lacks the metrological justification required in the field of wide-aperture beam-forming system verification. Given that no standard wide-aperture flat-top beam is available, modelling is the preferred way to provide basic reference points for developing the measurement system. Modelling was conducted in MathCAD. A Super-Lorentz distribution with shape parameter 6-12 was used as the beam model. Theoretical evaluation showed that the key parameters influencing the error are: relative beam size, spatial non-uniformity of the diffuser, lens distortion, physical vignetting, CCD spatial resolution and effective camera ADC resolution. Errors were modeled for the 90%-of-power beam diameter criterion. The 12th-order Super-Lorentz distribution was the primary model because it closely matches the experimental distribution at the output of the test beam-forming system, although other orders were also used. Analytic expressions were obtained by analyzing the modelling results for each influencing factor. It was shown that an error of <1% is attainable through a suitable choice of parameters in these expressions, based on commercially available components for the setup. The method can provide down to 0.1% error when calibration procedures and multiple measurements are used.
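The 90%-of-power diameter criterion for a Super-Lorentz (flat-top) radial profile can be evaluated numerically from the cumulative radial power integral; a sketch of the computation (width and grid parameters illustrative):

```python
import numpy as np

def power_diameter(radius, intensity, fraction=0.9):
    """Diameter enclosing `fraction` of total power for a radial profile."""
    power = np.cumsum(intensity * radius)  # proportional to integral of I(r)*r dr
    power /= power[-1]
    return 2.0 * radius[np.searchsorted(power, fraction)]

r = np.linspace(1e-4, 50.0, 20000)            # radial grid, mm
p = 12                                        # Super-Lorentz shape order
w = 10.0                                      # mm, profile half-width (assumed)
intensity = 1.0 / (1.0 + (r / w) ** (2 * p))  # flat-top model profile

d90 = power_diameter(r, intensity, 0.9)       # close to sqrt(0.9) * 2w for flat tops
```

For a nearly ideal flat top the 90%-power diameter sits just below the geometric edge diameter, so the criterion is sensitive to how sharply the model's edge rolls off.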
[Errors in medicine. Causes, impact and improvement measures to improve patient safety].
Waeschle, R M; Bauer, M; Schmidt, C E
2015-09-01
The guarantee of quality of care and patient safety is of major importance in hospitals, even though increased economic pressure and work intensification are ubiquitous. Nevertheless, adverse events still occur in 3-4% of hospital stays, and of these 25-50% are estimated to be avoidable. The identification of possible causes of error and the development of measures for the prevention of medical errors are essential for patient safety. The implementation and continuous development of a constructive culture of error tolerance are fundamental. The origins of errors can be differentiated into systemic latent and individual active causes; components of both categories are typically involved when an error occurs. Systemic causes include, for example, outdated structural environments, lack of clinical standards and low personnel density. These causes arise far away from the patient, e.g. in management decisions, and can remain unrecognized for a long time. Individual causes include, e.g., confirmation bias, fixation error and prospective memory failure. These causes have a direct impact on patient care and can result in immediate injury to patients. Stress, unclear information, complex systems and a lack of professional experience can promote individual causes. Awareness of possible causes of error is a fundamental precondition for establishing appropriate countermeasures. Error prevention should include actions directly addressing the causes of error, including checklists and standard operating procedures (SOP) to avoid fixation errors and prospective memory failure, and team resource management to improve communication and the generation of shared mental models. Critical incident reporting systems (CIRS) provide the opportunity to learn from previous incidents without injury to patients. Information technology (IT) support systems, such as computerized physician order entry systems, assist in the prevention of medication errors by providing
Birch, Gabriel Carisle; Griffin, John Clark
2015-07-23
Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.
NASA Technical Reports Server (NTRS)
Mebratu, Derssie; Kegege, Obadiah; Shaw, Harry
2016-01-01
A digital signal is transmitted via a carrier wave, demodulated at a receiver, and ideally lands on an ideal constellation position. However, noise, carrier leakage, and phase noise shift the actual constellation position of the signal to a new position. To assess sources of noise and carrier leakage, the Bit Error Rate (BER) measurement technique is used to evaluate the number of erroneous bits per transmitted bit. In addition, we present Error Vector Magnitude (EVM), which measures the deviation between the ideal and actual positions, assesses sources of signal distortion, and evaluates a wireless communication system's performance with a single metric. Applying the EVM technique, we measure the performance of a User Services Subsystem Component Replacement (USSCR) modem. Furthermore, we propose the EVM measurement technique for the Tracking and Data Relay Satellite system (TDRS) to measure and evaluate channel impairment between the ground (transmitter) and the terminal (receiver) at White Sands Complex.
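The RMS EVM is the RMS length of the error vector (received symbol minus ideal symbol) normalized by the RMS ideal-symbol magnitude. A numpy sketch for a QPSK constellation with assumed noise and carrier-leakage levels (illustrative, not the USSCR modem's figures):

```python
import numpy as np

rng = np.random.default_rng(7)

# Ideal QPSK constellation (unit average power) and a transmitted symbol stream
ideal = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
tx = rng.choice(ideal, size=10000)

# Received symbols: additive noise plus a small carrier-leakage (DC) offset
noise = 0.05 * (rng.standard_normal(10000) + 1j * rng.standard_normal(10000))
rx = tx + noise + 0.02

# RMS EVM: RMS error-vector length over RMS ideal-symbol magnitude
evm_rms = np.sqrt(np.mean(np.abs(rx - tx) ** 2) / np.mean(np.abs(ideal) ** 2))
print(f"EVM = {100 * evm_rms:.1f}%")
```

Unlike BER, which only counts decision errors, EVM degrades smoothly with impairment level, so it reveals distortion sources even when no bits are yet in error.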
Dong, Zhichao; Cheng, Haobo; Feng, Yunpeng; Su, Jingshi; Wu, Hengyu; Tam, Hon-Yuen
2015-07-01
This study presents a subaperture stitching method to calibrate system errors of several ∼2 m large scale 3D profile measurement instruments (PMIs). The calibration process was carried out by measuring a Φ460 mm standard flat sample multiple times at different sites of the PMI with a length gauge; then the subaperture data were stitched together using a sequential or simultaneous stitching algorithm that minimizes the inconsistency (i.e., difference) of the discrete data in the overlapped areas. The system error can be used to compensate the measurement results of not only large flats, but also spheres and aspheres. The feasibility of the calibration was validated by measuring a Φ1070 mm aspheric mirror, which can raise the measurement accuracy of PMIs and provide more reliable 3D surface profiles for guiding grinding, lapping, and even initial polishing processes. PMID:26193139
Point cloud uncertainty analysis for laser radar measurement system based on error ellipsoid model
NASA Astrophysics Data System (ADS)
Zhengchun, Du; Zhaoyong, Wu; Jianguo, Yang
2016-04-01
Three-dimensional laser scanning has become an increasingly popular measurement method in industrial fields as it provides a non-contact means of measuring large objects, whereas the conventional methods are contact-based. However, the data acquisition process is subject to many interference factors, which inevitably cause errors. Therefore, it is necessary to precisely evaluate the accuracy of the measurement results. In this study, an error-ellipsoid-based uncertainty model was applied to 3D laser radar measurement system (LRMS) data. First, a spatial point uncertainty distribution map was constructed according to the error ellipsoid attributes. The single-point uncertainty ellipsoid model was then extended to point-point, point-plane, and plane-plane situations, and the corresponding distance uncertainty models were derived. Finally, verification experiments were performed by using an LRMS to measure the height of a cubic object, and the measurement accuracies were evaluated. The results show that the plane-plane distance uncertainties determined based on the ellipsoid model are comparable to those obtained by actual distance measurements. Thus, this model offers solid theoretical support to enable further LRMS measurement accuracy improvement.
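The single-point error ellipsoid is defined by the eigendecomposition of the point's covariance matrix, and point-to-point distance uncertainty follows by projecting the summed covariances onto the line between the points. A numpy sketch with an assumed (hypothetical) covariance:

```python
import numpy as np

# Hypothetical 3x3 covariance of one scanned point (range/angle errors
# propagated into x, y, z), units mm^2
cov = np.array([[0.25, 0.05, 0.00],
                [0.05, 0.16, 0.02],
                [0.00, 0.02, 0.09]])

# Error ellipsoid: axes along eigenvectors, semi-axes k*sqrt(eigenvalues)
k = 1.0  # 1-sigma ellipsoid
evals, evecs = np.linalg.eigh(cov)
semi_axes = k * np.sqrt(evals)

# Point-to-point distance uncertainty: covariances add for independent points,
# projected onto the unit vector between the two points
cov_diff = cov + cov
u = np.array([0.0, 0.0, 1.0])           # measurement direction (e.g. height)
sigma_dist = np.sqrt(u @ cov_diff @ u)  # 1-sigma distance uncertainty (mm)
```

The same projection idea extends to the point-plane and plane-plane cases by propagating the fitted-plane parameter covariances instead of raw point covariances.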
NASA Astrophysics Data System (ADS)
Du, Z. C.; Lv, C. F.; Hong, M. S.
2006-10-01
A new error modelling and identification method based on the cross grid encoder is proposed in this paper. In general, the geometric error of a 3-axis NC machine tool has 21 components. However, according to our theoretical analysis, the squareness errors among the guideways affect not only the translational error components but also the rotational ones. Therefore, a revised synthetic error model is developed, and the mapping relationship between the error components and the radial motion error of a round workpiece manufactured on the NC machine tool is deduced. This mapping relationship shows that the radial error of circular motion is the combined result of all the error components of the link, worktable, sliding table and main spindle block. To overcome the solution-singularity shortcoming of traditional error component identification methods, a new multi-step identification method using cross grid encoder measurement is proposed, based on the kinematic error model of the NC machine tool. First, the 12 translational error components are measured and identified by the least squares method (LSM) as the machine moves linearly in the three orthogonal planes: XOY, XOZ and YOZ. Second, the circular error tracks are measured as the machine moves circularly in the same orthogonal planes using the cross grid encoder Heidenhain KGM 182, from which the 9 rotational errors are identified by LSM. Finally, the modelling theory and identification method are validated experimentally on a 3-axis CNC vertical machining centre, a Cincinnati 750 Arrow; all 21 error components were successfully measured. This research shows that the multi-step modelling and identification method is well suited for on-machine measurement.
ERIC Educational Resources Information Center
Pan, Tianshu; Yin, Yue
2012-01-01
In the discussion of mean square difference (MSD) and standard error of measurement (SEM), Barchard (2012) concluded that the MSD between 2 sets of test scores is greater than 2(SEM)[superscript 2] and SEM underestimates the score difference between 2 tests when the 2 tests are not parallel. This conclusion has limitations for 2 reasons. First,…
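Barchard's inequality is easy to verify numerically: for parallel forms the MSD is approximately 2(SEM)², while any systematic true-score difference between the forms pushes the MSD above that floor. A simulation sketch with illustrative score parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100000

true_score = rng.normal(50, 10, n)
sem = 3.0  # standard error of measurement

# Parallel forms: same true score, independent errors
x1 = true_score + rng.normal(0, sem, n)
x2 = true_score + rng.normal(0, sem, n)
msd_parallel = np.mean((x1 - x2) ** 2)     # approx 2*SEM**2 = 18

# Non-parallel forms: true scores differ systematically by 2 points
x3 = (true_score + 2.0) + rng.normal(0, sem, n)
msd_nonparallel = np.mean((x1 - x3) ** 2)  # approx 2*SEM**2 + 2**2 = 22
```

The excess of MSD over 2(SEM)² is exactly the squared true-score difference, which is why SEM alone understates the expected score difference between non-parallel tests.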
Fast error simulation of optical 3D measurements at translucent objects
NASA Astrophysics Data System (ADS)
Lutzke, P.; Kühmstedt, P.; Notni, G.
2012-09-01
The scan results of optical 3D measurements of translucent objects deviate from the real object surface. This error is caused by the fact that light is scattered in the object's volume and is not exclusively reflected at its surface. A few approaches have been proposed to separate the surface-reflected light from the volume-scattered light. For smooth objects the surface-reflected light is concentrated predominantly in the specular direction and can only be observed from a point in that direction. The separation therefore either yields measurement data only for near-specular directions or provides data from poorly separated areas. To ensure the flexibility and precision of optical 3D measurement systems for translucent materials, it is necessary to improve the understanding of the error-forming process. For this purpose, a technique for simulating 3D measurement of translucent objects is presented. A simple error model is briefly outlined and extended to an efficient simulation environment based on ordinary raytracing methods; for comparison, the results of a Monte-Carlo simulation are presented. Only a few material and object parameters are needed for the raytracing simulation approach. The attempt to collect these material- and object-specific parameters in-system is illustrated. The main concept of developing an error compensation method based on the simulation environment and the collected parameters is described. The complete procedure uses both the surface-reflected and the volume-scattered light for further processing.
The Impact of Measurement Error on the Accuracy of Individual and Aggregate SGP
ERIC Educational Resources Information Center
McCaffrey, Daniel F.; Castellano, Katherine E.; Lockwood, J. R.
2015-01-01
Student growth percentiles (SGPs) express students' current observed scores as percentile ranks in the distribution of scores among students with the same prior-year scores. A common concern about SGPs at the student level, and mean or median SGPs (MGPs) at the aggregate level, is potential bias due to test measurement error (ME). Shang,…
Sensitivity of Force Specifications to the Errors in Measuring the Interface Force
NASA Technical Reports Server (NTRS)
Worth, Daniel
2000-01-01
Force-Limited Random Vibration Testing has been applied over the last several years at the NASA Goddard Space Flight Center (GSFC) and other NASA centers for various programs at the instrument and spacecraft level. Different techniques have been developed over the last few decades to estimate the dynamic forces that the test article under consideration will encounter in the flight environment. Some of these techniques are described in the handbook NASA-HDBK-7004 and the monograph NASA-RP-1403. This paper shows the effects of some measurement and calibration errors in force gauges. In some cases, the notches in the acceleration spectrum when a random vibration test is performed with measurement errors are the same as the notches produced during a test that has no measurement errors. The paper also presents the results of tests that were used to validate this effect. Knowing the effect of measurement errors can allow tests to continue after force gauge failures, or allow dummy gauges to be used in places that are inaccessible to a force gauge.
Estimating Conditional Standard Errors of Measurement for Tests Composed of Testlets.
ERIC Educational Resources Information Center
Lee, Guemin
The primary purpose of this study was to investigate the appropriateness and implication of incorporating a testlet definition into the estimation of the conditional standard error of measurement (SEM) for tests composed of testlets. The five conditional SEM estimation methods used in this study were classified into two categories: item-based and…
Measurement Error in Nonparametric Item Response Curve Estimation. Research Report. ETS RR-11-28
ERIC Educational Resources Information Center
Guo, Hongwen; Sinharay, Sandip
2011-01-01
Nonparametric, or kernel, estimation of item response curve (IRC) is a concern theoretically and operationally. Accuracy of this estimation, often used in item analysis in testing programs, is biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. In this study, we investigate…
Covariate Measurement Error Adjustment for Multilevel Models with Application to Educational Data
ERIC Educational Resources Information Center
Battauz, Michela; Bellio, Ruggero; Gori, Enrico
2011-01-01
This article proposes a multilevel model for the assessment of school effectiveness where the intake achievement is a predictor and the response variable is the achievement in the subsequent periods. The achievement is a latent variable that can be estimated on the basis of an item response theory model and hence subject to measurement error.…
Correlation Attenuation Due to Measurement Error: A New Approach Using the Bootstrap Procedure
ERIC Educational Resources Information Center
Padilla, Miguel A.; Veprinsky, Anna
2012-01-01
Issues with correlation attenuation due to measurement error are well documented. More than a century ago, Spearman proposed a correction for attenuation. However, this correction has seen very little use since it can potentially inflate the true correlation beyond one. In addition, very little confidence interval (CI) research has been done for…
Measurement Error of Scores on the Mathematics Anxiety Rating Scale across Studies.
ERIC Educational Resources Information Center
Capraro, Mary Margaret; Capraro, Robert M.; Henson, Robin K.
2001-01-01
Submitted the Mathematics Anxiety Rating Scale (MARS) (F. Richardson and R. Suinn, 1972) to a reliability generalization analysis to characterize the variability of measurement error in MARS scores across administrations and identify characteristics predictive of score reliability variations. Results for 67 analyses generally support the internal…
A new method for dealing with measurement error in explanatory variables of regression models.
Freedman, Laurence S; Fainberg, Vitaly; Kipnis, Victor; Midthune, Douglas; Carroll, Raymond J
2004-03-01
We introduce a new method, moment reconstruction, of correcting for measurement error in covariates in regression models. The central idea is similar to regression calibration in that the values of the covariates that are measured with error are replaced by "adjusted" values. In regression calibration the adjusted value is the expectation of the true value conditional on the measured value. In moment reconstruction the adjusted value is the variance-preserving empirical Bayes estimate of the true value conditional on the outcome variable. The adjusted values thereby have the same first two moments and the same covariance with the outcome variable as the unobserved "true" covariate values. We show that moment reconstruction is equivalent to regression calibration in the case of linear regression, but leads to different results for logistic regression. For case-control studies with logistic regression and covariates that are normally distributed within cases and controls, we show that the resulting estimates of the regression coefficients are consistent. In simulations we demonstrate that for logistic regression, moment reconstruction carries less bias than regression calibration, and for case-control studies is superior in mean-square error to the standard regression calibration approach. Finally, we give an example of the use of moment reconstruction in linear discriminant analysis and a nonstandard problem where we wish to adjust a classification tree for measurement error in the explanatory variables. PMID:15032787
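For linear regression, both moment reconstruction and regression calibration undo the familiar attenuation of a naively estimated slope. A numpy sketch of the regression calibration step under an assumed known error variance (synthetic data, illustrative slope of 2):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50000

x = rng.normal(0, 1, n)            # true covariate
w = x + rng.normal(0, 1, n)        # measured with error, error variance = 1
y = 2.0 * x + rng.normal(0, 1, n)  # outcome, true slope = 2

# Naive regression of y on w is attenuated by
# lambda = var_x / (var_x + var_error) = 0.5
beta_naive = np.polyfit(w, y, 1)[0]

# Regression calibration: replace w by E[x | w] = lambda * w (zero means),
# assuming the error variance is known from validation or replicate data
lam = 1.0 / (1.0 + 1.0)
x_rc = lam * w
beta_rc = np.polyfit(x_rc, y, 1)[0]  # consistent for the true slope 2.0
```

Moment reconstruction instead builds adjusted values conditional on the outcome that preserve the first two moments, which is what lets it outperform regression calibration in the logistic case described above.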
Using Computation Curriculum-Based Measurement Probes for Error Pattern Analysis
ERIC Educational Resources Information Center
Dennis, Minyi Shih; Calhoon, Mary Beth; Olson, Christopher L.; Williams, Cara
2014-01-01
This article describes how "curriculum-based measurement--computation" (CBM-C) mathematics probes can be used in combination with "error pattern analysis" (EPA) to pinpoint difficulties in basic computation skills for students who struggle with learning mathematics. Both assessment procedures provide ongoing assessment data…
ERIC Educational Resources Information Center
Kim, Sehwan; McLeod, Jonnie H.; Williams, Charles; Hepler, Nancy
2000-01-01
Faced with the absence of a conceptual framework and the terminology for establishing evaluation criteria in the substance abuse prevention services field, this special issue is devoted to exploring the topics of accountability and performance measures. It discusses the requisite components (i.e., theory, methodology, convention on terms, data)…
NASA Astrophysics Data System (ADS)
Vorontsov, Yurii I.
1994-01-01
The so-called standard quantum limits (SQL) of measurement errors of coordinate, momentum, amplitude of oscillations, energy, force etc. are due to back action of the meter on the system under test, whenever the meter responds to the coordinate of the system. These SQL are not fundamental and can be surmounted by various methods. In particular, in a coordinate measurement the SQL can be overcome by means of an appropriate correlation of conjugate meter variables. Conditions of quantum nonperturbing (nondemolition) and quasi-nonperturbing measurements of the energy of electromagnetic waves are discussed. Possible methods of these measurements are reviewed. Conditions for overcoming the SQL of wave energy measurement by the optical Kerr effect are analysed. The quantum limit of error of this measurement is discussed. The effects of dissipation, dispersion and generation of combination waves are considered. Results of experiments reported in the literature are discussed. The dependence of the quantum limit of detection of an external action upon a system on the initial state of the system is considered. The relation between the measurement error of an observable A and a perturbation of an observable B, when [A,B] is an operator, is examined.
Improved error separation technique for on-machine optical lens measurement
NASA Astrophysics Data System (ADS)
Fu, Xingyu; Bing, Guo; Zhao, Qingliang; Rao, Zhimin; Cheng, Kai; Mulenga, Kabwe
2016-04-01
This paper describes an improved error separation technique (EST) for on-machine surface profile measurement which can be applied to optical lenses on precision and ultra-precision machine tools. With only one precise probe and a linear stage, improved EST not only reduces measurement costs, but also shortens the sampling interval, which implies that this method can be used to measure the profile of small-bore lenses. The improved EST with stitching method can be applied to measure the profile of high-height lenses as well. Since the improvement is simple, most of the traditional EST can be modified by this method. The theoretical analysis and experimental results in this paper show that the improved EST eliminates the slide error successfully and generates an accurate lens profile.
Error analysis for the ground-based microwave ozone measurements during STOIC
NASA Technical Reports Server (NTRS)
Connor, Brian J.; Parrish, Alan; Tsou, Jung-Jung; McCormick, M. Patrick
1995-01-01
We present a formal error analysis and characterization of the microwave measurements made during the Stratospheric Ozone Intercomparison Campaign (STOIC). The most important error sources are found to be determination of the tropospheric opacity, the pressure-broadening coefficient of the observed line, and systematic variations in instrument response as a function of frequency ('baseline'). Net precision is 4-6% between 55 and 0.2 mbar, while accuracy is 6-10%. Resolution is 8-10 km below 3 mbar and increases to 17 km at 0.2 mbar. We show the 'blind' microwave measurements from STOIC and make limited comparisons to other measurements. We use the averaging kernels of the microwave measurement to eliminate resolution and a priori effects from a comparison to SAGE II. The STOIC results and comparisons are broadly consistent with the formal analysis.
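The averaging-kernel comparison mentioned above is conventionally done by convolving the high-resolution profile with the kernels before differencing. A hedged sketch with synthetic profiles (the grid, kernel widths, and values are assumptions, not STOIC data):

```python
import numpy as np

nlev = 20
x_a = np.full(nlev, 5.0)                            # a priori profile (arbitrary units)
x_sage = 5.0 + np.sin(np.linspace(0, np.pi, nlev))  # high-resolution comparison profile

# Toy averaging-kernel matrix: each row is a normalized Gaussian smoothing function.
A = np.zeros((nlev, nlev))
for i in range(nlev):
    w = np.exp(-0.5 * ((np.arange(nlev) - i) / 2.0) ** 2)
    A[i] = w / w.sum()

# Smooth the high-resolution profile to the lower-resolution retrieval:
# x_conv = x_a + A @ (x_sage - x_a), removing resolution and a priori effects
# before any difference against the microwave retrieval is computed.
x_conv = x_a + A @ (x_sage - x_a)
```

The smoothed profile has reduced vertical structure, which is the point: both data sets are then compared at the same effective resolution.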
ERIC Educational Resources Information Center
Harshman, Jordan; Yezierski, Ellen
2016-01-01
Determining the error of measurement is a necessity for researchers engaged in bench chemistry, chemistry education research (CER), and a multitude of other fields. Discussions have occurred regarding what the construct of measurement error entails and how best to measure it, but critiques of traditional measures have yielded few alternatives.…
Errors in polarization measurements due to static retardation in photoelastic modulators
Modine, F.A.; Jellison, G.E. Jr.
1993-03-01
A mathematical description of photoelastic polarization modulators is developed for the general case in which the modulator exhibits a static retardation that is not collinear with the dynamic retardation of the modulator. Simplifying approximations are introduced which are appropriate to practical use of the modulators in polarization measurements. Measurement errors due to the modulator static retardation, along with procedures for their elimination, are described for reflection ellipsometers, linear dichrometers, and polarimeters.
NASA Technical Reports Server (NTRS)
Merhav, S.; Velger, M.
1991-01-01
A method based on complementary filtering is shown to be effective in compensating for the image stabilization error due to sampling delays of HMD position and orientation measurements. These delays would otherwise have prevented the stabilization of the image in HMDs. The method is also shown to improve the resolution of the head orientation measurement, particularly at low frequencies, thus providing smoother head control commands, which are essential for precise head pointing and teleoperation.
The Measure of Human Error: Direct and Indirect Performance Shaping Factors
Ronald L. Boring; Candice D. Griffith; Jeffrey C. Joe
2007-08-01
The goal of performance shaping factors (PSFs) is to provide measures to account for human performance. PSFs fall into two categories—direct and indirect measures of human performance. While some PSFs such as “time to complete a task” are directly measurable, other PSFs, such as “fitness for duty,” can only be measured indirectly through other measures and PSFs, such as through fatigue measures. This paper explores the role of direct and indirect measures in human reliability analysis (HRA) and the implications that measurement theory has on analyses and applications using PSFs. The paper concludes with suggestions for maximizing the reliability and validity of PSFs.
Effects of Spectral Error in Efficiency Measurements of GaInAs-Based Concentrator Solar Cells
Osterwald, C. R.; Wanlass, M. W.; Moriarty, T.; Steiner, M. A.; Emery, K. A.
2014-03-01
This technical report documents a particular error in efficiency measurements of triple-absorber concentrator solar cells caused by incorrect spectral irradiance -- specifically, one that occurs when the irradiance from unfiltered, pulsed xenon solar simulators into the GaInAs bottom subcell is too high. For cells designed so that the light-generated photocurrents in the three subcells are nearly equal, this condition can cause a large increase in the measured fill factor, which, in turn, causes a significant artificial increase in the efficiency. The error is readily apparent when the data under concentration are compared to measurements with correctly balanced photocurrents, and manifests itself as discontinuities in plots of fill factor and efficiency versus concentration ratio. In this work, we simulate the magnitudes and effects of this error with a device-level model of two concentrator cell designs, and demonstrate how a new Spectrolab, Inc., Model 460 Tunable-High Intensity Pulsed Solar Simulator (T-HIPSS) can mitigate the error.
Measurement error in two-stage analyses, with application to air pollution epidemiology
Szpiro, Adam A.; Paciorek, Christopher J.
2014-01-01
Public health researchers often estimate health effects of exposures (e.g., pollution, diet, lifestyle) that cannot be directly measured for study subjects. A common strategy in environmental epidemiology is to use a first-stage (exposure) model to estimate the exposure based on covariates and/or spatio-temporal proximity and to use predictions from the exposure model as the covariate of interest in the second-stage (health) model. This induces a complex form of measurement error. We propose an analytical framework and methodology that is robust to misspecification of the first-stage model and provides valid inference for the second-stage model parameter of interest. We decompose the measurement error into components analogous to classical and Berkson error and characterize properties of the estimator in the second-stage model if the first-stage model predictions are plugged in without correction. Specifically, we derive conditions for compatibility between the first- and second-stage models that guarantee consistency (and have direct and important real-world design implications), and we derive an asymptotic estimate of finite-sample bias when the compatibility conditions are satisfied. We propose a methodology that (1) corrects for finite-sample bias and (2) correctly estimates standard errors. We demonstrate the utility of our methodology in simulations and an example from air pollution epidemiology. PMID:24764691
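The classical/Berkson distinction that this framework builds on can be illustrated with a toy linear simulation (this is not the authors' estimator; the model and numbers are assumptions): plugging in a correctly specified first-stage prediction behaves like Berkson error and leaves the slope consistent, while classical error attenuates it.

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta = 200_000, 2.0
z = rng.normal(size=n)                  # exposure-model covariate
x = z + rng.normal(scale=1.0, size=n)   # true exposure
y = beta * x + rng.normal(size=n)       # health outcome

# Plug-in prediction xhat = E[x|z] = z: the residual x - xhat is independent
# of xhat (Berkson-like), so the naive slope estimate stays near beta = 2.
xhat = z
b_berkson = np.cov(xhat, y)[0, 1] / xhat.var()

# Classical error (w = x + noise) attenuates the slope toward zero instead,
# here by the factor Var(x)/Var(w) = 2/3.
w = x + rng.normal(size=n)
b_classical = np.cov(w, y)[0, 1] / w.var()
```

When the first-stage model is misspecified, the error acquires a classical-like component, which is the situation the paper's bias correction addresses.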
A method of treating the non-grey error in total emittance measurements
NASA Technical Reports Server (NTRS)
Heaney, J. B.; Henninger, J. H.
1971-01-01
In techniques for the rapid determination of total emittance, the sample is generally exposed to surroundings that are at a different temperature than the sample's surface. When the infrared spectral reflectance of the surface is spectrally selective, these techniques introduce an error into the total emittance values. Surfaces of aluminum overcoated with oxides of various thicknesses fall into this class. Because they are often used as temperature control coatings on satellites, their emittances must be accurately known. The magnitude of the error was calculated for Alzak and silicon oxide-coated aluminum and was shown to be dependent on the thickness of the oxide coating. The results demonstrate that, because the magnitude of the error is thickness-dependent, it is generally impossible or impractical to eliminate it by calibrating the measuring device.
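The mechanism can be sketched numerically: the total (Planck-weighted) emittance of a spectrally selective surface depends on the temperature used for the weighting, so surroundings at a different temperature than the sample bias the measured value. The step-function spectral emittance below is a toy assumption, not the Alzak or silicon oxide data.

```python
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    """Spectral radiance of a blackbody at wavelength lam (m) and temperature T (K)."""
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

lam = np.linspace(2e-6, 40e-6, 4000)       # thermal-IR band
eps = np.where(lam < 10e-6, 0.2, 0.9)      # selective surface: low emittance below 10 µm

def total_emittance(T):
    """Planck-weighted total emittance using a blackbody at temperature T."""
    b = planck(lam, T)
    return np.trapz(eps * b, lam) / np.trapz(b, lam)

e_sample = total_emittance(300.0)   # weighting at the sample temperature
e_meas = total_emittance(500.0)     # weighting biased by warmer surroundings
```

Because the 500 K weighting shifts energy toward the low-emittance band, `e_meas` comes out well below `e_sample`, mimicking the thickness-dependent non-grey error the abstract describes.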
Measurement and simulation of clock errors from resource-constrained embedded systems
NASA Astrophysics Data System (ADS)
Collett, M. A.; Matthews, C. E.; Esward, T. J.; Whibberley, P. B.
2010-07-01
Resource-constrained embedded systems such as wireless sensor networks are becoming increasingly sought-after in a range of critical sensing applications. Hardware for such systems is typically developed as a general tool, intended for research and flexibility. These systems often have unexpected limitations and sources of error when being implemented for specific applications. We investigate via measurement and simulation the output of the onboard clock of a Crossbow MICAz testbed, comprising a quartz oscillator accessed via a combination of hardware and software. We show that the clock output available to the user suffers a number of instabilities and errors. Using a simple software simulation of the system based on a series of nested loops, we identify the source of each component of the error, finding that there is a 7.5 × 10−6 probability that a given oscillation from the governing crystal will be miscounted, resulting in frequency jitter over a 60 µHz range.
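The quoted miscount probability lends itself to a quick Monte Carlo check of the implied frequency error. A hedged sketch follows; the crystal frequency and the assumption that miscounts are independent per oscillation are mine, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)

# Each crystal oscillation is independently dropped with probability p, so over
# N nominal oscillations the counted total is Binomial(N, 1 - p).
p = 7.5e-6                  # per-oscillation miscount probability (from the abstract)
f_nominal = 7_372_800       # a typical MICAz crystal frequency in Hz (assumption)
trials = 1000

counts = rng.binomial(f_nominal, 1.0 - p, size=trials)  # ticks counted in 1 s
freq_offset = counts - f_nominal        # Hz deviation per one-second interval
mean_offset = freq_offset.mean()        # analytically ≈ -p * f_nominal ≈ -55 Hz
```

The spread of `freq_offset` across trials gives the jitter contribution of the miscount mechanism alone, separate from systematic drift.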
Error reduction methods for integrated-path differential-absorption lidar measurements.
Chen, Jeffrey R; Numata, Kenji; Wu, Stewart T
2012-07-01
We report new modeling and error reduction methods for differential-absorption optical-depth (DAOD) measurements of atmospheric constituents using direct-detection integrated-path differential-absorption lidars. Errors from laser frequency noise are quantified in terms of the line center fluctuation and spectral line shape of the laser pulses, revealing relationships verified experimentally. A significant DAOD bias is removed by introducing a correction factor. Errors from surface height and reflectance variations can be reduced to tolerable levels by incorporating altimetry knowledge and "log after averaging", or by pointing the laser and receiver to a fixed surface spot during each wavelength cycle to shorten the time of "averaging before log". PMID:22772254
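The benefit of "log after averaging" over "averaging before log" can be shown with a toy DAOD calculation: with additive detector noise, taking the log shot by shot and then averaging biases the optical depth, while averaging the powers first lets the noise cancel. All signal and noise levels below are assumptions for the sketch, not instrument values.

```python
import numpy as np

rng = np.random.default_rng(3)

true_daod = 0.5
n = 100_000
sigma = 0.1                                               # additive detector noise (std)
p_off = 1.0 + rng.normal(scale=sigma, size=n)             # off-line return powers
p_on = np.exp(-true_daod) + rng.normal(scale=sigma, size=n)  # on-line return powers

# "Averaging before log": the mean of per-shot logs is biased by roughly
# -sigma^2 / (2 P^2) per channel (second-order Taylor of ln), which inflates
# the inferred optical depth.
daod_shotwise = np.mean(np.log(p_off)) - np.mean(np.log(p_on))

# "Log after averaging": the zero-mean noise averages out of the mean powers
# before the nonlinear log is applied, so the estimate stays near truth.
daod_avg = np.log(p_off.mean()) - np.log(p_on.mean())
```

The bias grows as per-shot SNR drops, which is why the averaging order matters most for weak surface returns.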
Skylab water balance error analysis
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1977-01-01
Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
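The propagation-of-error bookkeeping described above, including the covariance interaction terms, can be sketched in a few lines: a balance is a signed sum of measured terms, so its variance is the quadratic form of the sign vector with the terms' covariance matrix. The numbers below are made up for illustration; they are not the Skylab values.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000
intake = rng.normal(3.0, 0.20, n)                  # water intake (kg/day)
urine = 0.5 * intake + rng.normal(1.0, 0.10, n)    # deliberately correlated with intake
evap = rng.normal(0.9, 0.15, n)                    # evaporative loss, independent

balance = intake - urine - evap

# Propagated variance: with coefficients c = (+1, -1, -1),
# Var(balance) = c @ Cov @ c, where the off-diagonal covariance terms are
# exactly the "interaction" contributions discussed in the abstract.
c = np.array([1.0, -1.0, -1.0])
cov = np.cov(np.vstack([intake, urine, evap]))
var_prop = c @ cov @ c
```

With the sample covariance matrix, `var_prop` reproduces the sample variance of the balance exactly, which is a useful self-check before attributing error shares to individual terms.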
ERIC Educational Resources Information Center
Grantham, Marilyn H.
Some observers of political phenomena are referring to the 1990s as the "age of accountability." Early in the decade of the '90s, articles in periodicals, professional journals and other sources were voicing warnings about increasing public policymaker frustration with higher education and the spreading development and implementation of…
Accounting for Correlations across Measures of Perspective Taking.
ERIC Educational Resources Information Center
Rose, Samuel P.
This study examined the development of cognitive perspective taking skills and the lack of consistency across perspective taking measures in earlier studies. Four perspective taking measures were administered to 56 children between 4 and 10 years of age under two testing conditions. The high structure condition included multiple presentation of…
Fan, Qiao; Verhoeven, Virginie J. M.; Wojciechowski, Robert; Barathi, Veluchamy A.; Hysi, Pirro G.; Guggenheim, Jeremy A.; Höhn, René; Vitart, Veronique; Khawaja, Anthony P.; Yamashiro, Kenji; Hosseini, S Mohsen; Lehtimäki, Terho; Lu, Yi; Haller, Toomas; Xie, Jing; Delcourt, Cécile; Pirastu, Mario; Wedenoja, Juho; Gharahkhani, Puya; Venturini, Cristina; Miyake, Masahiro; Hewitt, Alex W.; Guo, Xiaobo; Mazur, Johanna; Huffman, Jenifer E.; Williams, Katie M.; Polasek, Ozren; Campbell, Harry; Rudan, Igor; Vatavuk, Zoran; Wilson, James F.; Joshi, Peter K.; McMahon, George; St Pourcain, Beate; Evans, David M.; Simpson, Claire L.; Schwantes-An, Tae-Hwi; Igo, Robert P.; Mirshahi, Alireza; Cougnard-Gregoire, Audrey; Bellenguez, Céline; Blettner, Maria; Raitakari, Olli; Kähönen, Mika; Seppala, Ilkka; Zeller, Tanja; Meitinger, Thomas; Ried, Janina S.; Gieger, Christian; Portas, Laura; van Leeuwen, Elisabeth M.; Amin, Najaf; Uitterlinden, André G.; Rivadeneira, Fernando; Hofman, Albert; Vingerling, Johannes R.; Wang, Ya Xing; Wang, Xu; Tai-Hui Boh, Eileen; Ikram, M. Kamran; Sabanayagam, Charumathi; Gupta, Preeti; Tan, Vincent; Zhou, Lei; Ho, Candice E. H.; Lim, Wan'e; Beuerman, Roger W.; Siantar, Rosalynn; Tai, E-Shyong; Vithana, Eranga; Mihailov, Evelin; Khor, Chiea-Chuen; Hayward, Caroline; Luben, Robert N.; Foster, Paul J.; Klein, Barbara E. K.; Klein, Ronald; Wong, Hoi-Suen; Mitchell, Paul; Metspalu, Andres; Aung, Tin; Young, Terri L.; He, Mingguang; Pärssinen, Olavi; van Duijn, Cornelia M.; Jin Wang, Jie; Williams, Cathy; Jonas, Jost B.; Teo, Yik-Ying; Mackey, David A.; Oexle, Konrad; Yoshimura, Nagahisa; Paterson, Andrew D.; Pfeiffer, Norbert; Wong, Tien-Yin; Baird, Paul N.; Stambolian, Dwight; Wilson, Joan E. Bailey; Cheng, Ching-Yu; Hammond, Christopher J.; Klaver, Caroline C. W.; Saw, Seang-Mei; Rahi, Jugnoo S.; Korobelnik, Jean-François; Kemp, John P.; Timpson, Nicholas J.; Smith, George Davey; Craig, Jamie E.; Burdon, Kathryn P.; Fogarty, Rhys D.; Iyengar, Sudha K.; Chew, Emily; Janmahasatian, Sarayut; Martin, Nicholas G.; MacGregor, Stuart; Xu, Liang; Schache, Maria; Nangia, Vinay; Panda-Jonas, Songhomitra; Wright, Alan F.; Fondran, Jeremy R.; Lass, Jonathan H.; Feng, Sheng; Zhao, Jing Hua; Khaw, Kay-Tee; Wareham, Nick J.; Rantanen, Taina; Kaprio, Jaakko; Pang, Chi Pui; Chen, Li Jia; Tam, Pancy O.; Jhanji, Vishal; Young, Alvin L.; Döring, Angela; Raffel, Leslie J.; Cotch, Mary-Frances; Li, Xiaohui; Yip, Shea Ping; Yap, Maurice K.H.; Biino, Ginevra; Vaccargiu, Simona; Fossarello, Maurizio; Fleck, Brian; Yazar, Seyhan; Tideman, Jan Willem L.; Tedja, Milly; Deangelis, Margaret M.; Morrison, Margaux; Farrer, Lindsay; Zhou, Xiangtian; Chen, Wei; Mizuki, Nobuhisa; Meguro, Akira; Mäkelä, Kari Matti
2016-01-01
Myopia is the most common human eye disorder and it results from complex genetic and environmental causes. The rapidly increasing prevalence of myopia poses a major public health challenge. Here, the CREAM consortium performs a joint meta-analysis to test single-nucleotide polymorphism (SNP) main effects and SNP × education interaction effects on refractive error in 40,036 adults from 25 studies of European ancestry and 10,315 adults from 9 studies of Asian ancestry. In European ancestry individuals, we identify six novel loci (FAM150B-ACP1, LINC00340, FBN1, DIS3L-MAP2K1, ARID2-SNAT1 and SLC14A2) associated with refractive error. In Asian populations, three genome-wide significant loci AREG, GABRR1 and PDE10A also exhibit strong interactions with education (P<8.5 × 10−5), whereas the interactions are less evident in Europeans. The discovery of these loci represents an important advance in understanding how gene and environment interactions contribute to the heterogeneity of myopia. PMID:27020472
NASA Astrophysics Data System (ADS)
Garcia-Fernandez, Jorge
2016-03-01
The need for accurate documentation in the preservation of cultural heritage has prompted the use of terrestrial laser scanning (TLS) in this discipline. TLS studies in the heritage context have focused on opaque surfaces with Lambertian reflectance, while translucent and anisotropic materials remain a major challenge. For such materials, TLS measurements suffer significant distortion caused by the materials' optical properties under laser stimulation, rendering the range measurements unsuitable for digital modelling in a wide range of cases. The purpose of this paper is to illustrate and discuss these deficiencies and the resulting errors in the documentation of marmorean surfaces with time-of-flight and phase-shift TLS. The paper also proposes reducing the depth-measurement error by adjusting the incidence of the laser beam. The analysis is conducted through controlled experiments.
Topping, David J.; Wright, Scott A.
2016-01-01
these sites. In addition, detailed, step-by-step procedures are presented for the general river application of the method.Quantification of errors in sediment-transport measurements made using this acoustical method is essential if the measurements are to be used effectively, for example, to evaluate uncertainty in long-term sediment loads and budgets. Several types of error analyses are presented to evaluate (1) the stability of acoustical calibrations over time, (2) the effect of neglecting backscatter from silt and clay, (3) the bias arising from changes in sand grain size, (4) the time-varying error in the method, and (5) the influence of nonrandom processes on error. Results indicate that (1) acoustical calibrations can be stable for long durations (multiple years), (2) neglecting backscatter from silt and clay can result in unacceptably high bias, (3) two frequencies are likely required to obtain sand-concentration measurements that are unbiased by changes in grain size, depending on site-specific conditions and acoustic frequency, (4) relative errors in silt-and-clay- and sand-concentration measurements decrease substantially as concentration increases, and (5) nonrandom errors may arise from slow changes in the spatial structure of suspended sediment that affect the relations between concentration in the acoustically ensonified part of the cross section and concentration in the entire river cross section. Taken together, the error analyses indicate that the two-frequency method produces unbiased measurements of suspended-silt-and-clay and sand concentration, with errors that are similar to, or larger than, those associated with conventional sampling methods.
SANG-a kernel density estimator incorporating information about the measurement error
NASA Astrophysics Data System (ADS)
Hayes, Robert
A novel technique is evaluated for analyzing nominally large data sets in which each entry carries its own unique measurement error. This work begins with a review of modern analytical methodologies such as histogramming, ANOVA, and regression (weighted and unweighted), along with various error propagation and estimation techniques. It is shown that by assuming the errors obey a functional distribution (such as normal or Poisson), a superposition of the assumed forms provides the most comprehensive and informative graphical depiction of the data set's statistical information. The resultant approach is evaluated only for normally distributed errors, so the method is effectively a Superposition Analysis of Normalized Gaussians (SANG). SANG is shown to be easily calculated and highly informative in a single graph that would otherwise require multiple analyses and figures to convey the same result. The work is demonstrated using historical radiochemistry measurements from a transuranic waste geological repository's environmental monitoring program. This work was paid for under NRC-HQ-84-14-G-0059.
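The core of the SANG idea as described is small enough to sketch: each measurement contributes a Gaussian centred on its value, with that measurement's own reported error as the width, and the density is their normalized superposition. The data and errors below are synthetic stand-ins, not the repository's monitoring records.

```python
import numpy as np

values = np.array([1.2, 1.5, 2.8, 3.1, 3.3])   # measurements
sigmas = np.array([0.3, 0.1, 0.5, 0.2, 0.2])   # per-measurement errors

def sang_density(x, values, sigmas):
    """Superposition of Normalized Gaussians evaluated at points x."""
    x = np.atleast_1d(x)[:, None]
    g = np.exp(-0.5 * ((x - values) / sigmas) ** 2) / (sigmas * np.sqrt(2 * np.pi))
    return g.mean(axis=1)       # averaging keeps the total area at 1

grid = np.linspace(-2.0, 7.0, 2001)
dens = sang_density(grid, values, sigmas)
area = np.trapz(dens, grid)     # should integrate to ~1
```

Unlike a fixed-bandwidth kernel density estimate, precisely measured points produce sharp peaks while uncertain ones spread out, so the single curve carries both the values and their errors.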
50 CFR 648.262 - Accountability measures for red crab limited access vessels.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 50 Wildlife and Fisheries 10 2011-10-01 2011-10-01 false Accountability measures for red crab... UNITED STATES Management Measures for the Atlantic Deep-Sea Red Crab Fishery § 648.262 Accountability measures for red crab limited access vessels. (a) Closure authority. NMFS shall close the EEZ to...
50 CFR 648.262 - Accountability measures for red crab limited access vessels.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Accountability measures for red crab... UNITED STATES Management Measures for the Atlantic Deep-Sea Red Crab Fishery § 648.262 Accountability measures for red crab limited access vessels. (a) Closure authority. NMFS shall close the EEZ to...
50 CFR 648.262 - Accountability measures for red crab limited access vessels.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Accountability measures for red crab... UNITED STATES Management Measures for the Atlantic Deep-Sea Red Crab Fishery § 648.262 Accountability measures for red crab limited access vessels. (a) Closure authority. NMFS shall close the EEZ to...
NASA Technical Reports Server (NTRS)
Kibler, J. F.; Green, R. N.; Young, G. R.; Kelly, M. G.
1974-01-01
A method has previously been developed to satisfy terminal rendezvous and intermediate timing constraints for planetary missions involving orbital operations. The method uses impulse factoring, in which a two-impulse transfer is divided into three or four impulses which add one or two intermediate orbits. The periods of the intermediate orbits and the number of revolutions in each orbit are varied to satisfy timing constraints. Techniques are developed to retarget the orbital transfer in the presence of orbit-determination and maneuver-execution errors. Sample results indicate that the nominal transfer can be retargeted with little change in either the magnitude (Delta V) or location of the individual impulses. Additionally, the total Delta V required for the retargeted transfer is little different from that required for the nominal transfer. A digital computer program developed to implement the techniques is described.
Error Correction Method for Wind Speed Measured with Doppler Wind LIDAR at Low Altitude
NASA Astrophysics Data System (ADS)
Liu, Bingyi; Feng, Changzhong; Liu, Zhishen
2014-11-01
For the purpose of obtaining global vertical wind profiles, the Atmospheric Dynamics Mission Aeolus of the European Space Agency (ESA), carrying the first spaceborne Doppler lidar ALADIN (Atmospheric LAser Doppler INstrument), is going to be launched in 2015. DLR (German Aerospace Center) developed the A2D (ALADIN Airborne Demonstrator) for prelaunch validation. A ground-based wind lidar for wind profile and wind field scanning measurement, developed by Ocean University of China, is going to be used for ground-based validation after the launch of Aeolus. In order to provide validation data with higher accuracy, an error correction method is investigated to improve the accuracy of low-altitude wind data measured with a Doppler lidar based on an iodine absorption filter. The error due to nonlinear wind sensitivity is corrected, and the method for merging atmospheric return signals is improved. The correction method is validated by synchronous wind measurements with lidar and radiosonde. The results show that the accuracy of wind data measured with Doppler lidar at low altitude can be improved by the proposed error correction method.
Correction for dynamic bias error in transmission measurements of void fraction.
Andersson, P; Sundén, E Andersson; Svärd, S Jacobsson; Sjöstrand, H
2012-12-01
Dynamic bias errors occur in transmission measurements, such as X-ray, gamma, or neutron radiography or tomography. This is observed when the properties of the object are not stationary in time and its average properties are assessed. The nonlinear measurement response to changes in transmission within the time scale of the measurement implies a bias, which can be difficult to correct for. A typical example is the tomographic or radiographic mapping of void content in dynamic two-phase flow systems. In this work, the dynamic bias error is described and a method to make a first-order correction is derived. A prerequisite for this method is variance estimates of the system dynamics, which can be obtained using high-speed, time-resolved data acquisition. However, in the absence of such acquisition, a priori knowledge might be used to substitute the time resolved data. Using synthetic data, a void fraction measurement case study has been simulated to demonstrate the performance of the suggested method. The transmission length of the radiation in the object under study and the type of fluctuation of the void fraction have been varied. Significant decreases in the dynamic bias error were achieved to the expense of marginal decreases in precision. PMID:23278029
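The first-order correction described above can be sketched directly: when the attenuation fluctuates during the averaging time, the recorded mean transmission gives -ln(mean T) = mean(a) - Var(a)/2 + ..., so adding half the variance (obtained from time-resolved data or a priori knowledge) restores the mean attenuation. The Gaussian fluctuation model and numbers below are assumptions for the sketch, not the paper's void-fraction cases.

```python
import numpy as np

rng = np.random.default_rng(5)

a_mean, a_std, n = 2.0, 0.5, 1_000_000
a = rng.normal(a_mean, a_std, n)     # fluctuating attenuation (e.g. void-driven)
t_avg = np.exp(-a).mean()            # what a time-averaging detector records

# Naive inversion of the averaged transmission is biased low by ~Var(a)/2
# because exp is convex (Jensen's inequality); the first-order correction
# adds the variance term back.
a_naive = -np.log(t_avg)
a_corrected = a_naive + a.var() / 2
```

The same structure explains why the bias grows with transmission length and fluctuation amplitude, as the simulation study in the abstract varies.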
Error Analysis and Measurement Uncertainty for a Fiber Grating Strain-Temperature Sensor
Tang, Jaw-Luen; Wang, Jian-Neng
2010-01-01
A fiber grating sensor capable of distinguishing between temperature and strain, using a reference and a dual-wavelength fiber Bragg grating, is presented. Error analysis and measurement uncertainty for this sensor are studied theoretically and experimentally. The measured root mean squared errors for temperature T and strain ε were estimated to be 0.13 °C and 6 με, respectively. The maximum errors for temperature and strain were calculated as 0.00155 T + 2.90 × 10−6 ε and 3.59 × 10−5 ε + 0.01887 T, respectively. Using the estimation of expanded uncertainty at 95% confidence level with a coverage factor of k = 2.205, temperature and strain measurement uncertainties were evaluated as 2.60 °C and 32.05 με, respectively. For the first time, to our knowledge, we have demonstrated the feasibility of estimating the measurement uncertainty for simultaneous strain-temperature sensing with such a fiber grating sensor. PMID:22163567
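The simultaneous strain-temperature recovery rests on a standard linear system: two gratings with different sensitivities give an invertible 2×2 matrix relating wavelength shifts to (ΔT, Δε), and measurement uncertainty propagates through its inverse. The sensitivity coefficients and noise level below are hypothetical, not the paper's calibration values.

```python
import numpy as np

# Sensitivity matrix: rows are gratings, columns are [nm/°C, nm/µε] (assumed values).
K = np.array([[0.010, 0.0012],
              [0.013, 0.0009]])

dT_true, de_true = 10.0, 50.0                # ground truth: 10 °C, 50 µε
shifts = K @ np.array([dT_true, de_true])    # simulated wavelength shifts (nm)

dT, de = np.linalg.solve(K, shifts)          # recover temperature and strain

# Uncertainty propagation: with independent wavelength noise sigma_lam on each
# grating, the covariance of (dT, de) is sigma^2 * K^-1 @ K^-T.
sigma_lam = 0.001                            # per-grating wavelength noise, nm (assumed)
Kinv = np.linalg.inv(K)
cov = sigma_lam**2 * Kinv @ Kinv.T
```

The diagonal of `cov` shows how a nearly singular sensitivity matrix inflates both uncertainties, which is why gratings with well-separated temperature/strain responses are preferred.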
An analysis of temperature-induced errors for an ultrasound distance measuring system. M. S. Thesis
NASA Technical Reports Server (NTRS)
Wenger, David Paul
1991-01-01
The presentation of research is provided in the following five chapters. Chapter 2 presents the necessary background information and definitions for general work with ultrasound and acoustics. It also discusses the basis for errors in the slant range measurements. Chapter 3 presents a method of problem solution and an analysis of the sensitivity of the equations to slant range measurement errors. It also presents various methods by which the error in the slant range measurements can be reduced to improve overall measurement accuracy. Chapter 4 provides a description of a type of experiment used to test the analytical solution and provides a discussion of its results. Chapter 5 discusses the setup of a prototype collision avoidance system, discusses its accuracy, and demonstrates various methods of improving the accuracy along with the improvements' ramifications. Finally, Chapter 6 provides a summary of the work and a discussion of conclusions drawn from it. Additionally, suggestions for further research are made to improve upon what has been presented here.
Correction for dynamic bias error in transmission measurements of void fraction
NASA Astrophysics Data System (ADS)
Andersson, P.; Sundén, E. Andersson; Svärd, S. Jacobsson; Sjöstrand, H.
2012-12-01
Dynamic bias errors occur in transmission measurements, such as X-ray, gamma, or neutron radiography or tomography. This is observed when the properties of the object are not stationary in time and its average properties are assessed. The nonlinear measurement response to changes in transmission within the time scale of the measurement implies a bias, which can be difficult to correct for. A typical example is the tomographic or radiographic mapping of void content in dynamic two-phase flow systems. In this work, the dynamic bias error is described and a method to make a first-order correction is derived. A prerequisite for this method is variance estimates of the system dynamics, which can be obtained using high-speed, time-resolved data acquisition. However, in the absence of such acquisition, a priori knowledge might be used to substitute the time resolved data. Using synthetic data, a void fraction measurement case study has been simulated to demonstrate the performance of the suggested method. The transmission length of the radiation in the object under study and the type of fluctuation of the void fraction have been varied. Significant decreases in the dynamic bias error were achieved at the expense of marginal decreases in precision. PMID:23278029
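The bias arises because transmission responds exponentially to the attenuating path length, so averaging transmission over fluctuations is not the same as transmitting through the average path (Jensen's inequality). A sketch of the idea, not the paper's actual algorithm: a second-order Taylor expansion gives ⟨e^(−μx)⟩ ≈ e^(−μ⟨x⟩)(1 + μ²σ²/2), which yields a first-order correction from a variance estimate. All numbers below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 0.5                                  # attenuation coefficient (1/cm), assumed
x = rng.normal(4.0, 0.8, 100_000)         # fluctuating attenuating path length (cm)

T_avg = np.exp(-mu * x).mean()            # time-averaged transmission (the measurement)
x_naive = -np.log(T_avg) / mu             # biased path estimate from the average

sigma2 = x.var()                          # variance from time-resolved acquisition
x_corr = x_naive + np.log(1.0 + mu**2 * sigma2 / 2.0) / mu   # first-order correction
```

Here the naive estimate is low by roughly μσ²/2 ≈ 0.16 cm, and the correction removes most of that bias.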
Farré, R; Rotger, M; Navajas, D
1997-03-01
The forced oscillation technique (FOT) allows the measurement of respiratory resistance (Rrs) and reactance (Xrs) and their associated coherence (γ²). To avoid unreliable data, it is usual to reject Rrs and Xrs measurements with a γ² < 0.95. This procedure makes it difficult to obtain acceptable data at the lowest frequencies of interest. The aim of this study was to derive expressions to compute the random error of Rrs and Xrs from γ² and the number (N) of data blocks involved in a FOT measurement. To this end, we developed theoretical equations for the variances and covariances of the pressure and flow auto- and cross-spectra used to compute Rrs and Xrs. Random errors of Rrs and Xrs were found to depend on the values of Rrs and Xrs, and to be proportional to √((1 − γ²)/(2Nγ²)). Reliable Rrs and Xrs data can be obtained in measurements with low γ² by enlarging the data recording (i.e. N). Therefore, the error equations derived may be useful to extend the frequency band of the forced oscillation technique to frequencies lower than usual, characterized by low coherence. PMID:9073006
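The proportionality above makes the trade-off explicit: at fixed coherence, the random error falls as 1/√N, so low-coherence (low-frequency) data can be rescued by recording more blocks. A small sketch of that scaling factor alone (the Rrs- and Xrs-dependent prefactors from the paper are omitted):

```python
import math

def random_error_factor(gamma2, n_blocks):
    """Relative random-error factor for FOT estimates derived from
    n_blocks averaged data blocks with coherence gamma2:
    sqrt((1 - gamma2) / (2 * N * gamma2))."""
    return math.sqrt((1.0 - gamma2) / (2.0 * n_blocks * gamma2))

# Quadrupling the record length halves the error at low coherence:
e1 = random_error_factor(0.80, n_blocks=10)
e2 = random_error_factor(0.80, n_blocks=40)
```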
50 CFR 640.28 - Annual catch limits (ACLs) and accountability measures (AMs).
Code of Federal Regulations, 2012 CFR
2012-10-01
..., NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE SPINY LOBSTER FISHERY OF THE GULF... accountability measures (AMs). For recreational and commercial spiny lobster landings combined, the ACL is...
Mooney, Stephen J.; Richards, Catherine A.; Rundle, Andrew G.
2015-01-01
BACKGROUND Multilevel studies of neighborhood impacts on health frequently aggregate individual-level data to create contextual measures. For example, percent of residents living in poverty and median household income are both aggregations of Census data on individual-level household income. Because household income is sensitive and complex, it is likely to be reported with error. METHODS To assess the impact of such error on effect estimates for neighborhood contextual factors, we conducted simulation studies to relate neighborhood measures derived from Census data to individual body mass index, varying the extent of non-differential misclassification/measurement error in the underlying Census data. We then explored the relationship between the form of variables chosen for neighborhood measure and outcome, modeling technique used, size and number of neighborhoods, and categorization of neighborhoods to the magnitude of bias. RESULTS For neighborhood contextual variables expressed as percentages (e.g. % of residents living in poverty), non-differential misclassification in the underlying individual-level Census data always biases the parameter estimate for the neighborhood variable away from the null. However, estimates of differences between quantiles of neighborhoods using such contextual variables are unbiased. Aggregation of the same underlying individual-level Census income data into a continuous variable, such as median household income, also introduces bias into the regression parameter. Such bias is non-negligible if the sampled groups are small. CONCLUSIONS Decisions regarding the construction and analysis of neighborhood contextual measures substantially alter the impact on study validity of measurement error in the data used to construct the contextual measure. PMID:24815303
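The away-from-the-null result for percentage-based contextual measures has a simple mechanism: non-differential misclassification of the individual indicator compresses the range of observed neighborhood percentages, so the same outcome contrast is spread over a narrower exposure range and the slope inflates by roughly 1/(sensitivity + specificity − 1). A sketch using the expected-value form of the misclassification (rates applied deterministically rather than sampling individual flips); all numbers are hypothetical, not the paper's simulation design:

```python
import numpy as np

rng = np.random.default_rng(1)
n_hoods = 200
p_true = rng.uniform(0.05, 0.40, n_hoods)              # true % of residents in poverty
y = 25.0 + 6.0 * p_true + rng.normal(0, 0.3, n_hoods)  # neighborhood mean outcome (e.g. BMI)

# Non-differential misclassification of the individual-level indicator:
sens, spec = 0.90, 0.95
p_obs = sens * p_true + (1.0 - spec) * (1.0 - p_true)  # expected observed percentage

b_true = np.polyfit(p_true, y, 1)[0]
b_obs = np.polyfit(p_obs, y, 1)[0]   # inflated by ~1 / (sens + spec - 1)
```

Because p_obs is an affine function of p_true here, the inflation factor is exact in this sketch.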
Measurement error analysis of the 3D four-wheel aligner
NASA Astrophysics Data System (ADS)
Zhao, Qiancheng; Yang, Tianlong; Huang, Dongzhao; Ding, Xun
2013-10-01
The positioning parameters of the four wheels have significant effects on the maneuverability, safety and energy efficiency of automobiles. Aiming at this issue, the error factors of the 3D four-wheel aligner, which arise in extracting image feature points, calibrating the internal and external parameters of the cameras, calculating positional parameters and measuring target pose, are analyzed based on an elaboration of the structure and measurement principle of the 3D four-wheel aligner, as well as of the major positional parameters: toe-in and camber of the four wheels, kingpin inclination and caster. Technical solutions are then proposed for reducing these error factors, and on this basis a new type of aligner has been developed and marketed; it is highly regarded among customers because its technical indicators meet requirements well.
NASA Astrophysics Data System (ADS)
Roca, R.; Chambon, P.; jobard, I.; Viltard, N.
2012-04-01
Measuring rainfall requires a high density of observations, which, over the whole tropical belt, can only be provided from space. For several decades, the availability of satellite observations has greatly increased; thanks to newly implemented missions like the Megha-Tropiques mission and the forthcoming GPM constellation, measurements from space become available from a set of observing systems. In this work, we focus on rainfall error estimations at the 1°/1-day accumulated scale, a key scale for meteorological and hydrological studies. A novel methodology for quantitative precipitation estimation is introduced; named TAPEER (Tropical Amount of Precipitation with an Estimate of ERrors), it aims to provide 1°/1-day rain accumulations and associated errors over the whole tropical belt. This approach is based on a combination of infrared imagery from a fleet of geostationary satellites and passive microwave derived rain rates from a constellation of low earth orbiting satellites. A three-stage disaggregation of error into sampling, algorithmic and calibration errors is performed; the magnitudes of the three terms are then estimated separately. A dedicated error model is used to evaluate sampling errors and a forward error propagation approach is used for an estimation of algorithmic and calibration errors. One of the main findings in this study is the large contribution of the sampling errors and the algorithmic errors of BRAIN on medium rain rates (2 mm h-1 to 10 mm h-1) in the total error budget.
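When the three disaggregated terms can be treated as independent, the simplest way to assemble a total error budget is to combine them in quadrature. This is only a sketch of that combination step under an independence assumption, not TAPEER's full error model, and the magnitudes below are hypothetical:

```python
import math

def total_error(sampling, algorithmic, calibration):
    """Combine three independent error terms (same units, e.g. mm/day)
    in quadrature into a total error estimate."""
    return math.sqrt(sampling**2 + algorithmic**2 + calibration**2)

# Hypothetical 1°/1-day error terms, dominated by the sampling term:
err = total_error(sampling=1.2, algorithmic=0.9, calibration=0.4)
```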
Measuring The Flux of Nitrogen From Watersheds, Errors and The Temporal Resolution Problem
NASA Astrophysics Data System (ADS)
Showers, W. J.
2003-12-01
Agricultural and urban land use has increased the fluxes of nutrients and sediments into surface waters and ground waters. Transport of nitrogen through watersheds into coastal waters has resulted in eutrophication and water quality degradation. Management actions aimed at reducing nitrate contamination of surface waters need a better understanding of fundamental processes that control water quality on a watershed scale. RiverNet, a high resolution (hourly) in situ nitrate monitoring program, has found significant concentration variations associated with point sources in the Neuse River Basin, NC. Nutrient inputs from agricultural watersheds are highly correlated to discharge. Nutrient inputs from waste application fields are variable and most important during falling hydrographs. Nitrogen, carbon and oxygen isotopes of nitrate and POM indicate that in-stream nutrient consumption is not an important process in riverine nutrient transport to the estuary in the river mainstem, but can be important in smaller streams and creeks. These water depth/nitrogen in-stream loss data indicate that the contribution of point sources has been underestimated on a watershed scale. In addition, large fluxes of nitrate from contaminated groundwater to surface waters adjacent to WAF occur over a 1 to 3 day period after large rain events. The flux of contaminated groundwater has not been taken into account in the NPDES permitting process. ¹⁷O of river and groundwater nitrate indicates that atmospheric deposition can be a significant contributor (90%) to nitrate flux in urban watersheds and to a lesser extent in forested watersheds, but contributes less than 10% of the nitrate flux exported from the basin as a whole. Nitrate fluxes calculated from hourly measurements differ from daily calculated fluxes by up to 20% during high flow conditions and 80% during low flow conditions. These findings indicate that significant errors can be produced by monitoring programs that try to determine
Kim, S; McLeod, J H; Williams, C; Hepler, N
2000-01-01
The field of substance abuse prevention has neither an overarching conceptual framework nor a set of shared terminologies for establishing the accountability and performance outcome measures of substance abuse prevention services rendered. Hence, there is a wide gap between the data we currently have on one hand and the information that is required to meet the performance goals and accountability measures set by the Government Performance and Results Act of 1993 on the other. The task before us is: How can we establish the accountability and performance measures of substance abuse prevention programs and transform the field of prevention into prevention science? The intent of this volume is to serve that purpose and accelerate the processes of this transformation by identifying the requisite components of the transformation (i.e., theory, methodology, convention on terms, and data) and by introducing an open forum called the Prevention Validation and Accounting (PREVA) Platform. The entire PREVA Platform (for short, the Platform) is designed as an analytic framework, formulated from a collection of common concepts, terminologies, accounting units, protocols for counting the units, data elements, operationalizations of various constructs, and other summary measures intended to bring about an efficient and effective measurement of process input, program capacity, process output, performance outcome, and societal impact of substance abuse prevention programs. The measurement units and summary data elements are designed to be measured across time and across jurisdictions, i.e., from local to regional to state to national levels. In the Platform, the process input is captured by two dimensions of time and capital. Time is conceptualized in terms of service delivery time and time spent for research and development. Capital is measured by the monies expended for the delivery of program activities during a fiscal or reporting period. Program capacity is captured
NASA Astrophysics Data System (ADS)
Bianconi, A.
A short summary of results of recent simulations of (un)polarized Drell-Yan experiments is presented here. Dilepton production in pp, p̄p, π−p and π+p scattering is considered, for several kinematics corresponding to interesting regions for experiments at GSI, CERN-Compass and RHIC. A table of integrated cross sections, and a set of estimated error bars on measurements of azimuthal asymmetries (associated with collection of 5, 20 or 80 kevents), are reported.
Magnetic field error measurement of the CEBAF (NIST) wiggler using the pulsed wire method
Wallace, Stephen; Colson, William; Neil, George; Harwood, Leigh
1993-07-01
The National Institute of Standards and Technology (NIST) wiggler has been loaned to the Continuous Electron Beam Accelerator Facility (CEBAF). The pulsed wire method [R.W. Warren, Nucl. Instr. and Meth. A272 (1988) 267] has been used to measure the field errors of the entrance wiggler half, and the net path deflection was calculated to be Δx ≈ 5.2 μm.
Xiaoqing, Cheng; Lixin, Yi; Lingling, Liu; Guoqiang, Tang; Zhidong, Wang
2015-11-01
RaDeCC has proved to be a precise and standard way to measure ²²⁴Ra and ²²³Ra in water samples and has successfully made radium a tracer of several environmental processes. In this paper, the relative errors of ²²⁴Ra and ²²³Ra measurement in water samples via a Radium Delayed Coincidence Count system are analyzed by performing coincidence correction calculations and error propagation. The calculated relative errors range from 2.6% to 10.6% for ²²⁴Ra and from 9.6% to 14.2% for ²²³Ra. For different radium activities, the effects of decay days and counting time on the final radium relative errors are evaluated, and the results show that these relative errors can be decreased by adjusting the two measurement factors. Finally, to minimize the propagated errors in radium activity, a set of optimized RaDeCC measurement parameters is proposed. PMID:26233651
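Underlying the propagated errors is simple Poisson counting statistics: the relative error of a registered count N scales as 1/√N, which is why longer counting times (and suitably chosen decay delays) shrink the final relative error. A sketch of that driver only, not the paper's full coincidence-corrected propagation; the rates are hypothetical:

```python
import math

def relative_counting_error(count_rate, counting_time):
    """Poisson relative error of a total count N = rate * time:
    sigma_N / N = sqrt(N) / N = 1 / sqrt(N)."""
    n = count_rate * counting_time
    return 1.0 / math.sqrt(n)

# Doubling the counting time shrinks the relative error by sqrt(2):
e1 = relative_counting_error(0.5, 3600.0)   # counts/s, seconds
e2 = relative_counting_error(0.5, 7200.0)
```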
Guan, Yongtao; Li, Yehua; Sinha, Rajita
2011-01-01
In a cocaine dependence treatment study, we use linear and nonlinear regression models to model posttreatment cocaine craving scores and first cocaine relapse time. A subset of the covariates are summary statistics derived from baseline daily cocaine use trajectories, such as baseline cocaine use frequency and average daily use amount. These summary statistics are subject to estimation error and can therefore cause biased estimators for the regression coefficients. Unlike classical measurement error problems, the error we encounter here is heteroscedastic with an unknown distribution, and there are no replicates for the error-prone variables or instrumental variables. We propose two robust methods to correct for the bias: a computationally efficient method-of-moments-based method for linear regression models and a subsampling extrapolation method that is generally applicable to both linear and nonlinear regression models. Simulations and an application to the cocaine dependence treatment data are used to illustrate the efficacy of the proposed methods. Asymptotic theory and variance estimation for the proposed subsampling extrapolation method and some additional simulation results are described in the online supplementary material. PMID:21984854
Error analysis for retrieval of Venus' IR surface emissivity from VIRTIS/VEX measurements
NASA Astrophysics Data System (ADS)
Kappel, David; Haus, Rainer; Arnold, Gabriele
2015-08-01
Venus' surface emissivity data in the infrared can serve to explore the planet's geology. The only global data with high spectral, spatial, and temporal resolution and coverage at present is supplied by nightside emission measurements acquired by the Visible and InfraRed Thermal Imaging Spectrometer VIRTIS-M-IR (1.0 - 5.1 μm) aboard ESA's Venus Express. A radiative transfer simulation and a retrieval algorithm can be used to determine surface emissivity in the nightside spectral transparency windows located at 1.02, 1.10, and 1.18 μm. To obtain satisfactory fits to measured spectra, the retrieval pipeline also determines auxiliary parameters describing cloud properties from a certain spectral range. But spectral information content is limited, and emissivity is difficult to retrieve due to strong interferences from other parameters. Based on a selection of representative synthetic VIRTIS-M-IR spectra in the range 1.0 - 2.3 μm, this paper investigates emissivity retrieval errors that can be caused by interferences of atmospheric and surface parameters, by measurement noise, and by a priori data, and which retrieval pipeline leads to minimal errors. Retrieval of emissivity from a single spectrum is shown to fail due to extremely large errors, although the fits to the reference spectra are very good. Neglecting geologic activity, it is suggested to apply a multi-spectrum retrieval technique to retrieve emissivity relative to an initial value as a parameter that is common to several measured spectra that cover the same surface bin. Retrieved emissivity maps of targets with limited extension (a few thousand km) are then additively renormalized to remove spatially large scale deviations from the true emissivity map that are due to spatially slowly varying interfering parameters. Corresponding multi-spectrum retrieval errors are estimated by a statistical scaling of the single-spectrum retrieval errors and are listed for 25 measurement repetitions. For the best of the
NASA Astrophysics Data System (ADS)
Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Zhou, Tong; Kuang, Ye
2016-08-01
A morphological filter is proposed to obtain a high-accuracy roundness measurement based on the four-parameter roundness measurement model, which takes into account eccentricity, probe offset, probe tip head radius and tilt error. This paper analyses the sample angle deviations caused by the four systematic errors to design a morphological filter based on the distribution of the sample angle. The effectiveness of the proposed method is verified through simulations and experiments performed with a roundness measuring machine. Compared to the morphological filter with the uniform sample angle, the accuracy of the roundness measurement can be increased by approximately 0.09 μm using the morphological filter with a non-uniform sample angle based on the four-parameter roundness measurement model, when eccentricity is above 16 μm, probe offset is approximately 1000 μm, tilt error is approximately 1″, the probe tip head radius is 1 mm and the cylindrical component radius is approximately 37 mm. The accuracy and reliability of roundness measurements are improved by using the proposed method for cylindrical components with a small radius, especially if the eccentricity and probe offset are large, and the tilt error and probe tip head radius are small. The proposed morphological filter method can be used for precision and ultra-precision roundness measurements, especially for functional assessments of roundness profiles.
Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances.
Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip
2015-01-01
In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. A standard deviation of the measured distance is the accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5-50 m) because it is generally known that this accuracy cannot be increased by simply repeating the measurement because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments' results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%. PMID:26258777
Sensitivity of Force Specifications to the Errors in Measuring the Interface Force
NASA Technical Reports Server (NTRS)
Worth, Daniel
1999-01-01
Force-Limited Random Vibration Testing has been applied in the last several years at NASA/GSFC for various programs at the instrument and system level. Different techniques have been developed over the last few decades to estimate the dynamic forces that the test article under consideration will encounter in the operational environment. Some of these techniques are described in the handbook, NASA-HDBK-7004, and the monograph, NASA-RP-1403. A key element in the ability to perform force-limited testing is multi-component force gauges. This paper will show how some measurement and calibration errors in force gauges are compensated for when the force specification is calculated. The resulting notches in the acceleration spectrum, when a random vibration test is performed, are the same as the notches produced during an uncompensated test that has no measurement errors. The paper will also present the results of tests that were used to validate this compensation. Knowing that the force specification can compensate for some measurement errors allows tests to continue after force gauge failures or allows dummy gauges to be used in places that are inaccessible.
Regression calibration method for correcting measurement-error bias in nutritional epidemiology.
Spiegelman, D; McDermott, A; Rosner, B
1997-04-01
Regression calibration is a statistical method for adjusting point and interval estimates of effect obtained from regression models commonly used in epidemiology for bias due to measurement error in assessing nutrients or other variables. Previous work developed regression calibration for use in estimating odds ratios from logistic regression. We extend this here to estimating incidence rate ratios from Cox proportional hazards models and regression slopes from linear-regression models. Regression calibration is appropriate when a gold standard is available in a validation study and a linear measurement error with constant variance applies or when replicate measurements are available in a reliability study and linear random within-person error can be assumed. In this paper, the method is illustrated by correction of rate ratios describing the relations between the incidence of breast cancer and dietary intakes of vitamin A, alcohol, and total energy in the Nurses' Health Study. An example using linear regression is based on estimation of the relation between ultradistal radius bone density and dietary intakes of caffeine, calcium, and total energy in the Massachusetts Women's Health Study. Software implementing these methods uses SAS macros. PMID:9094918
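For the linear case the method reduces to a two-step recipe: fit a calibration regression of the gold-standard measurement on the error-prone surrogate in the validation study, then use the calibrated predictions in place of the surrogate in the outcome model. A minimal simulation sketch under classical measurement error (all variable names and parameter values are illustrative, not from the Nurses' Health Study analysis):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000
x = rng.normal(0.0, 1.0, n)              # true exposure (gold standard)
w = x + rng.normal(0.0, 1.0, n)          # surrogate with classical error
y = 1.0 + 0.5 * x + rng.normal(0.0, 0.5, n)

b_naive = np.polyfit(w, y, 1)[0]         # attenuated toward zero (~0.25 here)

# Calibration step: regress the gold standard on the surrogate in a
# validation subsample, then replace w by its calibrated prediction.
idx = rng.choice(n, 2_000, replace=False)
lam, mu = np.polyfit(w[idx], x[idx], 1)  # slope lam is the attenuation factor
x_hat = mu + lam * w
b_cal = np.polyfit(x_hat, y, 1)[0]       # restored to roughly the true 0.5
```

Equivalently, the calibrated slope is the naive slope divided by the attenuation factor λ = var(X)/(var(X) + var(U)).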
A Kernel-based Account of Bibliometric Measures
NASA Astrophysics Data System (ADS)
Ito, Takahiko; Shimbo, Masashi; Kudo, Taku; Matsumoto, Yuji
The application of kernel methods to citation analysis is explored. We show that a family of kernels on graphs provides a unified perspective on the three bibliometric measures that have been discussed independently: relatedness between documents, global importance of individual documents, and importance of documents relative to one or more (root) documents (relative importance). The framework provided by the kernels establishes relative importance as an intermediate between relatedness and global importance, in which the degree of `relativity,' or the bias between relatedness and importance, is naturally controlled by a parameter characterizing individual kernels in the family.
NASA Astrophysics Data System (ADS)
Xiang, Rong
2014-09-01
This study analyzes the measurement errors of three dimensional coordinates of binocular stereo vision for tomatoes based on three stereo matching methods, centroid-based matching, area-based matching, and combination matching, to improve the localization accuracy of the binocular stereo vision system of tomato harvesting robots. Centroid-based matching was realized through the matching of the feature points of centroids of tomato regions. Area-based matching was realized based on the gray similarity between two neighborhoods of two pixels to be matched in stereo images. Combination matching was realized using the rough disparity acquired through centroid-based matching as the center of the dynamic disparity range which was used in area-based matching. After stereo matching, three dimensional coordinates of tomatoes were acquired using the triangle range finding principle. Test results based on 225 stereo images, captured at distances from 300 to 1000 mm from 3 tomatoes, showed that the measurement errors of x coordinates were small and can meet the needs of harvesting robots. However, the measurement biases of y coordinates and depth values were large, and the measurement variation of depth values was also large. Therefore, the measurement biases of y coordinates and depth values, and the measurement variation of depth values, should be corrected in future research.
Precision Measurements of the Cluster Red Sequence using an Error Corrected Gaussian Mixture Model
Hao, Jiangang; Koester, Benjamin P.; Mckay, Timothy A.; Rykoff, Eli S.; Rozo, Eduardo; Evrard, August; Annis, James; Becker, Matthew; Busha, Michael; Gerdes, David; Johnston, David E.
2009-07-01
The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurement of the slope and scatter of the red sequence are affected both by selection of red sequence galaxies and measurement errors. In this paper, we describe a new error corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy populations in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment. These precise measurements serve as an important observational check on simulations and mock galaxy catalogs. The observed trends in the slope and scatter of the red sequence ridgeline with redshift are clues to possible intrinsic evolution of the cluster red-sequence itself. Most importantly, the methods presented in this work lay the groundwork for further improvements in optically-based cluster cosmology.
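The core idea of removing measurement-error effects from a scatter estimate can be shown in a stripped-down, single-component form: the intrinsic variance is the observed variance minus the mean measurement-error variance. The paper's ECGMM embeds this inside an EM mixture fit; the sketch below shows only the variance-subtraction step, with hypothetical color values and errors:

```python
import numpy as np

def intrinsic_scatter(observed_values, measurement_errors):
    """Estimate the intrinsic spread of a population by subtracting the
    mean measurement-error variance from the observed sample variance."""
    var_obs = np.var(observed_values, ddof=1)
    var_err = np.mean(np.square(measurement_errors))
    return np.sqrt(max(var_obs - var_err, 0.0))

rng = np.random.default_rng(3)
true_colors = rng.normal(1.0, 0.05, 5_000)    # intrinsic ridgeline scatter 0.05 mag
errs = np.full(5_000, 0.03)                   # per-galaxy color errors (mag)
observed = true_colors + rng.normal(0.0, errs)
sigma_int = intrinsic_scatter(observed, errs)  # recovers ~0.05, not the inflated ~0.058
```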
An examination of errors in characteristic curve measurements of radiographic screen/film systems.
Wagner, L K; Barnes, G T; Bencomo, J A; Haus, A G
1983-01-01
The precision and accuracy achieved in the measurement of characteristic curves for radiographic screen/film systems is quantitatively investigated for three techniques: inverse square, kVp bootstrap, and step-wedge bootstrap. Precision of all techniques is generally better than ±1.5% while the agreement among all intensity-scale techniques is better than 2% over the useful exposure latitude. However, the accuracy of the sensitometry will depend on several factors, including linearity and energy dependence of the calibration instrument, that may introduce larger errors. Comparisons of time-scale and intensity-scale methods are made and a means of measuring reciprocity law failure is demonstrated. PMID:6877185
NASA Technical Reports Server (NTRS)
Flamant, Cyrille N.; Schwemmer, Geary K.; Korb, C. Laurence; Evans, Keith D.; Palm, Stephen P.
1999-01-01
Remote airborne measurements of the vertical and horizontal structure of the atmospheric pressure field in the lower troposphere are made with an oxygen differential absorption lidar (DIAL). A detailed analysis of this measurement technique is provided which includes corrections for imprecise knowledge of the detector background level, the oxygen absorption line parameters, and variations in the laser output energy. In addition, we analyze other possible sources of systematic errors including spectral effects related to aerosol and molecular scattering, interference by rotational Raman scattering and interference by isotopic oxygen lines.
Digitally modulated bit error rate measurement system for microwave component evaluation
NASA Technical Reports Server (NTRS)
Shalkhauser, Mary Jo W.; Budinger, James M.
1989-01-01
The NASA Lewis Research Center has developed a unique capability for evaluation of the microwave components of a digital communication system. This digitally modulated bit-error-rate (BER) measurement system (DMBERMS) features a continuous data digital BER test set, a data processor, a serial minimum shift keying (SMSK) modem, noise generation, and computer automation. Application of the DMBERMS has provided useful information for the evaluation of existing microwave components and of design goals for future components. The design and applications of this system for digitally modulated BER measurements are discussed.
McGlothlin, Anna; Stamey, James D; Seaman, John W
2008-02-01
We consider a Bayesian analysis for modeling a binary response that is subject to misclassification. Additionally, an explanatory variable is assumed to be unobservable, but measurements are available on its surrogate. A binary regression model is developed to incorporate the measurement error in the covariate as well as the misclassification in the response. Unlike existing methods, no model parameters need be assumed known. Markov chain Monte Carlo methods are utilized to perform the necessary computations. The methods developed are illustrated using atomic bomb survival data. A simulation experiment explores advantages of the approach. PMID:18283683
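The key ingredient in such models is the misclassification-adjusted success probability: the chance of *observing* a positive response mixes the true probability with the classifier's sensitivity and specificity. A hedged sketch of that likelihood (names and parameterization are ours; the paper's full model also handles covariate measurement error and uses MCMC, which is omitted here):

```python
import numpy as np

def logistic(eta):
    """Standard logistic (inverse-logit) link."""
    return 1.0 / (1.0 + np.exp(-eta))

def observed_prob(true_prob, se, sp):
    """P(observed y = 1) given P(true y = 1), sensitivity se, specificity sp."""
    return se * true_prob + (1.0 - sp) * (1.0 - true_prob)

def log_likelihood(y_obs, x, beta0, beta1, se, sp):
    """Bernoulli log-likelihood for a misclassified binary response
    under a logistic regression on a single covariate x."""
    p_true = logistic(beta0 + beta1 * np.asarray(x, dtype=float))
    p_obs = observed_prob(p_true, se, sp)
    y = np.asarray(y_obs, dtype=float)
    return float(np.sum(y * np.log(p_obs) + (1.0 - y) * np.log(1.0 - p_obs)))
```

With se = sp = 1 this reduces to the ordinary logistic likelihood; in a Bayesian fit, priors on (beta0, beta1, se, sp) make all parameters estimable without assuming any of them known.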
A method to account for the temperature sensitivity of TCCON total column measurements
NASA Astrophysics Data System (ADS)
Niebling, Sabrina G.; Wunch, Debra; Toon, Geoffrey C.; Wennberg, Paul O.; Feist, Dietrich G.
2014-05-01
The Total Carbon Column Observing Network (TCCON) consists of ground-based Fourier Transform Spectrometer (FTS) systems all around the world. It achieves better than 0.25% precision and accuracy for total column measurements of CO2 [Wunch et al. (2011)]. In recent years, the TCCON data processing and retrieval software (GGG) has been improved to achieve better and better results (e.g., ghost correction, improved a priori profiles, more accurate spectroscopy). However, a small error is also introduced by insufficient knowledge of the true temperature profile in the atmosphere above the individual instruments. This knowledge is crucial for retrieving highly precise gas concentrations. In the current version of the retrieval software, we use six-hourly NCEP reanalysis data to produce one temperature profile at local noon for each measurement day. For sites in the mid latitudes, which can have a large diurnal variation of the temperature in the lowermost kilometers of the atmosphere, this approach can lead to small errors in the final total column gas concentration. Here, we present and describe a method to account for the temperature sensitivity of the total column measurements. We exploit the fact that H2O is most abundant in the lowermost kilometers of the atmosphere, where the largest diurnal temperature variations occur. We use single H2O absorption lines with different temperature sensitivities to gain information about the temperature variations over the course of the day. This information is used to apply an a posteriori correction to the retrieved total column gas concentration. In addition, we show that the a posteriori temperature correction is effective by applying it to data from Lamont, Oklahoma, USA (36.6°N, 97.5°W). We chose this site because regular radiosonde launches with a time resolution of six hours provide detailed information about the actual temperature in the atmosphere and allow us to test the effectiveness of our correction.
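The reason two H2O lines with different lower-state energies carry temperature information is the Boltzmann dependence of their strengths: the strength ratio varies as exp(-(E1-E2)·hc/(kB·T)). A sketch of inverting that ratio for temperature (our own illustration under this textbook relation; the paper's actual correction scheme is more involved):

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
H_C = 1.98644586e-25  # Planck constant times speed of light, J*m

def temperature_from_line_ratio(ratio, ratio_ref, e1, e2, t_ref):
    """Infer temperature from the strength ratio of two absorption lines.

    e1, e2 are lower-state energies in cm^-1; ratio_ref is the line-strength
    ratio at the reference temperature t_ref (K).  Uses
    ratio(T)/ratio_ref = exp(-(E1-E2)*hc/kB * (1/T - 1/t_ref)).
    """
    de = (e1 - e2) * H_C * 100.0  # energy difference in J (cm^-1 -> m^-1)
    inv_t = 1.0 / t_ref - (K_B / de) * math.log(ratio / ratio_ref)
    return 1.0 / inv_t
```

A line pair with a large lower-state energy difference gives the strongest temperature leverage, which is why temperature-sensitive and temperature-insensitive H2O lines are chosen deliberately.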
50 CFR 648.73 - Surfclam and ocean quahog Accountability Measures.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Surfclam and ocean quahog Accountability... Management Measures for the Atlantic Surf Clam and Ocean Quahog Fisheries § 648.73 Surfclam and ocean quahog Accountability Measures. (a) Commercial ITQ fishery. (1) If the ACL for surfclam or ocean quahog is exceeded,...
50 CFR 648.73 - Surfclam and ocean quahog Accountability Measures.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Surfclam and ocean quahog Accountability... Management Measures for the Atlantic Surf Clam and Ocean Quahog Fisheries § 648.73 Surfclam and ocean quahog Accountability Measures. (a) Commercial ITQ fishery. (1) If the ACL for surfclam or ocean quahog is exceeded,...
50 CFR 648.73 - Surfclam and ocean quahog Accountability Measures.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Surfclam and ocean quahog Accountability... Management Measures for the Atlantic Surf Clam and Ocean Quahog Fisheries § 648.73 Surfclam and ocean quahog Accountability Measures. (a) Commercial ITQ fishery. (1) If the ACL for surfclam or ocean quahog is exceeded,...
48 CFR 9904.412 - Cost accounting standard for composition and measurement of pension cost.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 7 2012-10-01 2012-10-01 false Cost accounting standard for composition and measurement of pension cost. 9904.412 Section 9904.412 Federal Acquisition... accounting standard for composition and measurement of pension cost....
48 CFR 9904.412 - Cost accounting standard for composition and measurement of pension cost.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 7 2013-10-01 2012-10-01 true Cost accounting standard for composition and measurement of pension cost. 9904.412 Section 9904.412 Federal Acquisition... accounting standard for composition and measurement of pension cost....
48 CFR 9904.412 - Cost accounting standard for composition and measurement of pension cost.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 7 2014-10-01 2014-10-01 false Cost accounting standard for composition and measurement of pension cost. 9904.412 Section 9904.412 Federal Acquisition... accounting standard for composition and measurement of pension cost....
48 CFR 9904.412 - Cost accounting standard for composition and measurement of pension cost.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Cost accounting standard for composition and measurement of pension cost. 9904.412 Section 9904.412 Federal Acquisition... accounting standard for composition and measurement of pension cost....
ERIC Educational Resources Information Center
Woodruff, David; Traynor, Anne; Cui, Zhongmin; Fang, Yu
2013-01-01
Professional standards for educational testing recommend that both the overall standard error of measurement and the conditional standard error of measurement (CSEM) be computed on the score scale used to report scores to examinees. Several methods have been developed to compute scale score CSEMs. This paper compares three methods, based on…
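As general background for raw-score CSEMs (the paper itself compares scale-score methods not reproduced here), the classical binomial-error model gives a closed-form conditional standard error at each raw score:

```python
import math

def binomial_csem(raw_score, n_items):
    """Lord's binomial-error conditional SEM for raw score x on an n-item test.

    CSEM(x) = sqrt( x * (n - x) / (n - 1) ).
    Largest at mid-range scores, zero at the score-scale endpoints.
    """
    return math.sqrt(raw_score * (n_items - raw_score) / (n_items - 1))
```

Scale-score CSEM methods transform quantities like this through the raw-to-scale conversion, which is where the methods compared in the paper differ.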
ERIC Educational Resources Information Center
Worts, Diana; Sacker, Amanda; McDonough, Peggy
2010-01-01
This paper addresses a key methodological challenge in the modeling of individual poverty dynamics--the influence of measurement error. Taking the US and Britain as case studies and building on recent research that uses latent Markov models to reduce bias, we examine how measurement error can affect a range of important poverty estimates. Our data…
ERIC Educational Resources Information Center
Schochet, Peter Z.; Chiang, Hanley S.
2010-01-01
This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…
NASA Astrophysics Data System (ADS)
Holler, Mirko; Raabe, Jörg
2015-05-01
The nonaxial interferometric position measurement of rotating objects can be performed by imaging the laser beam of the interferometer to a rotating mirror which can be a sphere or a cylinder. This, however, requires such rotating mirrors to be centered on the axis of rotation as a wobble would result in loss of the interference signal. We present a tracking-type interferometer that performs such measurement in a general case where the rotating mirror may wobble on the axis of rotation, or even where the axis of rotation may be translating in space. Aside from tracking, meaning to measure and follow the position of the rotating mirror, the interferometric measurement errors induced by the tracking motion of the interferometer itself are optically compensated, preserving nanometric measurement accuracy. As an example, we show the application of this interferometer in a scanning x-ray tomography instrument.
The effect of clock, media, and station location errors on Doppler measurement accuracy
NASA Technical Reports Server (NTRS)
Miller, J. K.
1993-01-01
Doppler tracking by the Deep Space Network (DSN) is the primary radio metric data type used by navigation to determine the orbit of a spacecraft. The accuracy normally attributed to orbits determined exclusively with Doppler data is about 0.5 microradians in geocentric angle. Recently, the Doppler measurement system has evolved to a high degree of precision primarily because of tracking at X-band frequencies (7.2 to 8.5 GHz). However, the orbit determination system has not been able to fully utilize this improved measurement accuracy because of calibration errors associated with transmission media, the location of tracking stations on the Earth's surface, the orientation of the Earth as an observing platform, and timekeeping. With the introduction of Global Positioning System (GPS) data, it may be possible to remove a significant error associated with the troposphere. In this article, the effect of various calibration errors associated with transmission media, Earth platform parameters, and clocks are examined. With the introduction of GPS calibrations, it is predicted that a Doppler tracking accuracy of 0.05 microradians is achievable.
A study of GPS measurement errors due to noise and multipath interference for CGADS
NASA Technical Reports Server (NTRS)
Axelrad, Penina; MacDoran, Peter F.; Comp, Christopher J.
1996-01-01
This report describes a study performed by the Colorado Center for Astrodynamics Research (CCAR) on GPS measurement errors in the Codeless GPS Attitude Determination System (CGADS) due to noise and multipath interference. Preliminary simulation models of the CGADS receiver and orbital multipath are described. The standard FFT algorithm for processing the codeless data is described, and two alternative algorithms are also presented: an auto-regressive/least squares (AR-LS) method and a combined adaptive notch filter/least squares (ANF-ALS) method. Effects of system noise, quantization, baseband frequency selection, and Doppler rates on the accuracy of phase estimates with each of the processing methods are shown. Typical electrical phase errors for the AR-LS method are 0.2 degrees, compared to 0.3 and 0.5 degrees for the FFT and ANF-ALS algorithms, respectively. Doppler rate was found to have the largest effect on the performance.
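The baseline FFT approach estimates the electrical phase of a baseband tone from the complex value of the FFT peak bin. A minimal sketch of that idea (our own illustration; it assumes the tone sits close to a bin center, with no windowing or interpolation as a real receiver would add):

```python
import numpy as np

def fft_phase_estimate(samples, fs):
    """Estimate frequency and initial phase of a dominant real tone.

    Takes the magnitude peak of the one-sided FFT (DC excluded); for a
    tone cos(2*pi*f*t + phi) centered on bin k, spec[k] = (N/2)*exp(i*phi),
    so the bin's angle recovers phi directly.
    """
    n = len(samples)
    spec = np.fft.rfft(samples)
    k = int(np.argmax(np.abs(spec[1:])) + 1)  # skip the DC bin
    freq = k * fs / n
    phase = float(np.angle(spec[k]))
    return freq, phase
```

Off-bin tones and Doppler drift smear energy across bins and bias this estimate, which is consistent with the report's finding that Doppler rate dominates the error budget.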
Strain gage measurement errors in the transient heating of structural components
NASA Technical Reports Server (NTRS)
Richards, W. Lance
1993-01-01
Significant strain-gage errors may exist in measurements acquired in transient thermal environments if conventional correction methods are applied. Conventional correction theory was modified and a new experimental method was developed to correct indicated strain data for errors created in radiant heating environments ranging from 0.6 C/sec (1 F/sec) to over 56 C/sec (100 F/sec). In some cases the new and conventional methods differed by as much as 30 percent. Experimental and analytical results were compared to demonstrate the new technique. For heating conditions greater than 6 C/sec (10 F/sec), the indicated strain data corrected with the developed technique compared much better to analysis than the same data corrected with the conventional technique.