SEQUENTIAL TESTING OF MEASUREMENT ERRORS IN INTER-RATER RELIABILITY STUDIES
Jin, Mei; Liu, Aiyi; Chen, Zhen; Li, Zhaohai
2014-01-01
Inter-rater reliability is usually assessed by means of the intraclass correlation coefficient. Using two-way analysis of variance to model raters and subjects as random effects, we derive group sequential testing procedures for the design and analysis of reliability studies in which multiple raters evaluate multiple subjects. Compared with the conventional fixed-sample procedures, the group sequential test has a smaller average sample number. The performance of the proposed technique is examined using simulation studies, and critical values are tabulated for a range of two-stage design parameters. The methods are exemplified using data from the Physician Reliability Study for diagnosis of endometriosis. PMID:25525316
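As background for the quantity being tested sequentially, the intraclass correlation from a two-way random-effects ANOVA (subjects and raters both random) can be sketched as follows. This computes the fixed-sample ICC(2,1), not the paper's sequential procedure, using the classic Shrout-Fleiss example ratings (6 subjects, 4 raters):

```python
# ICC(2,1): two-way random effects, absolute agreement, single rater.
# ratings: n subjects (rows) x k raters (columns).

def icc_2_1(ratings):
    n = len(ratings)           # subjects
    k = len(ratings[0])        # raters
    grand = sum(sum(r) for r in ratings) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    # Sums of squares for the two-way decomposition
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((ratings[i][j] - grand) ** 2
                   for i in range(n) for j in range(k))
    ss_err = ss_total - ss_rows - ss_cols
    # Mean squares
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Shrout & Fleiss (1979) example data: 6 targets rated by 4 judges.
ratings = [[9, 2, 5, 8],
           [6, 1, 3, 2],
           [8, 4, 6, 8],
           [7, 1, 2, 6],
           [10, 5, 6, 9],
           [6, 2, 4, 7]]
icc = icc_2_1(ratings)   # ≈ 0.29 for this classic dataset
```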
ERIC Educational Resources Information Center
Kachchaf, Rachel; Solano-Flores, Guillermo
2012-01-01
We examined how rater language background affects the scoring of short-answer, open-ended test items in the assessment of English language learners (ELLs). Four native English and four native Spanish-speaking certified bilingual teachers scored 107 responses of fourth- and fifth-grade Spanish-speaking ELLs to mathematics items administered in…
How Good Are Our Raters? Rater Errors in Clinical Skills Assessment
ERIC Educational Resources Information Center
Iramaneerat, Cherdsak; Yudkowsky, Rachel
2006-01-01
A multi-faceted Rasch measurement (MFRM) model was used to analyze a clinical skills assessment of 173 fourth-year medical students in a Midwestern medical school to investigate four types of rater errors: leniency, inconsistency, halo, and restriction of range. Each student performed six clinical tasks with six standardized patients (SPs), who…
Rater Stringency Error in Performance Rating: A Contrast of Three Models.
ERIC Educational Resources Information Center
Cason, Gerald J.; Cason, Carolyn L.
The use of three remedies for errors in the measurement of ability that arise from differences in rater stringency is discussed. Models contrasted are: (1) Conventional; (2) Handicap; and (3) deterministic Rater Response Theory (RRT). General model requirements, power, bias of measures, computing cost, and complexity are contrasted. Contrasts are…
ERIC Educational Resources Information Center
Raymond, Mark R.; Harik, Polina; Clauser, Brian E.
2011-01-01
Prior research indicates that the overall reliability of performance ratings can be improved by using ordinary least squares (OLS) regression to adjust for rater effects. The present investigation extends previous work by evaluating the impact of OLS adjustment on standard errors of measurement ("SEM") at specific score levels. In addition, a…
ERIC Educational Resources Information Center
Sheehan, Dwayne P.; Lafave, Mark R.; Katz, Larry
2011-01-01
This study was designed to test the intra- and inter-rater reliability of the University of North Carolina's Balance Error Scoring System in 9- and 10-year-old children. Additionally, a modified version of the Balance Error Scoring System was tested to determine if it was more sensitive in this population ("raw scores"). Forty-six normally…
ERIC Educational Resources Information Center
Cason, Gerald J.; And Others
Prior research in a single clinical training setting has shown Cason and Cason's (1981) simplified model of their performance rating theory can improve rating reliability and validity through statistical control of rater stringency error. Here, the model was applied to clinical performance ratings of 14 cohorts (about 250 students and 200 raters)…
Agreement Measure Comparisons between Two Independent Sets of Raters.
ERIC Educational Resources Information Center
Berry, Kenneth J.; Mielke, Paul W., Jr.
1997-01-01
Describes a FORTRAN software program that calculates the probability of an observed difference between agreement measures obtained from two independent sets of raters. An example illustrates the use of the DIFFER program in evaluating undergraduate essays. (Author/SLD)
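The DIFFER program itself is FORTRAN, but the underlying idea, estimating the probability of an observed difference between agreement measures from two independent sets of raters, can be sketched with a Monte Carlo permutation test. This is a hedged illustration, not Berry and Mielke's exact algorithm, and the rating pairs are hypothetical:

```python
import random

def kappa(pairs):
    # Cohen's kappa for a list of (rater1, rater2) category pairs.
    n = len(pairs)
    cats = sorted({c for p in pairs for c in p})
    po = sum(a == b for a, b in pairs) / n
    pe = sum((sum(a == c for a, _ in pairs) / n) *
             (sum(b == c for _, b in pairs) / n) for c in cats)
    return (po - pe) / (1 - pe)

def perm_test(group1, group2, n_perm=2000, seed=1):
    # Probability of a kappa difference at least as large as observed,
    # under random reassignment of rating pairs to the two groups.
    random.seed(seed)
    observed = abs(kappa(group1) - kappa(group2))
    pooled = group1 + group2
    count = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        g1, g2 = pooled[:len(group1)], pooled[len(group1):]
        count += abs(kappa(g1) - kappa(g2)) >= observed
    return count / n_perm

# Hypothetical data: one rater pair agrees well, the other poorly.
group1 = [(1, 1), (1, 1), (2, 2), (2, 2), (1, 1), (2, 1), (2, 2), (1, 1)]
group2 = [(1, 2), (2, 1), (1, 1), (2, 2), (1, 2), (2, 2), (2, 1), (1, 1)]
p_value = perm_test(group1, group2)
```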
Lang, W Steve; Wilkerson, Judy R; Rea, Dorothy C; Quinn, David; Batchelder, Heather L; Englehart, Dierdre S; Jennings, Kelly J
2014-01-01
The purpose of this study was to examine the extent to which raters' subjectivity impacts measures of teacher dispositions using the Dispositions Assessments Aligned with Teacher Standards (DAATS) battery. This is an important component of the collection of evidence of validity and reliability of inferences made using the scale. It also provides needed support for the use of subjective affective measures in teacher training and other professional preparation programs, since these measures are often feared to be unreliable because of rater effects. It demonstrates the advantages of using the Multi-Faceted Rasch Model as a better alternative to the typical methods used in preparation programs, such as Cohen's Kappa. DAATS instruments require subjective scoring using a six-point rating scale derived from the affective taxonomy as defined by Krathwohl, Bloom, and Masia (1964). Rater effect is a serious challenge and can worsen or drift over time. Errors in rater judgment can impact the accuracy of ratings, and these effects are common, but can be lessened through training of raters and monitoring of their efforts. This effort uses the multi-faceted Rasch measurement model (MFRM) to detect and understand the nature of these effects. PMID:24992248
On Rater Agreement and Rater Training
ERIC Educational Resources Information Center
Wang, Binhong
2010-01-01
This paper first analyzes two studies on rater factors and rating criteria to raise the problem of rater agreement. The author then reveals the causes of discrepancies in rating administration by discussing rater variability and rater bias. The author argues that rater bias cannot be eliminated completely; we can only reduce the error to a…
Rater Bias and the Measurement of Support Needs
ERIC Educational Resources Information Center
Guscia, Roma; Harries, Julia; Kirby, Neil; Nettelbeck, Ted
2006-01-01
Background: The development and use of support need instruments for funding disability services is a relatively recent initiative. Although the use of these measures appears at face value to provide an objective measure of support needs, little is known about their psychometric properties, particularly with respect to rater bias and purpose of…
Measuring the Joint Agreement between Multiple Raters and a Standard.
ERIC Educational Resources Information Center
Berry, Kenneth J.; Mielke, Paul W., Jr.
1997-01-01
A FORTRAN subroutine is presented to calculate a generalized measure of agreement between multiple raters and a set of correct responses at any level of measurement and among multiple responses, along with the associated probability value, under the null hypothesis. (Author)
Measuring Essay Assessment: Intra-Rater and Inter-Rater Reliability
ERIC Educational Resources Information Center
Kayapinar, Ulas
2014-01-01
Problem Statement: There have been many attempts to research the effective assessment of writing ability, and many proposals for how this might be done. In this sense, rater reliability plays a crucial role in making vital decisions about testees at different turning points of both educational and professional life. Intra-rater and inter-rater…
Approximate measurement invariance in cross-classified rater-mediated assessments
Kelcey, Ben; McGinn, Dan; Hill, Heather
2014-01-01
An important assumption underlying meaningful comparisons of scores in rater-mediated assessments is that measurement is commensurate across raters. When raters differentially apply the standards established by an instrument, scores from different raters are on fundamentally different scales and no longer preserve a common meaning and basis for comparison. In this study, we developed a method to accommodate measurement noninvariance across raters when measurements are cross-classified within two distinct hierarchical units. We formulated cross-classified graded response models with random item effects and used random discrimination and threshold effects to test, calibrate, and account for measurement noninvariance among raters. By leveraging empirical estimates of rater-specific deviations in the discrimination and threshold parameters, the proposed method allows us to identify noninvariant items and empirically estimate and directly adjust for this noninvariance within a cross-classified framework. Within the context of teaching evaluations, the results of a case study suggested substantial noninvariance across raters and that establishing an approximately invariant scale through random item effects improves model fit and predictive validity. PMID:25566145
Yoo, Won-Gyu
2016-07-01
[Purpose] This study investigated intra-rater reliability when using a tympanic thermometer under different self-measurement conditions. [Subjects and Methods] Ten males participated. Intra-rater reliability was assessed by comparing the values under three conditions of measurement using a tympanic thermometer. Intraclass correlation coefficients were used to assess intra-rater reliability. [Results] According to the intraclass correlation coefficient analysis, reliability could be ranked according to the conditions of measurement. [Conclusion] The results showed that self-measurement of body temperature is more precise when combined with common sense and basic education about the anatomy of the eardrum. PMID:27512269
Gyagenda, Ismail S; Engelhard, George
2009-01-01
This study i) examined the rater, domain, and gender influences on the assessed quality of students' writing ability and ii) described and compared different approaches for examining these influences based on classical and modern measurement theories. Twenty raters were randomly selected from a group of 87 trained raters contracted to rate essays for the annual Georgia High School Writing Test. Each rater scored the entire set of 375 essays on a 1-4 rating scale (366 essays were used in the analyses because nine cases had missing values and were dropped). Two approaches, the classical approach and the item response theory-based Rasch model, were used to obtain psychometric measures of reliability, including inter-rater reliability, and to conduct statistical analyses with rater and gender as the predictor variables and the total and domain scores as the dependent variables. To achieve the second purpose, the Classical Test Model and the Rasch model were compared and contrasted and their strengths and limitations discussed as they relate to student writing assessment. Analyses from both approaches indicated statistically significant rater and gender effects on student writing. Using domain scores as the dependent variables, there was a statistically significant rater-by-gender interaction effect at the multivariate level, but not at the univariate level. The Rasch analysis indicated a statistically significant rater-by-gender effect. The comparison between the two approaches highlighted their strengths and limitations, their different measurement and statistical models, and their different procedures.
Analysis of Rater Severity on Written Expression Exam Using Many Faceted Rasch Measurement
ERIC Educational Resources Information Center
Prieto, Gerardo; Nieto, Eloísa
2014-01-01
This paper describes how a Many Faceted Rasch Measurement (MFRM) approach can be applied to performance assessment focusing on rater analysis. The article provides an introduction to MFRM, a description of MFRM analysis procedures, and an example to illustrate how to examine the effects of various sources of variability on test takers'…
Compact disk error measurements
NASA Technical Reports Server (NTRS)
Howe, D.; Harriman, K.; Tehranchi, B.
1993-01-01
The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
Measuring Rater Reliability on a Special Education Observation Tool
ERIC Educational Resources Information Center
Semmelroth, Carrie Lisa; Johnson, Evelyn
2014-01-01
This study used generalizability theory to measure reliability on the Recognizing Effective Special Education Teachers (RESET) observation tool designed to evaluate special education teacher effectiveness. At the time of this study, the RESET tool included three evidence-based instructional practices (direct, explicit instruction; whole-group…
ERIC Educational Resources Information Center
Johnson, David; VanBrackle, Lewis
2012-01-01
Raters of Georgia's (USA) state-mandated college-level writing exam, which is intended to ensure a minimal university-level writing competency, are trained to grade holistically when assessing these exams. A guiding principle in holistic grading is to not focus exclusively on any one aspect of writing but rather to give equal weight to style,…
Awatani, Takenori; Mori, Seigo; Shinohara, Junji; Koshiba, Hiroya; Nariai, Miki; Tatsumi, Yasutaka; Nagata, Akinori; Morikita, Ikuhiro
2016-01-01
[Purpose] The purpose of the present study was to establish the same-session and between-day intra-rater reliability of measurements of extensor strength in the maximum abducted position (MABP) using a hand-held dynamometer (HHD). [Subjects] Thirteen healthy volunteers (10 male, 3 female; mean ± SD age: 19.8 ± 0.8 y) participated in the study. [Methods] Participants in the prone position with maximum abduction of the shoulder were instructed to hold the contraction against the ground reaction force, and peak isometric force was recorded using the HHD on the floor. Participants performed maximum isometric contractions lasting 3 s, with 3 trials in one session. Between-day measurements were performed in 2 sessions separated by a 1-week interval. Intra-rater reliability was determined using intraclass correlation coefficients (ICC). Systematic errors were assessed using Bland-Altman analysis for the between-day data. [Results] ICC values for same-session and between-day data were found to be “almost perfect”. No systematic error was present; only random error was observed. [Conclusion] The measurement method used in this study can easily control for experimental conditions and allows precise measurement, because problems arising from inadequate stabilization and the influence of tester strength are removed. Thus, measurement of extensor strength in the MABP is useful for muscle strength assessment. PMID:27134388
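The Bland-Altman step used for the between-day data, estimating the systematic error (bias) and 95% limits of agreement between the two sessions, can be sketched as follows. The force values here are hypothetical, not the study's measurements:

```python
# Bland-Altman analysis for two repeated measurement sessions:
# mean difference (bias) and 95% limits of agreement.
from statistics import mean, stdev

def bland_altman(session1, session2):
    diffs = [a - b for a, b in zip(session1, session2)]
    bias = mean(diffs)                           # systematic error estimate
    sd = stdev(diffs)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
    return bias, loa

day1 = [78.2, 80.1, 75.6, 82.3, 79.0]   # hypothetical forces (N), session 1
day2 = [77.8, 81.0, 74.9, 82.6, 78.2]   # session 2, one week later
bias, (lo, hi) = bland_altman(day1, day2)
```

A bias near zero with narrow limits of agreement is what the abstract's "no systematic error, only random error" conclusion corresponds to.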
Podsakoff, Nathan P; Whiting, Steven W; Welsh, David T; Mai, Ke Michael
2013-09-01
Despite the increased attention paid to biases attributable to common method variance (CMV) over the past 50 years, researchers have only recently begun to systematically examine the effect of specific sources of CMV in previously published empirical studies. Our study contributes to this research by examining the extent to which common rater, item, and measurement context characteristics bias the relationships between organizational citizenship behaviors and performance evaluations using a mixed-effects analytic technique. Results from 173 correlations reported in 81 empirical studies (N = 31,146) indicate that even after controlling for study-level factors, common rater and anchor point number similarity substantially biased the focal correlations. Indeed, these sources of CMV (a) led to estimates that were between 60% and 96% larger when comparing measures obtained from a common rater, versus different raters; (b) led to 39% larger estimates when a common source rated the scales using the same number, versus a different number, of anchor points; and (c) when taken together with other study-level predictors, accounted for over half of the between-study variance in the focal correlations. We discuss the implications for researchers and practitioners and provide recommendations for future research. PMID:23565897
Human errors and measurement uncertainty
NASA Astrophysics Data System (ADS)
Kuselman, Ilya; Pennecchi, Francesca
2015-04-01
Evaluating the residual risk of human errors in a measurement and testing laboratory, remaining after the error reduction by the laboratory quality system, and quantifying the consequences of this risk for the quality of the measurement/test results are discussed based on expert judgments and Monte Carlo simulations. A procedure for evaluation of the contribution of the residual risk to the measurement uncertainty budget is proposed. Examples are provided using earlier published sets of expert judgments on human errors in pH measurement of groundwater, elemental analysis of geological samples by inductively coupled plasma mass spectrometry, and multi-residue analysis of pesticides in fruits and vegetables. The human error contribution to the measurement uncertainty budget in the examples was not negligible, yet also not dominant. This was assessed as a good risk management result.
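The Monte Carlo idea can be sketched in miniature: treat a human error as a rare event that, when it occurs, shifts the result, and let simulation fold that risk into the combined uncertainty. All numbers below are illustrative assumptions, not the paper's expert-judgment values:

```python
# Minimal Monte Carlo sketch of adding a residual human-error risk to a
# measurement uncertainty budget (hypothetical parameters).
import random

random.seed(42)
N = 100_000
true_value = 7.00            # e.g. a pH value
u_instr = 0.02               # standard uncertainty of the instrument
p_error = 0.01               # residual probability of a human error
error_shift = 0.10           # typical bias introduced when one occurs

results = []
for _ in range(N):
    x = random.gauss(true_value, u_instr)
    if random.random() < p_error:              # rare human-error event
        x += random.choice((-1, 1)) * error_shift
    results.append(x)

m = sum(results) / N
u_combined = (sum((x - m) ** 2 for x in results) / (N - 1)) ** 0.5
# Analytically, u_combined ≈ sqrt(u_instr**2 + p_error * error_shift**2) ≈ 0.022,
# visibly above the instrument-only value of 0.020: not negligible, not dominant.
```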
Quantile Regression With Measurement Error
Wei, Ying; Carroll, Raymond J.
2010-01-01
Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. PMID:20305802
The Effects of Rater Training on Inter-Rater Agreement
ERIC Educational Resources Information Center
Pufpaff, Lisa A.; Clarke, Laura; Jones, Ruth E.
2015-01-01
This paper addresses the effects of rater training on the rubric-based scoring of three preservice teacher candidate performance assessments. This project sought to evaluate the consistency of ratings assigned to student learning outcome measures being used for program accreditation and to explore the need for rater training in order to increase…
Sršen, Katja Groleger; Vidmar, Gaj; Pikl, Maša; Vrečar, Irena; Burja, Cirila; Krušec, Klavdija
2012-06-01
The Halliwick concept is widely used in different settings to promote joyful movement in water and swimming. To assess the swimming skills and progression of an individual swimmer, a valid and reliable measure should be used. The Halliwick-concept-based Swimming with Independent Measure (SWIM) was introduced for this purpose. We aimed to determine its content validity and inter-rater reliability. Fifty-four healthy children, 3.5-11 years old, from a mainstream swimming program participated in a content validity study. They were evaluated with SWIM and the national evaluation system of swimming abilities (classifying children into seven categories). To study the inter-rater reliability of SWIM, we included 37 children and youth from a Halliwick swimming program, aged 7-22 years, who were evaluated by two Halliwick instructors independently. The average SWIM score differed between national evaluation system categories and followed the expected order (P<0.001), whereby a ceiling effect was observed in the higher categories. High inter-rater reliability was found for all 11 SWIM items. The lowest reliability was observed for item G (sagittal rotation), although the estimates were still above 0.9. As expected, the highest reliability was observed for the total score (intraclass correlation 0.996). The validity of SWIM with respect to the national evaluation system of swimming abilities is high until the point where a swimmer is well adapted to water and already able to learn some swimming techniques. The inter-rater reliability of SWIM is very high; thus, we believe that SWIM can be used in further research and practice to follow the progress of swimmers.
Measurement error in geometric morphometrics.
Fruciano, Carmelo
2016-06-01
Geometric morphometrics, a set of methods for the statistical analysis of shape once hailed as a revolutionary advance in the analysis of morphology, is now mature and routinely used in ecology and evolution. However, a factor often disregarded in empirical studies is the presence and extent of measurement error. This is potentially a very serious issue because random measurement error can inflate the amount of variance and, since many statistical analyses are based on the amount of "explained" relative to "residual" variance, can result in loss of statistical power. On the other hand, systematic bias can affect statistical analyses by biasing the results (i.e., variation due to bias is incorporated in the analysis and treated as biologically meaningful variation). Here, I briefly review common sources of error in geometric morphometrics. I then review the most commonly used methods to measure and account for both random and non-random measurement error, providing a worked example using a real dataset.
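A standard way to quantify the random measurement error discussed here is repeatability from a one-way ANOVA on repeated digitizations of the same specimens. The sketch below is a univariate version of that logic (the morphometric version applies the same decomposition to shape variables), with toy data:

```python
# Repeatability: share of variance due to real among-specimen differences
# rather than random measurement (digitizing) error.
# Each inner list holds k repeated measurements of one specimen.

def repeatability(measurements):
    n = len(measurements)                      # specimens
    k = len(measurements[0])                   # repeats per specimen
    grand = sum(sum(m) for m in measurements) / (n * k)
    ms_among = k * sum((sum(m) / k - grand) ** 2
                       for m in measurements) / (n - 1)
    ms_within = sum((x - sum(m) / k) ** 2
                    for m in measurements for x in m) / (n * (k - 1))
    s2_among = (ms_among - ms_within) / k      # among-specimen variance
    return s2_among / (s2_among + ms_within)

# Three specimens digitized three times each; digitizing error is small
# relative to real differences, so repeatability should be high.
demo = [[10.1, 10.2, 9.9],
        [12.0, 11.8, 12.1],
        [8.0, 8.2, 7.9]]
r = repeatability(demo)
```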
2014-01-01
Background: Concurrent validity and intra-rater reliability of a customized Android phone application for measuring cervical-spine range of motion (ROM) have not previously been validated against a gold-standard three-dimensional motion analysis (3DMA) system. Findings: Twenty-one healthy individuals (age: 31 ± 9.1 years; 11 male) participated, with 16 re-examined for intra-rater reliability 1–7 days later. An Android phone was fixed on a helmet, which was then securely fastened on the participant's head. Cervical-spine ROM in flexion, extension, lateral flexion and rotation was measured in sitting, with concurrent measurements obtained from both the 3DMA system and the phone. The phone demonstrated moderate to excellent concurrent validity (ICC = 0.53-0.98, Spearman ρ = 0.52-0.98) for ROM measurements in cervical flexion, extension, lateral flexion and rotation. However, cervical rotation demonstrated both proportional and fixed bias. Excellent intra-rater reliability was demonstrated for cervical flexion, extension and lateral flexion (ICC = 0.82-0.90), but poor for right- and left-rotation (ICC = 0.05-0.33) using the phone. A possible reason for this outcome is that flexion, extension and lateral-flexion measurements are detected by gravity-dependent accelerometers, while rotation measurements are detected by the magnetometer, which can be adversely affected by surrounding magnetic fields. Conclusion: The results of this study demonstrate that the tested Android phone application is valid and reliable for measuring cervical-spine ROM in flexion, extension and lateral flexion, but not in rotation, likely due to magnetic interference. The clinical implication of this study is that therapists should be mindful of the plane of measurement when using the Android phone to measure cervical-spine ROM. PMID:24742001
Errors in airborne flux measurements
NASA Astrophysics Data System (ADS)
Mann, Jakob; Lenschow, Donald H.
1994-07-01
We present a general approach for estimating systematic and random errors in eddy correlation fluxes and flux gradients measured by aircraft in the convective boundary layer as a function of the length of the flight leg, or of the cutoff wavelength of a highpass filter. The estimates are obtained from empirical expressions for various length scales in the convective boundary layer and they are experimentally verified using data from the First ISLSCP (International Satellite Land Surface Climatology Experiment) Field Experiment (FIFE), the Air Mass Transformation Experiment (AMTEX), and the Electra Radome Experiment (ELDOME). We show that the systematic flux and flux gradient errors can be important if fluxes are calculated from a set of several short flight legs or if the vertical velocity and scalar time series are high-pass filtered. While the systematic error of the flux is usually negative, that of the flux gradient can change sign. For example, for temperature flux divergence the systematic error changes from negative to positive about a quarter of the way up in the convective boundary layer.
Measuring Test Measurement Error: A General Approach
ERIC Educational Resources Information Center
Boyd, Donald; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James
2013-01-01
Test-based accountability as well as value-added assessments and much experimental and quasi-experimental research in education rely on achievement tests to measure student skills and knowledge. Yet, we know little regarding fundamental properties of these tests, an important example being the extent of measurement error and its implications for…
Wolfe, E W; Moulder, B C; Myford, C M
2001-01-01
This paper describes a class of rater effects that depict rater-by-time interactions. We refer to this class of rater effects as DRIFT: differential rater functioning over time. This article describes several types of DRIFT (primacy/recency, differential centrality/extremism, and practice/fatigue) and Rasch measurement procedures designed to identify these types of DRIFT in rating data. These procedures are applied to simulated data and are shown to be useful in classifying raters as being aberrant or non-aberrant for primacy, recency, and differential centrality and extremism, particularly for moderate or larger effect sizes. Rates of correct classification for practice and fatigue were lower, and statistical power exceeded .50 only with very large effect sizes. Type I error rates (i.e., incorrect nomination) were near expected levels in all cases.
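As a much simpler screen than the Rasch-based procedures described, linear drift of the practice/fatigue kind can be flagged by correlating a rater's residuals with presentation order. This is a hedged sketch on simulated data with invented effect sizes, not the article's MFRM method:

```python
# Screen for rater drift: correlate residuals (rating minus consensus)
# with presentation order; a strong correlation suggests drift over time.
import random

def order_residual_corr(residuals):
    # Pearson correlation between residuals and their presentation order.
    n = len(residuals)
    t = list(range(n))
    mt, mr = sum(t) / n, sum(residuals) / n
    cov = sum((ti - mt) * (ri - mr) for ti, ri in zip(t, residuals))
    vt = sum((ti - mt) ** 2 for ti in t)
    vr = sum((ri - mr) ** 2 for ri in residuals)
    return cov / (vt * vr) ** 0.5

random.seed(7)
n_ratings = 200

# A stable rater: residuals are pure noise.
stable = [random.gauss(0, 2) for _ in range(n_ratings)]
# A drifting rater: severity increases linearly over time.
drifting = [0.03 * t + random.gauss(0, 2) for t in range(n_ratings)]

r_stable = order_residual_corr(stable)    # near zero
r_drift = order_residual_corr(drifting)   # clearly positive
```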
Better Stability with Measurement Errors
NASA Astrophysics Data System (ADS)
Argun, Aykut; Volpe, Giovanni
2016-06-01
Often it is desirable to stabilize a system around an optimal state. This can be effectively accomplished using feedback control, where the system deviation from the desired state is measured in order to determine the magnitude of the restoring force to be applied. Contrary to conventional wisdom, i.e. that a more precise measurement is expected to improve the system stability, here we demonstrate that a certain degree of measurement error can improve the system stability. We exemplify the implications of this finding with numerical examples drawn from various fields, such as the operation of a temperature controller, the confinement of a microscopic particle, the localization of a target by a microswimmer, and the control of a population.
Improved Error Thresholds for Measurement-Free Error Correction
NASA Astrophysics Data System (ADS)
Crow, Daniel; Joynt, Robert; Saffman, M.
2016-09-01
Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10⁻³ to 10⁻⁴, comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.
Measurement Error and Equating Error in Power Analysis
ERIC Educational Resources Information Center
Phillips, Gary W.; Jiang, Tao
2016-01-01
Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…
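The mechanism the abstract alludes to can be illustrated with the classical attenuation formula: unreliability in the outcome measure shrinks the observable effect size by a factor of sqrt(reliability), which in turn lowers power. A minimal sketch using a normal-approximation two-sample z-test (the sample size, effect size, and reliability values are illustrative, not taken from the paper):

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def power_two_sample(d_true, n_per_group, reliability=1.0, alpha_z=1.959963984540054):
    """Approximate power of a two-sided two-sample z-test when the outcome
    is measured with error: the observable standardized effect is
    attenuated by sqrt(reliability)."""
    d_obs = d_true * math.sqrt(reliability)   # classical attenuation
    se = math.sqrt(2.0 / n_per_group)         # SE of the mean difference, in d units
    return norm_cdf(d_obs / se - alpha_z)

print(round(power_two_sample(0.5, 64, reliability=1.0), 3))  # perfectly reliable test
print(round(power_two_sample(0.5, 64, reliability=0.7), 3))  # same design, noisy test
```

With these illustrative numbers, dropping the reliability from 1.0 to 0.7 costs roughly fifteen points of power at the same sample size, which is the kind of psychometric effect the paper demonstrates.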
Correlated measurement error hampers association network inference.
Kaduk, Mateusz; Hoefsloot, Huub C J; Vis, Daniel J; Reijmers, Theo; van der Greef, Jan; Smilde, Age K; Hendriks, Margriet M W B
2014-09-01
Modern chromatography-based metabolomics measurements generate large amounts of data in the form of abundances of metabolites. An increasingly popular way of representing and analyzing such data is by means of association networks. Ideally, such a network can be interpreted in terms of the underlying biology. A property of chromatography-based metabolomics data is that the measurement error structure is complex: apart from the usual (random) instrumental error there is also correlated measurement error. This is intrinsic to the way the samples are prepared and the analyses are performed and cannot be avoided. The impact of correlated measurement errors on (partial) correlation networks can be large and is not always predictable. The interplay between relative amounts of uncorrelated measurement error, correlated measurement error and biological variation defines this impact. Using chromatography-based time-resolved lipidomics data obtained from a human intervention study we show how partial correlation based association networks are influenced by correlated measurement error. We show how the effect of correlated measurement error on partial correlations is different for direct and indirect associations. For direct associations the correlated measurement error usually has no negative effect on the results, while for indirect associations, depending on the relative size of the correlated measurement error, results can become unreliable. The aim of this paper is to generate awareness of the existence of correlated measurement errors and their influence on association networks. Time series lipidomics data is used for this purpose, as it makes it possible to visually distinguish the correlated measurement error from a biological response. Underestimating the phenomenon of correlated measurement error will result in the suggestion of biologically meaningful results that in reality rest solely on complicated error structures. Using proper experimental designs that allow
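The effect on indirect associations can be reproduced in a few lines. In this hypothetical simulation (the variable names, noise levels, and error loadings are invented for illustration), a → b → c is a biological chain, so the partial correlation of a and c given b is near zero; adding a shared per-sample error that loads unevenly on the three channels creates a spurious "direct" a–c association:

```python
import math
import random

def pearson(x, y):
    """Pearson correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def partial_corr(x, y, given):
    """First-order partial correlation of x and y controlling for `given`."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, given), pearson(y, given)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

rng = random.Random(7)
n = 5000
a = [rng.gauss(0, 1) for _ in range(n)]
b = [v + rng.gauss(0, 0.5) for v in a]       # chain a -> b -> c:
c = [v + rng.gauss(0, 0.5) for v in b]       # a and c are linked only via b
e = [rng.gauss(0, 0.8) for _ in range(n)]    # shared per-sample prep error...
am = [v + 1.0 * w for v, w in zip(a, e)]
bm = [v + 0.2 * w for v, w in zip(b, e)]     # ...loading differently per channel
cm = [v + 1.0 * w for v, w in zip(c, e)]

print(round(partial_corr(a, c, b), 3))    # near 0: indirect link vanishes given b
print(round(partial_corr(am, cm, bm), 3)) # inflated by the correlated error
```

This matches the abstract's claim: direct associations survive, but an indirect pair can look directly connected once correlated measurement error enters all channels.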
Impact of Measurement Error on Synchrophasor Applications
Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.; Zhao, Jiecheng; Tan, Jin; Wu, Ling; Zhan, Lingwei
2015-07-01
Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.
ERIC Educational Resources Information Center
Douglas, Scott Roy
2015-01-01
Independent confirmation that vocabulary in use unfolds across levels of performance as expected can contribute to a more complete understanding of validity in standardized English language tests. This study examined the relationship between Lexical Frequency Profiling (LFP) measures and rater judgements of test-takers' overall levels of…
Errors and Uncertainty in Physics Measurement.
ERIC Educational Resources Information Center
Blasiak, Wladyslaw
1983-01-01
Classifies errors as either systematic or blunder and uncertainties as either systematic or random. Discusses use of error/uncertainty analysis in direct/indirect measurement, describing the process of planning experiments to ensure lowest possible uncertainty. Also considers appropriate level of error analysis for high school physics students'…
Error-compensation measurements on polarization qubits
NASA Astrophysics Data System (ADS)
Hou, Zhibo; Zhu, Huangjun; Xiang, Guo-Yong; Li, Chuan-Feng; Guo, Guang-Can
2016-06-01
Systematic errors are inevitable in most measurements performed in real life because of imperfect measurement devices. Reducing systematic errors is crucial to ensuring the accuracy and reliability of measurement results. To this end, delicate error-compensation design is often necessary in addition to device calibration to reduce the dependence of the systematic error on the imperfection of the devices. The art of error-compensation design is well appreciated in nuclear magnetic resonance systems through the use of composite pulses. In contrast, there are few works on reducing systematic errors in quantum optical systems. Here we propose an error-compensation design to reduce the systematic error in projective measurements on a polarization qubit. It can reduce the systematic error to the second order of the phase errors of both the half-wave plate (HWP) and the quarter-wave plate (QWP) as well as the angle error of the HWP. This technique is then applied to experiments on quantum state tomography on polarization qubits, leading to a 20-fold reduction in the systematic error. Our study may find applications in high-precision tasks in polarization optics and quantum optics.
Error margin for antenna gain measurements
NASA Technical Reports Server (NTRS)
Cable, V.
2002-01-01
The specification of measured antenna gain is incomplete without knowing the error of the measurement. Also, unless gain is measured many times for a single antenna or over many identical antennas, the uncertainty or error in a single measurement is only an estimate. In this paper, we will examine in detail a typical error budget for common antenna gain measurements. We will also compute the gain uncertainty for a specific UHF horn test that was recently performed on the Jet Propulsion Laboratory (JPL) antenna range. The paper concludes with comments on these results and how they compare with the 'unofficial' JPL range standard of +/- ?.
Error latency measurements in symbolic architectures
NASA Technical Reports Server (NTRS)
Young, L. T.; Iyer, R. K.
1991-01-01
Error latency, the time that elapses between the occurrence of an error and its detection, has a significant effect on reliability. In computer systems, failure rates can be elevated during a burst of system activity due to increased detection of latent errors. A hybrid monitoring environment is developed to measure the error latency distribution of errors occurring in main memory. The objective of this study is to develop a methodology for gauging the dependability of individual data categories within a real-time application. The hybrid monitoring technique is novel in that it selects and categorizes a specific subset of the available blocks of memory to monitor. The precise times of reads and writes are collected, so no actual faults need be injected. Unlike previous monitoring studies that rely on a periodic sampling approach or on statistical approximation, this new approach permits continuous monitoring of referencing activity and precise measurement of error latency.
Inter-rater reliability of a video-analysis method measuring low-back load in a field situation.
Coenen, Pieter; Kingma, Idsart; Boot, Cécile R L; Bongers, Paulien M; van Dieën, Jaap H
2013-09-01
Valid and reliable low-back load assessment tools that can be used in field situations are needed for epidemiologic studies and for ergonomic practice. The aim of this study was to assess the inter-rater reliability of a low-back load video-analysis method in a field setting. Five raters analyzed 50 work site manual material handling tasks of 14 workers. Peak and mean moments at the level of L5S1, and segment angles, were obtained using the video-analysis method. Intra-class correlation coefficients (ICCs) and median standard deviations across raters were calculated. ICCs revealed excellent inter-rater reliability (>0.9) for peak and mean moments; ICCs of segment angles were variable. Median standard deviations showed relatively small inter-rater variance for moments (standard deviation <10 Nm) and segment angle variation ranging from 0° to 20°. The proposed video-analysis method provides a reliable tool for obtaining low-back loads from occupational field tasks.
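The ICC used in studies like this one comes from a two-way ANOVA decomposition. A minimal sketch of ICC(2,1) (two-way random effects, single rater, absolute agreement), computed from the mean squares; the example ratings table is invented:

```python
def icc_2_1(table):
    """ICC(2,1) from an n-subjects x k-raters table of ratings."""
    n, k = len(table), len(table[0])
    grand = sum(sum(row) for row in table) / (n * k)
    rows = [sum(row) / k for row in table]                      # subject means
    cols = [sum(table[i][j] for i in range(n)) / n for j in range(k)]  # rater means
    msr = k * sum((r - grand) ** 2 for r in rows) / (n - 1)     # subjects mean square
    msc = n * sum((c - grand) ** 2 for c in cols) / (k - 1)     # raters mean square
    sse = sum((table[i][j] - rows[i] - cols[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))                              # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

ratings = [[7, 8], [5, 6], [9, 8], [4, 4], [6, 5]]   # 5 subjects x 2 raters (invented)
print(round(icc_2_1(ratings), 3))
```

With perfect agreement (every rater giving identical scores to each subject) the formula returns 1, and it shrinks toward 0 as rater disagreement grows relative to between-subject variance.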
Prediction with measurement errors in finite populations
Singer, Julio M; Stanek, Edward J; Lencina, Viviana B; González, Luz Mery; Li, Wenjun; Martino, Silvina San
2011-01-01
We address the problem of selecting the best linear unbiased predictor (BLUP) of the latent value (e.g., serum glucose fasting level) of sample subjects with heteroskedastic measurement errors. Using a simple example, we compare the usual mixed model BLUP to a similar predictor based on a mixed model framed in a finite population (FPMM) setup with two sources of variability, the first of which corresponds to simple random sampling and the second, to heteroskedastic measurement errors. Under this last approach, we show that when measurement errors are subject-specific, the BLUP shrinkage constants are based on a pooled measurement error variance as opposed to the individual ones generally considered for the usual mixed model BLUP. In contrast, when the heteroskedastic measurement errors are measurement condition-specific, the FPMM BLUP involves different shrinkage constants. We also show that in this setup, when measurement errors are subject-specific, the usual mixed model predictor is biased but has a smaller mean squared error than the FPMM BLUP, which points to some difficulties in the interpretation of such predictors. PMID:22162621
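The difference between subject-specific and pooled shrinkage can be made concrete. Below is a schematic sketch of the standard BLUP shrinkage constant for a subject mean of m replicates; the two-subject setup and all variance values are invented for illustration, and this is not the paper's FPMM estimator:

```python
def shrinkage(sigma2_b, sigma2_e, m):
    """BLUP shrinkage of a subject mean toward the overall mean:
    sigma2_b / (sigma2_b + sigma2_e / m)."""
    return sigma2_b / (sigma2_b + sigma2_e / m)

sigma2_b = 4.0                       # between-subject variance of latent values
errors = {"s1": 1.0, "s2": 9.0}      # heteroskedastic measurement-error variances
pooled = sum(errors.values()) / len(errors)

for sid, s2e in errors.items():
    print(sid,
          round(shrinkage(sigma2_b, s2e, m=2), 3),     # subject-specific constant
          round(shrinkage(sigma2_b, pooled, m=2), 3))  # pooled-variance constant
```

A precisely measured subject (s1) is shrunk less under subject-specific variances than under the pooled variance, and vice versa for a noisily measured subject (s2), which is the contrast between the two predictors the abstract discusses.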
Schless, Simon-Henri; Desloovere, Kaat; Aertbeliën, Erwin; Molenaers, Guy; Huenaerts, Catherine; Bar-On, Lynn
2015-01-01
Aim: Despite the impact of spasticity, there is a lack of objective, clinically reliable and valid tools for its assessment. This study aims to evaluate the reliability of various performance- and spasticity-related parameters collected with a manually controlled instrumented spasticity assessment in four lower limb muscles in children with cerebral palsy (CP). Method: The lateral gastrocnemius, medial hamstrings, rectus femoris and hip adductors of 12 children with spastic CP (12.8 years, ±4.13 years, bilateral/unilateral involvement n=7/5) were passively stretched in the sagittal plane at incremental velocities. Muscle activity, joint motion, and torque were synchronously recorded using electromyography, inertial sensors, and a force/torque load-cell. Reliability was assessed on three levels: (1) intra- and (2) inter-rater within session, and (3) intra-rater between session. Results: Parameters were found to be reliable in all three analyses, with 90% containing intra-class correlation coefficients >0.6, and 70% of standard error of measurement values <20% of the mean values. The most reliable analysis was intra-rater within session, followed by intra-rater between session, and then inter-rater within session. The Adds evaluation had a slightly lower level of reliability than that of the other muscles. Conclusions: Limited intrinsic/extrinsic errors were introduced by repeated stretch repetitions. The parameters were more reliable when the same rater, rather than different raters, performed the evaluation. Standardisation and training should be further improved to reduce extrinsic error when different raters perform the measurement. Errors were also muscle specific, or related to the measurement set-up. They need to be accounted for, in particular when assessing pre-post interventions or longitudinal follow-up. The parameters of the instrumented spasticity assessment demonstrate a wide range of applications for both research and clinical environments in the
ERIC Educational Resources Information Center
Kahraman, Nilufer; Brown, Crystal B.
2015-01-01
Psychometric models based on structural equation modeling framework are commonly used in many multiple-choice test settings to assess measurement invariance of test items across examinee subpopulations. The premise of the current article is that they may also be useful in the context of performance assessment tests to test measurement invariance…
Rater Cognition: Implications for Validity
ERIC Educational Resources Information Center
Bejar, Issac I.
2012-01-01
The scoring process is critical in the validation of tests that rely on constructed responses. Documenting that readers carry out the scoring in ways consistent with the construct and measurement goals is an important aspect of score validity. In this article, rater cognition is approached as a source of support for a validity argument for scores…
Power Measurement Errors on a Utility Aircraft
NASA Technical Reports Server (NTRS)
Bousman, William G.
2002-01-01
Extensive flight test data obtained from two recent performance tests of a UH 60A aircraft are reviewed. A power difference is calculated from the power balance equation and is used to examine power measurement errors. It is shown that the baseline measurement errors are highly non-Gaussian in their frequency distribution and are therefore influenced by additional, unquantified variables. Linear regression is used to examine the influence of other variables and it is shown that a substantial portion of the variance depends upon measurements of atmospheric parameters. Correcting for temperature dependence, although reducing the variance in the measurement errors, still leaves unquantified effects. Examination of the power difference over individual test runs indicates significant errors from drift, although it is unclear how these may be corrected. In an idealized case, where the drift is correctable, it is shown that the power measurement errors are significantly reduced and the error distribution is Gaussian. A new flight test program is recommended that will quantify the thermal environment for all torque measurements on the UH 60. Subsequently, the torque measurement systems will be recalibrated based on the measured thermal environment and a new power measurement assessment performed.
Protecting weak measurements against systematic errors
NASA Astrophysics Data System (ADS)
Pang, Shengshi; Alonso, Jose Raul Gonzalez; Brun, Todd A.; Jordan, Andrew N.
2016-07-01
In this work, we consider the systematic error of quantum metrology by weak measurements under decoherence. We derive the systematic error of maximum likelihood estimation in general to the first-order approximation of a small deviation in the probability distribution and study the robustness of standard weak measurement and postselected weak measurements against systematic errors. We show that, with a large weak value, the systematic error of a postselected weak measurement when the probe undergoes decoherence can be significantly lower than that of a standard weak measurement. This indicates another advantage of weak-value amplification in improving the performance of parameter estimation. We illustrate the results by an exact numerical simulation of decoherence arising from a bosonic mode and compare it to the first-order analytical result we obtain.
Laplace approximation in measurement error models.
Battauz, Michela
2011-05-01
Likelihood analysis for regression models with measurement errors in explanatory variables typically involves integrals that do not have a closed-form solution. In this case, numerical methods such as Gaussian quadrature are generally employed. However, when the dimension of the integral is large, these methods become computationally demanding or even unfeasible. This paper proposes the use of the Laplace approximation to deal with measurement error problems when the likelihood function involves high-dimensional integrals. The cases considered are generalized linear models with multiple covariates measured with error and generalized linear mixed models with measurement error in the covariates. The asymptotic order of the approximation and the asymptotic properties of the Laplace-based estimator for these models are derived. The method is illustrated using simulations and real-data analysis.
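The idea can be sketched in one dimension. The Laplace approximation replaces ∫exp(h(x))dx by the Gaussian integral around the mode x̂ of h, i.e. exp(h(x̂))·sqrt(2π/(−h''(x̂))). The integrand below is invented for illustration and the derivatives are taken numerically:

```python
import math

def laplace(h, x0, step=1e-4):
    """Laplace approximation to the integral of exp(h(x)) over the real line."""
    # crude Newton iterations with finite differences to locate the mode
    for _ in range(50):
        d1 = (h(x0 + step) - h(x0 - step)) / (2 * step)
        d2 = (h(x0 + step) - 2 * h(x0) + h(x0 - step)) / step ** 2
        x0 -= d1 / d2
    d2 = (h(x0 + step) - 2 * h(x0) + h(x0 - step)) / step ** 2
    return math.exp(h(x0)) * math.sqrt(2 * math.pi / -d2)

def quadrature(h, lo=-10.0, hi=10.0, n=20000):
    """Brute-force midpoint rule, used here as the 'exact' reference."""
    w = (hi - lo) / n
    return sum(math.exp(h(lo + (i + 0.5) * w)) for i in range(n)) * w

h = lambda x: -(x - 1.0) ** 2 - 0.3 * x ** 4   # a peaked, non-Gaussian log-integrand
print(round(laplace(h, 0.5), 4), round(quadrature(h), 4))
```

For a well-peaked integrand the one-point Laplace value lands within a few percent of the quadrature result, and unlike quadrature its cost does not grow exponentially with the dimension of the integral, which is the paper's motivation.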
Measuring Cyclic Error in Laser Heterodyne Interferometers
NASA Technical Reports Server (NTRS)
Ryan, Daniel; Abramovici, Alexander; Zhao, Feng; Dekens, Frank; An, Xin; Azizi, Alireza; Chapsky, Jacob; Halverson, Peter
2010-01-01
An improved method and apparatus have been devised for measuring cyclic errors in the readouts of laser heterodyne interferometers that are configured and operated as displacement gauges. The cyclic errors arise as a consequence of mixing of spurious optical and electrical signals in beam launchers that are subsystems of such interferometers. The conventional approach to measurement of cyclic error involves phase measurements and yields values precise to within about 10 pm over air optical paths at laser wavelengths in the visible and near infrared. The present approach, which involves amplitude measurements instead of phase measurements, yields values precise to about 0.1 pm, about 100 times the precision of the conventional approach. In a displacement gauge of the type of interest here, the laser heterodyne interferometer is used to measure any change in distance along an optical axis between two corner-cube retroreflectors. One of the corner-cube retroreflectors is mounted on a piezoelectric transducer (see figure), which is used to introduce a low-frequency periodic displacement that can be measured by the gauges. The transducer is excited at a frequency of 9 Hz by a triangular waveform to generate a 9-Hz triangular-wave displacement having an amplitude of 25 microns. The displacement gives rise to both amplitude and phase modulation of the heterodyne signals in the gauges. The modulation includes cyclic error components, and the magnitude of the cyclic-error component of the phase modulation is what one needs to measure in order to determine the magnitude of the cyclic displacement error. The precision attainable in the conventional (phase measurement) approach to measuring cyclic error is limited because the phase measurements are af-
Gear Transmission Error Measurement System Made Operational
NASA Technical Reports Server (NTRS)
Oswald, Fred B.
2002-01-01
A system directly measuring the transmission error between the meshing spur or helical gears was installed at the NASA Glenn Research Center and made operational in August 2001. This system employs light beams directed by lenses and prisms through gratings mounted on the two gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. The device is capable of resolution better than 0.1 micron (one thousandth the thickness of a human hair). The measured transmission error can be displayed in a "map" that shows how the transmission error varies with the gear rotation, or it can be converted to spectra to show the components at the meshing frequencies. Accurate transmission error data will help researchers better understand the mechanisms that cause gear noise and vibration. The Design Unit at the University of Newcastle in England specifically designed the new system for NASA. It is the only device in the United States that can measure dynamic transmission error at high rotational speeds. The new system will be used to develop new techniques to reduce dynamic transmission error along with the resulting noise and vibration of aeronautical transmissions.
Honing in on the social phenotype in Williams syndrome using multiple measures and multiple raters.
Klein-Tasman, Bonita P; Li-Barber, Kirsten T; Magargee, Erin T
2011-03-01
The behavioral phenotype of Williams syndrome (WS) is characterized by difficulties with establishment and maintenance of friendships despite high levels of interest in social interaction. Here, parents and teachers rated 84 children with WS ages 4-16 years using two commonly-used measures assessing aspects of social functioning: the Social Skills Rating System and the Social Responsiveness Scale. Mean prosocial functioning fell in the low average to average range, whereas social reciprocity was perceived to be an area of significant difficulty for many children. Concordance between parent and teacher ratings was high. Patterns of social functioning are discussed. Findings highlight the importance of parsing the construct of social skills to gain a nuanced understanding of the social phenotype in WS.
Reducing Measurement Error in Student Achievement Estimation
ERIC Educational Resources Information Center
Battauz, Michela; Bellio, Ruggero; Gori, Enrico
2008-01-01
The achievement level is a variable measured with error, that can be estimated by means of the Rasch model. Teacher grades also measure the achievement level but they are expressed on a different scale. This paper proposes a method for combining these two scores to obtain a synthetic measure of the achievement level based on the theory developed…
Cooperstein, Robert; Young, Morgan; Haneline, Michael
2013-01-01
Introduction: Motion palpators usually rate the movement of each spinal level palpated, and their reliability is assessed based upon discrete paired observations. We hypothesized that asking motion palpators to identify the most fixated cervical spinal level to allow calculating reliability at the group level might be a useful alternative approach. Methods: Three examiners palpated 29 asymptomatic supine participants for cervical joint hypomobility. The location of identified hypomobile sites was based on their distance from the T1 spinous process. Interexaminer concordance was estimated by calculating Intraclass Correlation Coefficient (ICC) and mean absolute differences (MAD) values, stratified by degree of examiner confidence. Results: For the entire participant pool, ICC [2,1] = 0.61, judged “good.” MAD=1.35 cm, corresponding to mean interexaminer differences of about 75% of one cervical vertebral level. Stratification by examiner confidence levels resulted in small subgroups with equivocal results. Discussion and Conclusion: A continuous measures study methodology for assessing cervical motion palpation reliability showed more examiner concordance than was usually the case in previous studies using discrete methodology. PMID:23754861
Measurement error analysis of taxi meter
NASA Astrophysics Data System (ADS)
He, Hong; Li, Dan; Li, Hang; Zhang, Da-Jian; Hou, Ming-Feng; Zhang, Shi-pu
2011-12-01
Taximeter error testing covers two aspects: (1) the timing error of the taximeter and (2) the distance error in use. The paper first presents the working principle of the taximeter and of the error-verification device. Based on JJG 517-2009, "Taximeter Verification Regulation", it analyzes the instrument error and test error of the taximeter and discusses detection methods for both time error and distance error. Type A standard uncertainty components are evaluated from repeated measurements under identical conditions, while Type B components are evaluated under varied conditions. Comparison and analysis of the results show that the meter conforms to JJG 517-2009, improving accuracy and efficiency. In practice, this not only compensates for limited accuracy but also helps ensure fair transactions between drivers and passengers.
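The Type A evaluation mentioned above is the standard statistical one: the standard uncertainty of the mean of repeated readings is the sample standard deviation divided by √n (per the GUM). A sketch with invented distance readings:

```python
import math
import statistics

# hypothetical repeated distance readings from one test run, in metres
readings_m = [1002.1, 998.7, 1001.4, 999.9, 1000.6, 1000.2]

mean = statistics.mean(readings_m)
s = statistics.stdev(readings_m)          # sample standard deviation of the readings
u_a = s / math.sqrt(len(readings_m))      # Type A standard uncertainty of the mean

print(round(mean, 2), round(s, 2), round(u_a, 2))
```

Type B components (instrument resolution, reference-standard uncertainty, and the like) would then be combined with u_a in quadrature to form the combined standard uncertainty.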
Technical approaches for measurement of human errors
NASA Technical Reports Server (NTRS)
Clement, W. F.; Heffley, R. K.; Jewell, W. F.; Mcruer, D. T.
1980-01-01
Human error is a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents. The technical details of a variety of proven approaches for the measurement of human errors in the context of the national airspace system are presented. Unobtrusive measurements suitable for cockpit operations and procedures in part of full mission simulation are emphasized. Procedure, system performance, and human operator centered measurements are discussed as they apply to the manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations.
Neutron multiplication error in TRU waste measurements
Veilleux, John; Stanfield, Sean B; Wachter, Joe; Ceo, Bob
2009-01-01
Total Measurement Uncertainty (TMU) in neutron assays of transuranic (TRU) waste comprises several components including counting statistics, matrix and source distribution, calibration inaccuracy, background effects, and neutron multiplication error. While a minor component for low plutonium masses, neutron multiplication error is often the major contributor to the TMU for items containing more than 140 g of weapons grade plutonium. Neutron multiplication arises when neutrons from spontaneous fission and other nuclear events induce fissions in other fissile isotopes in the waste, thereby multiplying the overall coincidence neutron response in passive neutron measurements. Since passive neutron counters cannot differentiate between spontaneous and induced fission neutrons, multiplication can lead to positive bias in the measurements. Although neutron multiplication can only result in a positive bias, it has, for the purpose of mathematical simplicity, generally been treated as an error that can lead to either a positive or negative result in the TMU. While the factors that contribute to neutron multiplication include the total mass of fissile nuclides, the presence of moderating material in the matrix, the concentration and geometry of the fissile sources, and other factors; measurement uncertainty is generally determined as a function of the fissile mass in most TMU software calculations because this is the only quantity determined by the passive neutron measurement. Neutron multiplication error has a particularly pernicious consequence for TRU waste analysis because the measured Fissile Gram Equivalent (FGE) plus twice the TMU error must be less than 200 for TRU waste packaged in 55-gal drums and less than 325 for boxed waste. For this reason, large errors due to neutron multiplication can lead to increased rejections of TRU waste containers. This report will attempt to better define the error term due to neutron multiplication and arrive at values that are
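The acceptance criterion quoted above is a simple inequality, which makes the practical cost of a large TMU easy to see. A sketch (the function name is ours; the limits are the 200 g drum and 325 g box values stated in the abstract):

```python
def tru_waste_accepts(fge_g, tmu_g, limit_g=200.0):
    """Acceptance check described in the abstract: measured Fissile Gram
    Equivalent plus twice the TMU must be below the container limit
    (200 g for 55-gal drums, 325 g for boxed waste)."""
    return fge_g + 2.0 * tmu_g < limit_g

print(tru_waste_accepts(150.0, 20.0))              # 150 + 40 = 190 < 200
print(tru_waste_accepts(150.0, 30.0))              # 150 + 60 = 210, rejected
print(tru_waste_accepts(150.0, 30.0, limit_g=325)) # same item passes as boxed waste
```

The second case shows how inflating the TMU by a conservative neutron-multiplication error term can reject a drum whose measured FGE is itself well under the limit.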
Conditional Density Estimation in Measurement Error Problems.
Wang, Xiao-Feng; Ye, Deping
2015-01-01
This paper is motivated by a wide range of background correction problems in gene array data analysis, where the raw gene expression intensities are measured with error. Estimating a conditional density function from the contaminated expression data is a key aspect of statistical inference and visualization in these studies. We propose re-weighted deconvolution kernel methods to estimate the conditional density function in an additive error model, when the error distribution is known as well as when it is unknown. Theoretical properties of the proposed estimators are investigated with respect to the mean absolute error from a "double asymptotic" view. Practical rules are developed for the selection of smoothing-parameters. Simulated examples and an application to an Illumina bead microarray study are presented to illustrate the viability of the methods. PMID:25284902
Measurement error in human dental mensuration.
Kieser, J A; Groeneveld, H T; McKee, J; Cameron, N
1990-01-01
The reliability of human odontometric data was evaluated in a sample of 60 teeth. Three observers, using their own instruments and the same definition of the mesiodistal and buccolingual dimensions, were asked to repeat their measurements after 2 months. Precision, or repeatability, was analysed by means of Pearsonian correlation coefficients and mean absolute error values. Accuracy, or the absence of bias, was evaluated by means of Bland-Altman procedures and attendant Student t-tests, and also by an ANOVA procedure. The present investigation suggests that odontometric data have a high interobserver error component. Mesiodistal dimensions show greater imprecision and bias than buccolingual measurements. The results of the ANOVA suggest that bias is the result of interobserver error and is not due to the time between repeated measurements.
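The precision and bias measures named here are standard. A minimal sketch of the Bland-Altman computation and the mean absolute error, with invented paired crown measurements (mm) from two observers:

```python
import statistics

obs1 = [8.1, 7.4, 9.0, 6.8, 7.9, 8.5]   # hypothetical observer-1 readings (mm)
obs2 = [8.4, 7.1, 9.3, 7.0, 8.3, 8.4]   # hypothetical observer-2 readings (mm)

diffs = [a - b for a, b in zip(obs1, obs2)]
bias = statistics.mean(diffs)                  # systematic observer difference
sd = statistics.stdev(diffs)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)     # 95% limits of agreement
mae = statistics.mean(abs(d) for d in diffs)   # mean absolute error (precision)

print(round(bias, 3), [round(v, 3) for v in loa], round(mae, 3))
```

Bias captures accuracy (a nonzero mean difference means one observer reads systematically high), while the limits of agreement and the mean absolute error capture precision; the study's point is that the two can diverge.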
Measurement System Characterization in the Presence of Measurement Errors
NASA Technical Reports Server (NTRS)
Commo, Sean A.
2012-01-01
In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.
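The full definition of the paper's modified least squares is not given in the abstract, but Deming regression is a standard errors-in-variables method built on exactly the same quantity, the ratio of response-error variance to measurement-error variance. A related sketch (the data are invented; this is Deming regression, not the paper's estimator):

```python
import math

def deming(x, y, lam=1.0):
    """Deming regression: straight-line fit when both variables carry error,
    with lam = Var(y-error) / Var(x-error). Returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((v - mx) ** 2 for v in x) / (n - 1)
    syy = sum((v - my) ** 2 for v in y) / (n - 1)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    slope = (syy - lam * sxx
             + math.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx

x = [1.0, 2.1, 2.9, 4.2, 5.0]   # factor levels, measured with error
y = [2.1, 4.0, 6.2, 8.1, 9.9]   # responses, roughly y = 2x
print(deming(x, y, lam=1.0))
```

Unlike ordinary least squares, which attenuates the slope when the factor is noisy, the fit uses the variance ratio to apportion error between the two axes, which is the gap the abstract says modified least squares addresses.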
Multiple Indicators, Multiple Causes Measurement Error Models
Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.; Carroll, Raymond J.
2014-01-01
Multiple Indicators, Multiple Causes Models (MIMIC) are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times however when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this paper are: (1) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model, (2) to develop likelihood based estimation methods for the MIMIC ME model, (3) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. As a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure. PMID:24962535
Algorithmic Error Correction of Impedance Measuring Sensors
Starostenko, Oleg; Alarcon-Aquino, Vicente; Hernandez, Wilmar; Sergiyenko, Oleg; Tyrsa, Vira
2009-01-01
This paper describes novel design concepts and some advanced techniques proposed for increasing the accuracy of low-cost impedance measuring devices without reduction of operational speed. The proposed structural method for algorithmic error correction and the iterative correction method provide linearization of the transfer functions of the measuring sensor and the signal-conditioning converter, which contribute the principal additive and relative measurement errors. Several measuring systems have been implemented in order to estimate the performance of the proposed methods in practice. In particular, a measuring system for analysis of C-V and G-V characteristics has been designed and constructed, and tested during technological process control of charge-coupled device (CCD) manufacturing. The results are discussed in order to define a reasonable range of application of the methods, their utility, and their performance. PMID:22303177
New Gear Transmission Error Measurement System Designed
NASA Technical Reports Server (NTRS)
Oswald, Fred B.
2001-01-01
The prime source of vibration and noise in a gear system is the transmission error between the meshing gears. Transmission error is caused by manufacturing inaccuracy, mounting errors, and elastic deflections under load. Gear designers often attempt to compensate for transmission error by modifying gear teeth. This is done traditionally by a rough "rule of thumb" or more recently under the guidance of an analytical code. In order for a designer to have confidence in a code, the code must be validated through experiment. NASA Glenn Research Center contracted with the Design Unit of the University of Newcastle in England for a system to measure the transmission error of spur and helical test gears in the NASA Gear Noise Rig. The new system measures transmission error optically by means of light beams directed by lenses and prisms through gratings mounted on the gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. A photodetector circuit converts the light to an analog electrical signal. To increase accuracy and reduce "noise" due to transverse vibration, there are parallel light paths at the top and bottom of the gears. The two signals are subtracted via differential amplifiers in the electronics package. The output of the system is 40 mV/µm, giving a resolution in the time domain of better than 0.1 µm, and discrimination in the frequency domain of better than 0.01 µm. The new system will be used to validate gear analytical codes and to investigate mechanisms that produce vibration and noise in parallel axis gears.
Improving Localization Accuracy: Successive Measurements Error Modeling
Abu Ali, Najah; Abu-Elkheir, Mervat
2015-01-01
Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself, and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling of positioning error. We use the Yule-Walker equations to determine the degree of correlation between a vehicle's future position and its past positions, and then propose a p-order Gauss–Markov model to predict the future position of a vehicle from its past p positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can have a value up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss–Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle's future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter. PMID:26140345
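The Yule-Walker step described in the abstract above can be sketched in a few lines: estimate AR(p) coefficients from sample autocovariances, then predict the next position from the past p positions. The AR(2) coefficients (0.6, 0.3) and the synthetic one-dimensional trace are illustrative assumptions, not the paper's vehicle data:

```python
import numpy as np

def yule_walker(x, p):
    """Estimate AR(p) coefficients by solving the Yule-Walker equations
    built from the sample autocovariances of x."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    r = np.array([x[:n - k] @ x[k:] / n for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:])

# Synthetic "position" trace following a stationary AR(2) process.
rng = np.random.default_rng(1)
n = 5000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.6 * x[t - 1] + 0.3 * x[t - 2] + rng.normal(0, 1.0)

phi = yule_walker(x, 2)
# One-step-ahead prediction of the next position from the past p positions.
pred = phi[0] * x[-1] + phi[1] * x[-2]
```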
Body shape preferences: associations with rater body shape and sociosexuality.
Price, Michael E; Pound, Nicholas; Dunn, James; Hopkins, Sian; Kang, Jinsheng
2013-01-01
There is accumulating evidence of condition-dependent mate choice in many species, that is, individual preferences varying in strength according to the condition of the chooser. In humans, for example, people with more attractive faces/bodies, and who are higher in sociosexuality, exhibit stronger preferences for attractive traits in opposite-sex faces/bodies. However, previous studies have tended to use only relatively simple, isolated measures of rater attractiveness. Here we use 3D body scanning technology to examine associations between strength of rater preferences for attractive traits in opposite-sex bodies, and raters' body shape, self-perceived attractiveness, and sociosexuality. For 118 raters and 80 stimuli models, we used a 3D scanner to extract body measurements associated with attractiveness (male waist-chest ratio [WCR], female waist-hip ratio [WHR], and volume-height index [VHI] in both sexes) and also measured rater self-perceived attractiveness and sociosexuality. As expected, WHR and VHI were important predictors of female body attractiveness, while WCR and VHI were important predictors of male body attractiveness. Results indicated that male rater sociosexuality scores were positively associated with strength of preference for attractive (low) VHI and attractive (low) WHR in female bodies. Moreover, male rater self-perceived attractiveness was positively associated with strength of preference for low VHI in female bodies. The only evidence of condition-dependent preferences in females was a positive association between attractive VHI in female raters and preferences for attractive (low) WCR in male bodies. No other significant associations were observed in either sex between aspects of rater body shape and strength of preferences for attractive opposite-sex body traits. These results suggest that among male raters, rater self-perceived attractiveness and sociosexuality are important predictors of preference strength for attractive opposite
Reconsideration of measurement of error in human motor learning.
Crabtree, D A; Antrim, L R
1988-10-01
Human motor learning is often measured by error scores. The convention of using mean absolute error, mean constant error, and variable error shows lack of desirable parsimony and interpretability. This paper provides the background of error measurement and states criticisms of conventional methodology. A parsimonious model of error analysis is provided, along with operationalized interpretations and implications for motor learning. Teaching, interpreting, and using error scores in research may be simplified and facilitated with the model.
Errors Associated With Measurements from Imaging Probes
NASA Astrophysics Data System (ADS)
Heymsfield, A.; Bansemer, A.
2015-12-01
Imaging probes, collecting data on particles from about 20 or 50 microns to several centimeters, are the probes that have been collecting data on droplet and ice microphysics for more than 40 years. During that period, a number of problems associated with the measurements have been identified, including questions about the depth of field of particles within the probes' sample volume and ice shattering, among others. Many different software packages have been developed to process and interpret the data, leading to differences in the particle size distributions and estimates of the extinction, ice water content, and radar reflectivity obtained from the same data. Given the numerous complications associated with imaging probe data, we have developed an optical array probe simulation package to explore the errors that can be expected with actual data. We simulate full particle size distributions with known properties, and then process the data with the same software that is used to process real-life data. We show that there are significant errors in the retrieved particle size distributions as well as derived parameters such as liquid/ice water content and total number concentration. Furthermore, the nature of these errors changes as a function of the shape of the simulated size distribution and the physical and electronic characteristics of the instrument. We will introduce some methods to improve the retrieval of particle size distributions from real-life data.
Laser measurement and analysis of reposition error in polishing systems
NASA Astrophysics Data System (ADS)
Liu, Weisen; Wang, Junhua; Xu, Min; He, Xiaoying
2015-10-01
In this paper, a robotic reposition error measurement method based on laser interference remote positioning is presented, the geometric error of a robot-based polishing system is analyzed, and a mathematical model of the tilt error is given. Studies show that errors of less than 1 mm are mainly caused by the tilt error at small incident angles. Marking the spot position with an interference fringe greatly enhances the error measurement precision; the measurement precision of the tilt error can reach 5 µm. Measurement results show that the reposition error of the polishing system stems mainly from the tilt error caused by motor A, and repositioning precision is greatly increased after improvement of the polishing system. The measurement method has important applications in practical error measurement, with low cost and simple operation.
A Comparison of Assessment Methods and Raters in Product Creativity
ERIC Educational Resources Information Center
Lu, Chia-Chen; Luh, Ding-Bang
2012-01-01
Although previous studies have attempted to use different experiences of raters to rate product creativity by adopting the Consensus Assessment Method (CAT) approach, the validity of replacing CAT with another measurement tool has not been adequately tested. This study aimed to compare raters with different levels of experience (expert vs.…
Kreiter, Clarence D.; Wilson, Adam B.; Humbert, Aloysius J.; Wade, Patricia A.
2016-01-01
Background: When ratings of student performance within the clerkship consist of a variable number of ratings per clinical teacher (rater), an important measurement question arises regarding how to combine such ratings to accurately summarize performance. As previous G studies have not estimated the independent influence of occasion and rater facets in observational ratings within the clinic, this study was designed to provide estimates of these two sources of error. Method: During 2 years of an emergency medicine clerkship at a large midwestern university, 592 students were evaluated an average of 15.9 times. Ratings were performed at the end of clinical shifts, and students often received multiple ratings from the same rater. A completely nested G study model (occasion: rater: person) was used to analyze sampled rating data. Results: The variance component (VC) related to occasion was small relative to the VC associated with rater. The D study clearly demonstrates that having a preceptor rate a student on multiple occasions does not substantially enhance the reliability of a clerkship performance summary score. Conclusions: Although further research is needed, it is clear that case-specific factors do not explain the low correlation between ratings and that having one or two raters repeatedly rate a student on different occasions/cases is unlikely to yield a reliable mean score. This research suggests that it may be more efficient to have a preceptor rate a student just once. However, when multiple ratings from a single preceptor are available for a student, it is recommended that a mean of the preceptor's ratings be used to calculate the student's overall mean performance score. PMID:26925540
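The D-study logic in the abstract above can be illustrated numerically. This is a hedged sketch of a generalizability coefficient for a nested design; the variance components below are invented for illustration (raters dominating, occasions contributing little), not the study's estimates:

```python
def d_study_reliability(var_p, var_r, var_o, n_raters, n_occasions):
    """D-study reliability of a mean score in a nested (occasion: rater: person)
    design: person variance over person variance plus averaged error variances."""
    error = var_r / n_raters + var_o / (n_raters * n_occasions)
    return var_p / (var_p + error)

# Illustrative variance components: person 0.30, rater 0.50, occasion 0.05.
one_rater_many_shifts = d_study_reliability(0.30, 0.50, 0.05, n_raters=1, n_occasions=10)
many_raters_one_shift = d_study_reliability(0.30, 0.50, 0.05, n_raters=10, n_occasions=1)
```

Under these assumed components, adding occasions for a single rater barely moves reliability, while adding raters raises it sharply, which mirrors the study's conclusion.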
Measuring Systematic Error with Curve Fits
ERIC Educational Resources Information Center
Rupright, Mark E.
2011-01-01
Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…
Rater Effects: Ego Engagement in Rater Decision-Making
ERIC Educational Resources Information Center
Wiseman, Cynthia S.
2012-01-01
The decision-making behaviors of 8 raters when scoring 39 persuasive and 39 narrative essays written by second language learners were examined, first using Rasch analysis and then, through think aloud protocols. Results based on Rasch analysis and think aloud protocols recorded by raters as they were scoring holistically and analytically suggested…
ERIC Educational Resources Information Center
Gyagenda, Ismail S.; Engelhard, George, Jr.
The purpose of this study was to describe the Rasch model for measurement and apply the model to examine the relationship between raters, domains of written compositions, and student writing ability. Twenty raters were randomly selected from a group of 87 operational raters contracted to rate essays as part of the 1993 field test of the Georgia…
Rapid mapping of volumetric machine errors using distance measurements
Krulewich, D.A.
1998-04-01
This paper describes a relatively inexpensive, fast, and easy-to-execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between volumetric error and the current state of the machine; (2) acquiring error data based on distance measurements throughout the work volume; and (3) fitting the error model using the nonlinear equation for the distance. The error model is formulated from the kinematic relationship among the six degrees of freedom of error on each moving axis. Expressing each parametric error as a function of position, these are combined to predict the error between the functional point and workpiece, also as a function of position. A series of distances between several fixed base locations and various functional points in the work volume is measured using a Laser Ball Bar (LBB). Each measured distance is a nonlinear function dependent on the commanded location of the machine, the machine error, and the location of the base locations. Using the error model, the nonlinear equation is solved, producing a fit for the error model. Also note that, given approximate distances between each pair of base locations, the exact base locations in the machine coordinate system are determined during the nonlinear fitting procedure. Furthermore, with the use of more than three base locations, bias error in the measuring instrument can be removed. The volumetric errors of a three-axis commercial machining center have been mapped using this procedure. In this study, only errors associated with the nominal position of the machine were considered. Other errors, such as thermally induced and load-induced errors, were not considered, although the mathematical model has the ability to account for them. Due to the proprietary nature of the projects we are
Measuring Local Gradient and Skew Quadrupole Errors in RHIC IRs.
Cardona, J.; Peggs, S.; Pilat, R.; Ptitsyn, V.
2004-07-05
The measurement of local linear errors at RHIC interaction regions using an ''action and phase'' analysis of difference orbits has already been presented. This paper evaluates the accuracy of this technique using difference orbits that were taken when known gradient errors and skew quadrupole errors were intentionally introduced. It also presents action and phase analysis of simulated orbits when controlled errors are intentionally placed in a RHIC simulation model.
Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?
NASA Technical Reports Server (NTRS)
Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan
2013-01-01
The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
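The first criterion above, cleaner separation of systematic and random errors, can be illustrated with a toy simulation: when the error is truly multiplicative, additive residuals grow with intensity while log-space residuals stay homoscedastic. The gamma-distributed "rain" amounts and lognormal error below are assumptions for illustration, not the satellite data:

```python
import numpy as np

rng = np.random.default_rng(2)
truth = rng.gamma(shape=2.0, scale=5.0, size=2000)       # synthetic daily rain
measured = truth * np.exp(rng.normal(-0.1, 0.3, 2000))   # multiplicative error

add_resid = measured - truth                    # residuals under additive model
mult_resid = np.log(measured) - np.log(truth)   # residuals under multiplicative model

# Compare residual spread in the top vs. bottom half of rain intensity.
hi = truth >= np.median(truth)
ratio_add = np.std(add_resid[hi]) / np.std(add_resid[~hi])
ratio_mult = np.std(mult_resid[hi]) / np.std(mult_resid[~hi])
```

A ratio far above 1 for the additive residuals is the nonconstant-variance weakness the letter describes; the multiplicative (log-space) ratio stays near 1.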
ERIC Educational Resources Information Center
Srsen, Katja Groleger; Vidmar, Gaj; Pikl, Masa; Vrecar, Irena; Burja, Cirila; Krusec, Klavdija
2012-01-01
The Halliwick concept is widely used in different settings to promote joyful movement in water and swimming. To assess the swimming skills and progression of an individual swimmer, a valid and reliable measure should be used. The Halliwick-concept-based Swimming with Independent Measure (SWIM) was introduced for this purpose. We aimed to determine…
Thinking Scientifically: Understanding Measurement and Errors
ERIC Educational Resources Information Center
Alagumalai, Sivakumar
2015-01-01
Thinking scientifically consists of systematic observation, experiment, measurement, and the testing and modification of research questions. In effect, science is about measurement and the understanding of causation. Measurement is an integral part of science and engineering, and has pertinent implications for the human sciences. No measurement is…
Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint
Stynes, J. K.; Ihas, B.
2012-04-01
The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.
Pressure Change Measurement Leak Testing Errors
Pryor, Jeff M; Walker, William C
2014-01-01
A pressure change test is a common leak testing method used in construction and Non-Destructive Examination (NDE). The test is known as a fast, simple, and easy-to-apply evaluation method. While this method may be fairly quick to conduct and require simple instrumentation, the engineering behind this type of test is more complex than is apparent on the surface. This paper intends to discuss some of the more common errors made during the application of a pressure change test and give the test engineer insight into how to correctly compensate for these factors. The principles discussed here apply to ideal gases such as air or other monoatomic or diatomic gases; however, these same principles can be applied to polyatomic gases or liquid flow rate with altered formulas specific to those types of tests using the same methodology.
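One of the most common compensations in a pressure change test is correcting the final pressure reading for temperature drift under the ideal gas law, so that only the pressure change due to leakage remains. A minimal sketch under the ideal-gas assumption; the pressures and temperatures are illustrative numbers, not the paper's data:

```python
def temperature_corrected_pressure_drop(p1, t1, p2, t2):
    """Pressure drop attributable to leakage alone, assuming ideal-gas behavior
    in a fixed volume: normalize the final pressure to the initial temperature.
    Pressures in consistent absolute units; temperatures in kelvin."""
    return p1 - p2 * (t1 / t2)  # positive result => gas was lost

# Pressure fell from 500.0 to 498.0 kPa while the gas warmed from
# 293.15 K to 294.15 K; the warming masks part of the true leak.
drop = temperature_corrected_pressure_drop(500.0, 293.15, 498.0, 294.15)
```

Here the raw reading suggests a 2.0 kPa loss, but the corrected drop is larger, because the temperature rise partially offset the leak.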
The impact of covariate measurement error on risk prediction.
Khudyakov, Polyna; Gorfine, Malka; Zucker, David; Spiegelman, Donna
2015-07-10
In the development of risk prediction models, predictors are often measured with error. In this paper, we investigate the impact of covariate measurement error on risk prediction. We compare the prediction performance using a costly variable measured without error, along with error-free covariates, to that of a model based on an inexpensive surrogate along with the error-free covariates. We consider continuous error-prone covariates with homoscedastic and heteroscedastic errors, and also a discrete misclassified covariate. Prediction performance is evaluated by the area under the receiver operating characteristic curve (AUC), the Brier score (BS), and the ratio of the observed to the expected number of events (calibration). In an extensive numerical study, we show that (i) the prediction model with the error-prone covariate is very well calibrated, even when it is mis-specified; (ii) using the error-prone covariate instead of the true covariate can reduce the AUC and increase the BS dramatically; (iii) adding an auxiliary variable, which is correlated with the error-prone covariate but conditionally independent of the outcome given all covariates in the true model, can improve the AUC and BS substantially. We conclude that reducing measurement error in covariates will improve the ensuing risk prediction, unless the association between the error-free and error-prone covariates is very high. Finally, we demonstrate how a validation study can be used to assess the effect of mismeasured covariates on risk prediction. These concepts are illustrated in a breast cancer risk prediction model developed in the Nurses' Health Study. PMID:25865315
Using neural nets to measure ocular refractive errors: a proposal
NASA Astrophysics Data System (ADS)
Netto, Antonio V.; Ferreira de Oliveira, Maria C.
2002-12-01
We propose the development of a functional system for diagnosing and measuring ocular refractive errors in the human eye (astigmatism, hypermetropia and myopia) by automatically analyzing images of the human ocular globe acquired with the Hartmann-Shack (HS) technique. HS images are to be input into a system capable of recognizing the presence of a refractive error and outputting a measure of such an error. The system should pre-process an image supplied by the acquisition technique and then use artificial neural networks combined with fuzzy logic to extract the necessary information and output an automated diagnosis of the refractive errors that may be present in the ocular globe under exam.
Does a Rater's Professional Background Influence Communication Skills Assessment?
Artemiou, Elpida; Hecker, Kent G; Adams, Cindy L; Coe, Jason B
2015-01-01
There is increasing pressure in veterinary education to teach and assess communication skills, with the Objective Structured Clinical Examination (OSCE) being the most common assessment method. Previous research reveals that raters are a large source of variance in OSCEs. This study focused on examining the effect of raters' professional background as a source of variance when assessing students' communication skills. Twenty-three raters were categorized according to their professional background: clinical sciences (n=11), basic sciences (n=4), clinical communication (n=5), or hospital administrator/clinical skills technicians (n=3). Raters from each professional background were assigned to the same station and assessed the same students during two four-station OSCEs. Students were in year 2 of their pre-clinical program. Repeated-measures ANOVA results showed that OSCE scores awarded by the rater groups differed significantly: matched station 1, F(2, 91) = 6.97, p = .002; matched station 2, F(3, 90) = 13.95, p = .001; matched station 3, F(3, 90) = 8.76, p = .001; and matched station 4, F(2, 91) = 30.60, p = .001. A significant time effect between the two OSCEs was calculated for matched stations 1, 2, and 4, indicating improved student performances. Raters with a clinical communication skills background assigned scores that were significantly lower compared to the other rater groups. Analysis of written feedback provided by the clinical sciences raters showed that they were influenced by the students' clinical knowledge of the case and that they did not rely solely on the communication checklist items. This study shows that it is important to consider rater background both in recruitment and training programs for communication skills assessment.
System Measures Errors Between Time-Code Signals
NASA Technical Reports Server (NTRS)
Cree, David; Venkatesh, C. N.
1993-01-01
System measures timing errors between signals produced by three asynchronous time-code generators. Errors between 1-second clock pulses resolved to 2 microseconds. Basic principle of computation of timing errors as follows: central processing unit in microcontroller constantly monitors time data received from time-code generators for changes in 1-second time-code intervals. In response to any such change, microprocessor buffers count of 16-bit internal timer.
Unit of Measurement Used and Parent Medication Dosing Errors
Dreyer, Benard P.; Ugboaja, Donna C.; Sanchez, Dayana C.; Paul, Ian M.; Moreira, Hannah A.; Rodriguez, Luis; Mendelsohn, Alan L.
2014-01-01
BACKGROUND AND OBJECTIVES: Adopting the milliliter as the preferred unit of measurement has been suggested as a strategy to improve the clarity of medication instructions; teaspoon and tablespoon units may inadvertently endorse nonstandard kitchen spoon use. We examined the association between unit used and parent medication errors and whether nonstandard instruments mediate this relationship. METHODS: Cross-sectional analysis of baseline data from a larger study of provider communication and medication errors. English- or Spanish-speaking parents (n = 287) whose children were prescribed liquid medications in 2 emergency departments were enrolled. Medication error defined as: error in knowledge of prescribed dose, error in observed dose measurement (compared to intended or prescribed dose); >20% deviation threshold for error. Multiple logistic regression performed adjusting for parent age, language, country, race/ethnicity, socioeconomic status, education, health literacy (Short Test of Functional Health Literacy in Adults); child age, chronic disease; site. RESULTS: Medication errors were common: 39.4% of parents made an error in measurement of the intended dose, 41.1% made an error in the prescribed dose. Furthermore, 16.7% used a nonstandard instrument. Compared with parents who used milliliter-only, parents who used teaspoon or tablespoon units had twice the odds of making an error with the intended (42.5% vs 27.6%, P = .02; adjusted odds ratio=2.3; 95% confidence interval, 1.2–4.4) and prescribed (45.1% vs 31.4%, P = .04; adjusted odds ratio=1.9; 95% confidence interval, 1.03–3.5) dose; associations greater for parents with low health literacy and non–English speakers. Nonstandard instrument use partially mediated teaspoon and tablespoon–associated measurement errors. CONCLUSIONS: Findings support a milliliter-only standard to reduce medication errors. PMID:25022742
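The error definition used in the study above (deviation beyond a 20% threshold from the intended or prescribed dose) can be expressed directly. The function name and example doses are illustrative, not from the paper:

```python
def dose_error(measured_ml, prescribed_ml, threshold=0.20):
    """Flag a dosing error when the measured dose deviates from the
    prescribed dose by more than the threshold fraction (>20% in the study)."""
    return abs(measured_ml - prescribed_ml) / prescribed_ml > threshold

clear_error = dose_error(7.5, 5.0)   # 50% over the prescribed dose
within_band = dose_error(5.5, 5.0)   # only 10% over, within tolerance
```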
Incorporating measurement error in n = 1 psychological autoregressive modeling.
Schuurman, Noémi K; Houtveen, Jan H; Hamaker, Ellen L
2015-01-01
Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: an autoregressive model with a white noise term (AR+WN) and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models under both a Bayesian and a frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30-50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters.
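The attenuation described here is easy to reproduce. The sketch below (illustrative parameter values, not the paper's mood data) simulates a latent AR(1) process, adds white measurement noise, and compares the naive lag-1 autocorrelation estimates computed with and without that noise:

```python
import numpy as np

rng = np.random.default_rng(0)
n, phi = 5000, 0.6               # series length and true AR(1) parameter

# Latent AR(1) process with unit-variance innovations.
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# Observed series = latent process + white measurement noise (the AR+WN view).
y = x + rng.normal(size=n)

def ar1_estimate(series):
    """Naive AR(1) estimate: the lag-1 autocorrelation of the series."""
    s = series - series.mean()
    return float((s[1:] @ s[:-1]) / (s @ s))

phi_latent = ar1_estimate(x)     # close to the true 0.6
phi_observed = ar1_estimate(y)   # biased toward zero by measurement error
```

The observed-series estimate is shrunk roughly by the reliability factor var(x) / (var(x) + noise variance), which is the bias the AR+WN and ARMA models are designed to remove.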
Conditional Standard Errors of Measurement for Composite Scores Using IRT
ERIC Educational Resources Information Center
Kolen, Michael J.; Wang, Tianyou; Lee, Won-Chan
2012-01-01
Composite scores are often formed from test scores on educational achievement test batteries to provide a single index of achievement over two or more content areas or two or more item types on that test. Composite scores are subject to measurement error, and as with scores on individual tests, the amount of error variability typically depends on…
Non-Gaussian Error Distributions of LMC Distance Moduli Measurements
NASA Astrophysics Data System (ADS)
Crandall, Sara; Ratra, Bharat
2015-12-01
We construct error distributions for a compilation of 232 Large Magellanic Cloud (LMC) distance moduli values from de Grijs et al. that give an LMC distance modulus of (m - M)0 = 18.49 ± 0.13 mag (median and 1σ symmetrized error). Central estimates found from weighted mean and median statistics are used to construct the error distributions. The weighted mean error distribution is non-Gaussian—flatter and broader than Gaussian—with more (less) probability in the tails (center) than is predicted by a Gaussian distribution; this could be the consequence of unaccounted-for systematic uncertainties. The median statistics error distribution, which does not make use of the individual measurement errors, is also non-Gaussian—more peaked than Gaussian—with less (more) probability in the tails (center) than is predicted by a Gaussian distribution; this could be the consequence of publication bias and/or the non-independence of the measurements. We also construct the error distributions of 247 Small Magellanic Cloud (SMC) distance moduli values from de Grijs & Bono. We find a central estimate of (m - M)0 = 18.94 ± 0.14 mag (median and 1σ symmetrized error), and similar probabilities for the error distributions.
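The two central estimates used above are straightforward to compute. A minimal sketch with invented distance moduli (not the de Grijs et al. compilation):

```python
import numpy as np

# Illustrative distance moduli (mag) and quoted 1-sigma errors -- made-up
# values for demonstration, not the published compilation.
mu = np.array([18.45, 18.52, 18.49, 18.55, 18.40])
sig = np.array([0.05, 0.08, 0.04, 0.10, 0.06])

# Weighted mean central estimate and its standard error.
w = 1.0 / sig**2
mu_wm = float(np.sum(w * mu) / np.sum(w))
err_wm = float(1.0 / np.sqrt(np.sum(w)))

# Median statistics central estimate ignores the individual errors entirely.
mu_med = float(np.median(mu))

# Number of standard deviations each measurement lies from the central
# estimate -- the quantity whose distribution is compared against a Gaussian.
n_sigma = (mu - mu_wm) / sig
```

Tallying how often |n_sigma| exceeds 1, 2, 3, ... and comparing with Gaussian expectations is what reveals the flatter-than-Gaussian or more-peaked-than-Gaussian shapes reported in the abstract.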
Virtual Raters for Reproducible and Objective Assessments in Radiology.
Kleesiek, Jens; Petersen, Jens; Döring, Markus; Maier-Hein, Klaus; Köthe, Ullrich; Wick, Wolfgang; Hamprecht, Fred A; Bendszus, Martin; Biller, Armin
2016-04-27
Volumetric measurements in radiologic images are important for monitoring tumor growth and treatment response. To make these more reproducible and objective we introduce the concept of virtual raters (VRs). A virtual rater is obtained by combining knowledge of machine-learning algorithms trained with past annotations of multiple human raters with the instantaneous rating of one human expert. Thus, it is virtually guided by several experts. To evaluate the approach we perform experiments with multi-channel magnetic resonance imaging (MRI) data sets. In addition to gross tumor volume (GTV) we also investigate subcategories like edema, contrast-enhancing and non-enhancing tumor. The first data set consists of N = 71 longitudinal follow-up scans of 15 patients suffering from glioblastoma (GB). The second data set comprises N = 30 scans of low- and high-grade gliomas. For comparison we computed Pearson Correlation, Intra-class Correlation Coefficient (ICC) and Dice score. Virtual raters always lead to an improvement with respect to inter- and intra-rater agreement. Comparing the 2D Response Assessment in Neuro-Oncology (RANO) measurements to the volumetric measurements of the virtual raters results in a deviating rating in one-third of the cases. Hence, we believe that our approach will have an impact on the evaluation of clinical studies as well as on routine imaging diagnostics. PMID:27118379
Body Shape Preferences: Associations with Rater Body Shape and Sociosexuality
Price, Michael E.; Pound, Nicholas; Dunn, James; Hopkins, Sian; Kang, Jinsheng
2013-01-01
There is accumulating evidence of condition-dependent mate choice in many species, that is, individual preferences varying in strength according to the condition of the chooser. In humans, for example, people with more attractive faces/bodies, and who are higher in sociosexuality, exhibit stronger preferences for attractive traits in opposite-sex faces/bodies. However, previous studies have tended to use only relatively simple, isolated measures of rater attractiveness. Here we use 3D body scanning technology to examine associations between strength of rater preferences for attractive traits in opposite-sex bodies, and raters’ body shape, self-perceived attractiveness, and sociosexuality. For 118 raters and 80 stimuli models, we used a 3D scanner to extract body measurements associated with attractiveness (male waist-chest ratio [WCR], female waist-hip ratio [WHR], and volume-height index [VHI] in both sexes) and also measured rater self-perceived attractiveness and sociosexuality. As expected, WHR and VHI were important predictors of female body attractiveness, while WCR and VHI were important predictors of male body attractiveness. Results indicated that male rater sociosexuality scores were positively associated with strength of preference for attractive (low) VHI and attractive (low) WHR in female bodies. Moreover, male rater self-perceived attractiveness was positively associated with strength of preference for low VHI in female bodies. The only evidence of condition-dependent preferences in females was a positive association between attractive VHI in female raters and preferences for attractive (low) WCR in male bodies. No other significant associations were observed in either sex between aspects of rater body shape and strength of preferences for attractive opposite-sex body traits. These results suggest that among male raters, rater self-perceived attractiveness and sociosexuality are important predictors of preference strength for attractive opposite
Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes
ERIC Educational Resources Information Center
Zavorsky, Gerald S.
2010-01-01
Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…
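The repeatability statistic described here (2.77, i.e. 1.96·√2, times the within-subject standard deviation) can be computed directly from paired trials. A minimal sketch with invented measurements:

```python
import numpy as np

# Two repeated measurements per subject (rows = subjects); invented values.
trials = np.array([
    [5.1, 5.3],
    [4.8, 4.6],
    [6.0, 6.3],
    [5.5, 5.4],
])

# Within-subject SD: square root of the mean within-subject sample variance.
sw = float(np.sqrt(np.mean(np.var(trials, axis=1, ddof=1))))

# Repeatability = 2.77 * sw (2.77 = 1.96 * sqrt(2)): the absolute difference
# between two measurements on the same subject is expected to fall below
# this value for about 95% of subjects.
repeatability = 2.77 * sw
```

The factor √2 enters because a test-retest difference carries the measurement error of both trials.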
Error tolerance of topological codes with independent bit-flip and measurement errors
NASA Astrophysics Data System (ADS)
Andrist, Ruben S.; Katzgraber, Helmut G.; Bombin, H.; Martin-Delgado, M. A.
2016-07-01
Topological quantum error correction codes are currently among the most promising candidates for efficiently dealing with the decoherence effects inherently present in quantum devices. Numerically, their theoretical error threshold can be calculated by mapping the underlying quantum problem to a related classical statistical-mechanical spin system with quenched disorder. Here, we present results for the general fault-tolerant regime, where we consider both qubit and measurement errors. However, unlike in previous studies, here we vary the strength of the different error sources independently. Our results highlight peculiar differences between toric and color codes. This study complements previous results published in New J. Phys. 13, 083006 (2011), 10.1088/1367-2630/13/8/083006.
Measuring worst-case errors in a robot workcell
Simon, R.W.; Brost, R.C.; Kholwadwala, D.K.
1997-10-01
Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors.
Aerial measurement error with a dot planimeter: Some experimental estimates
NASA Technical Reports Server (NTRS)
Yuill, R. S.
1971-01-01
A shape analysis is presented which utilizes a computer to simulate a multiplicity of dot grids mathematically. Results indicate that measurement accuracy is determined entirely by the number of dots placed over the area to be measured, with the indices of shape being of little significance. Equations and graphs are provided from which the average expected error, and the maximum range of error, for various numbers of dot points can be read.
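The central finding, that dot count rather than shape drives accuracy, can be illustrated with a small simulation in the same spirit (a regular dot grid with a random offset measuring a disc; this is an illustrative reconstruction, not the paper's procedure):

```python
import numpy as np

rng = np.random.default_rng(1)

def dot_grid_area(n_dots, offset):
    """Estimate the area of a unit disc inside a 2x2 square using a regular,
    randomly offset dot grid; each dot represents an equal share of the square."""
    side = int(round(np.sqrt(n_dots)))
    xs = (np.arange(side) + offset[0]) / side * 2 - 1
    ys = (np.arange(side) + offset[1]) / side * 2 - 1
    gx, gy = np.meshgrid(xs, ys)
    inside = gx**2 + gy**2 <= 1.0
    return inside.mean() * 4.0            # square area = 4, true disc area = pi

def mean_abs_error(n_dots, trials=200):
    """Average absolute area error over many random grid placements."""
    errs = [abs(dot_grid_area(n_dots, rng.random(2)) - np.pi)
            for _ in range(trials)]
    return float(np.mean(errs))

coarse = mean_abs_error(10**2)   # ~100 dots
fine = mean_abs_error(40**2)     # ~1600 dots
```

Increasing the dot density shrinks the average error regardless of the target's shape, which is the relationship the paper's equations and graphs quantify.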
Methods to Assess Measurement Error in Questionnaires of Sedentary Behavior
Sampson, Joshua N; Matthews, Charles E; Freedman, Laurence; Carroll, Raymond J.; Kipnis, Victor
2015-01-01
Sedentary behavior has already been associated with mortality, cardiovascular disease, and cancer. Questionnaires are an affordable tool for measuring sedentary behavior in large epidemiological studies. Here, we introduce and evaluate two statistical methods for quantifying measurement error in questionnaires. Accurate estimates are needed for assessing questionnaire quality. The two methods would be applied to validation studies that measure a sedentary behavior by both questionnaire and accelerometer on multiple days. The first method fits a reduced model by assuming the accelerometer is without error, while the second method fits a more complete model that allows both measures to have error. Because accelerometers tend to be highly accurate, we show that ignoring the accelerometer's measurement error can result in more accurate estimates of measurement error in some scenarios. In this manuscript, we derive asymptotic approximations for the Mean-Squared Error of the estimated parameters from both methods, evaluate their dependence on study design and behavior characteristics, and offer an R package so investigators can make an informed choice between the two methods. We demonstrate the difference between the two methods in a recent validation study comparing Previous Day Recalls (PDR) to an accelerometer-based ActivPal. PMID:27340315
Error-tradeoff and error-disturbance relations for incompatible quantum measurements.
Branciard, Cyril
2013-04-23
Heisenberg's uncertainty principle is one of the main tenets of quantum theory. Nevertheless, and despite its fundamental importance for our understanding of quantum foundations, there has been some confusion in its interpretation: Although Heisenberg's first argument was that the measurement of one observable on a quantum state necessarily disturbs another incompatible observable, standard uncertainty relations typically bound the indeterminacy of the outcomes when either one or the other observable is measured. In this paper, we quantify precisely Heisenberg's intuition. Even if two incompatible observables cannot be measured together, one can still approximate their joint measurement, at the price of introducing some errors with respect to the ideal measurement of each of them. We present a tight relation characterizing the optimal tradeoff between the error on one observable vs. the error on the other. As a particular case, our approach allows us to characterize the disturbance of an observable induced by the approximate measurement of another one; we also derive a stronger error-disturbance relation for this scenario. PMID:23564344
Interval sampling methods and measurement error: a computer simulation.
Wirth, Oliver; Slaven, James; Taylor, Matthew A
2014-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments.
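A minimal sketch of the kind of simulation described, with invented parameters, makes the three methods' built-in biases visible. Momentary time sampling checks only the final moment of each interval, partial-interval recording scores any occurrence, and whole-interval recording requires the behavior to fill the interval:

```python
import numpy as np

rng = np.random.default_rng(2)

# One simulated observation period: per-second indicator of the target
# behavior, generated as on/off bouts (illustrative parameters).
seconds = 600
state, stream = 0, []
for _ in range(seconds):
    if rng.random() < 0.05:          # occasionally switch a bout on or off
        state = 1 - state
    stream.append(state)
stream = np.array(stream)
true_prevalence = stream.mean()       # ground truth the methods try to recover

interval = 10                         # 10-second observation intervals
chunks = stream.reshape(-1, interval)

momentary = chunks[:, -1].mean()      # sample only the last moment (unbiased)
partial = chunks.any(axis=1).mean()   # score if behavior occurs at all
whole = chunks.all(axis=1).mean()     # score only if behavior fills interval
```

By construction, partial-interval recording can only overestimate prevalence and whole-interval recording can only underestimate it; re-running the simulation many times, as the paper does, yields the error and error-variability tables.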
Errors Associated with the Direct Measurement of Radionuclides in Wounds
Hickman, D P
2006-03-02
Work in radiation areas can occasionally result in accidental wounds containing radioactive materials. When a wound is incurred within a radiological area, the presence of radioactivity in the wound needs to be confirmed to determine if additional remedial action needs to be taken. Commonly used radiation area monitoring equipment is poorly suited for measurement of radioactive material buried within the tissue of the wound. The Lawrence Livermore National Laboratory (LLNL) In Vivo Measurement Facility has constructed a portable wound counter that provides sufficient detection of radioactivity in wounds as shown in Fig. 1. The LLNL wound measurement system is specifically designed to measure low energy photons that are emitted from uranium and transuranium radionuclides. The portable wound counting system uses a 2.5cm diameter by 1mm thick NaI(Tl) detector. The detector is connected to a Canberra NaI InSpector{trademark}. The InSpector interfaces with an IBM ThinkPad laptop computer, which operates under Genie 2000 software. The wound counting system is maintained and used at the LLNL In Vivo Measurement Facility. The hardware is designed to be portable and is occasionally deployed to respond to the LLNL Health Services facility or local hospitals for examination of personnel that may have radioactive materials within a wound. The typical detection levels in using the LLNL portable wound counter in a low background area is 0.4 nCi to 0.6 nCi assuming a near zero mass source. This paper documents the systematic errors associated with in vivo measurement of radioactive materials buried within wounds using the LLNL portable wound measurement system. These errors are divided into two basic categories, calibration errors and in vivo wound measurement errors. Within these categories, there are errors associated with particle self-absorption of photons, overlying tissue thickness, source distribution within the wound, and count errors. These errors have been examined and
Filter induced errors in laser anemometer measurements using counter processors
NASA Technical Reports Server (NTRS)
Oberle, L. G.; Seasholtz, R. G.
1985-01-01
Simulations of laser Doppler anemometer (LDA) systems have focused primarily on noise studies or biasing errors. Another possible source of error is the choice of filter types and filter cutoff frequencies. Before it is applied to the counter portion of the signal processor, a Doppler burst is filtered to remove the pedestal and to reduce noise in the frequency bands outside the region in which the signal occurs. Filtering, however, introduces errors into the measurement of the frequency of the input signal which leads to inaccurate results. Errors caused by signal filtering in an LDA counter-processor data acquisition system are evaluated and filters for a specific application which will reduce these errors are chosen.
Detection and Classification of Measurement Errors in Bioimpedance Spectroscopy
Gil-Pita, Roberto
2016-01-01
Bioimpedance spectroscopy (BIS) measurement errors may be caused by parasitic stray capacitance, impedance mismatch, cross-talk, or (very likely) a combination of these. Accurate detection and identification are of great importance for further analysis because, in some cases and for some applications, certain measurement artifacts can be corrected, minimized or even avoided. In this paper we present a robust method to detect the presence of measurement artifacts and identify what kind of measurement error is present in BIS measurements. The method is based on supervised machine learning and uses a novel set of generalist features for measurement characterization in different immittance planes. Experimental validation has been carried out using a database of complex spectra BIS measurements obtained from different BIS applications and containing six different types of errors, as well as error-free measurements. The method obtained a low classification error (0.33%) and has shown good generalization. Since both the features and the classification schema are relatively simple, the implementation of this pre-processing task in the current hardware of bioimpedance spectrometers is possible. PMID:27362862
Measurement uncertainty evaluation of conicity error inspected on CMM
NASA Astrophysics Data System (ADS)
Wang, Dongxia; Song, Aiguo; Wen, Xiulan; Xu, Youxiong; Qiao, Guifang
2016-01-01
The cone is widely used in mechanical design for rotation, centering and fixing. Whether the conicity error can be measured and evaluated accurately will directly influence its assembly accuracy and working performance. According to the new generation geometrical product specification (GPS), the error and its measurement uncertainty should be evaluated together. The mathematical model of the minimum zone conicity error is established and an improved immune evolutionary algorithm (IIEA) is proposed to search for the conicity error. In the IIEA, initial antibodies are firstly generated by using quasi-random sequences and two kinds of affinities are calculated. Then, each antibody clone is generated and they are self-adaptively mutated so as to maintain diversity. Similar antibodies are suppressed and new random antibodies are generated. Because the mathematical model of conicity error is strongly nonlinear and the input quantities are not independent, it is difficult to use the Guide to the expression of uncertainty in measurement (GUM) method to evaluate measurement uncertainty. An adaptive Monte Carlo method (AMCM) is proposed to estimate measurement uncertainty, in which the number of Monte Carlo trials is selected adaptively and the quality of the numerical results is directly controlled. The cone part was machined on a CK6140 lathe and measured on a Miracle NC 454 Coordinate Measuring Machine (CMM). The experimental results confirm that the proposed method not only can search for the approximate solution of the minimum zone conicity error (MZCE) rapidly and precisely, but also can evaluate measurement uncertainty and give control variables with an expected numerical tolerance. The conicity errors computed by the proposed method are 20%-40% less than those computed by the NC454 CMM software and the evaluation accuracy improves significantly.
Laser tracker error determination using a network measurement
NASA Astrophysics Data System (ADS)
Hughes, Ben; Forbes, Alistair; Lewis, Andrew; Sun, Wenjuan; Veal, Dan; Nasr, Karim
2011-04-01
We report on a fast, easily implemented method to determine all the geometrical alignment errors of a laser tracker, to high precision. The technique requires no specialist equipment and can be performed in less than an hour. The technique is based on the determination of parameters of a geometric model of the laser tracker, using measurements of a set of fixed target locations, from multiple locations of the tracker. After fitting of the model parameters to the observed data, the model can be used to perform error correction of the raw laser tracker data or to derive correction parameters in the format of the tracker manufacturer's internal error map. In addition to determination of the model parameters, the method also determines the uncertainties and correlations associated with the parameters. We have tested the technique on a commercial laser tracker in the following way. We disabled the tracker's internal error compensation, and used a five-position, fifteen-target network to estimate all the geometric errors of the instrument. Using the error map generated from this network test, the tracker was able to pass a full performance validation test, conducted according to a recognized specification standard (ASME B89.4.19-2006). We conclude that the error correction determined from the network test is as effective as the manufacturer's own error correction methodologies.
Shallow Water Geodesy: Measurements Errors During Seabed Determination
NASA Astrophysics Data System (ADS)
Makar, A.
Precise determination of the seabed is important during mining of mineral resources and dredging of the seabed. Hydrographic measurement is a dynamic process of determining position and depth. Many errors arise during measurement, connected with the movement of the ship, the vertical distribution of the sound speed, and instrumentation errors of the echosounder. Using a high-precision positioning system does not by itself assure high-precision determination of the seabed. The causes of seabed determination errors, and methods for eliminating them, are presented and characterized.
Inter-rater reliability of select physical examination procedures in patients with neck pain.
Hanney, William J; George, Steven Z; Kolber, Morey J; Young, Ian; Salamh, Paul A; Cleland, Joshua A
2014-07-01
This study evaluated the inter-rater reliability of select examination procedures in patients with neck pain (NP) conducted over a 24- to 48-h period. Twenty-two patients with mechanical NP participated in a standardized examination. One examiner performed standardized examination procedures and a second blinded examiner repeated the procedures 24-48 h later with no treatment administered between examinations. Inter-rater reliability was calculated with the Cohen Kappa and weighted Kappa for ordinal data while continuous level data were calculated using an intraclass correlation coefficient model 2,1 (ICC2,1). Coefficients for categorical variables ranged from poor to moderate agreement (-0.22 to 0.70 Kappa) and coefficients for continuous data ranged from slight to moderate (ICC2,1 0.28-0.74). The standard error of measurement for cervical range of motion ranged from 5.3° to 9.9° while the minimal detectable change ranged from 12.5° to 23.1°. This study is the first to report inter-rater reliability values for select components of the cervical examination in those patients with NP performed 24-48 h after the initial examination. There was considerably less reliability when compared to previous studies, thus clinicians should consider how the passage of time may influence variability in examination findings over a 24- to 48-h period.
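The two derived quantities reported here follow from the reliability coefficient by standard formulas: SEM = SD·√(1 − ICC), and MDC95 = 1.96·√2·SEM for a test-retest design. A sketch with illustrative numbers (not the study's raw data):

```python
import math

def sem(sd, icc):
    """Standard error of measurement from between-subject SD and reliability."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sem_value):
    """Minimal detectable change at 95% confidence for a test-retest design."""
    return 1.96 * math.sqrt(2.0) * sem_value

# Illustrative values: a cervical range-of-motion measure with a
# between-subject SD of 12 degrees and an ICC(2,1) of 0.70.
s = sem(12.0, 0.70)       # roughly 6.6 degrees
change_needed = mdc95(s)  # roughly 18.2 degrees
```

An observed change smaller than the MDC cannot be distinguished from measurement error, which is why the 12.5° to 23.1° values above matter clinically.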
Multiscale measurement error models for aggregated small area health data.
Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin
2016-08-01
Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smoothes the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via the shared random effect have been proposed. One of the main goals in aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we applied measurement error models in the framework of the multiscale approach. To assess the benefit of using multiscale measurement error models, we compare the performance of multiscale models with and without measurement error in both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates. PMID:27566773
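The underestimation of the regression coefficient when predictor measurement error is ignored is the classical attenuation effect, which can be demonstrated outside the spatial setting with a simple simulation (illustrative only, not the authors' multiscale model):

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta = 10000, 2.0

x = rng.normal(size=n)                     # true (finer-level) predictor
x_obs = x + rng.normal(size=n)             # error-prone (aggregated) predictor
y = beta * x + rng.normal(size=n)          # outcome depends on the true predictor

def ols_slope(pred, resp):
    """Ordinary least-squares slope of resp on a single predictor."""
    p = pred - pred.mean()
    return float((p @ (resp - resp.mean())) / (p @ p))

b_true = ols_slope(x, y)        # close to the true beta = 2
b_naive = ols_slope(x_obs, y)   # attenuated by var(x) / (var(x) + noise var)
```

With unit-variance predictor and unit-variance noise the naive slope is shrunk by about half, which is the bias a measurement error model corrects.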
Beam induced vacuum measurement error in BEPC II
NASA Astrophysics Data System (ADS)
Huang, Tao; Xiao, Qiong; Peng, XiaoHua; Wang, HaiJing
2011-12-01
When the beam in the BEPCII storage ring aborts suddenly, the measured pressure of cold cathode gauges and ion pumps will drop suddenly and decrease to the base pressure gradually. This shows that there is a beam-induced positive error in the pressure measurement during beam operation. The error is the difference between measured and real pressures. Right after the beam aborts, the error will disappear immediately and the measured pressure will then be equal to the real pressure. For one gauge, we can fit a non-linear pressure-time curve with its measured pressure data from 20 seconds after a sudden beam abortion. From this negative exponential decay pumping-down curve, the real pressure at the time when the beam starts aborting is extrapolated. Using data from several sudden beam abortions, we obtained the errors of that gauge at different beam currents and found that the error is directly proportional to the beam current, as expected. A linear data fit then gives the proportionality coefficient of the equation, which we derived to evaluate the real pressure whenever the beam, with varying currents, is on.
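The extrapolation step, fitting a negative exponential pumping-down curve to data taken from 20 s after the abort and reading off the pressure at the abort time, can be sketched as follows (synthetic noise-free data; the base pressure is assumed known here, whereas the real analysis must also estimate it):

```python
import numpy as np

# Synthetic pumping-down curve after a beam abort, using the assumed model
# P(t) = P_base + A * exp(-t / tau); the pressure at the abort time t = 0
# is P_base + A (arbitrary pressure units).
p_base, a_true, tau = 1.0e-9, 4.0e-9, 30.0
t = np.arange(20.0, 120.0, 1.0)            # data from 20 s after the abort
p = p_base + a_true * np.exp(-t / tau)

# Log-linear fit of (P - P_base) recovers A and tau; extrapolate to t = 0.
slope, intercept = np.polyfit(t, np.log(p - p_base), 1)
tau_fit = -1.0 / slope
p_at_abort = p_base + np.exp(intercept)    # recovered pressure at abort time
```

Repeating this extrapolation for aborts at several beam currents and fitting a straight line through (current, error) pairs yields the proportionality coefficient the abstract describes.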
Phase measurement error in summation of electron holography series.
McLeod, Robert A; Bergen, Michael; Malac, Marek
2014-06-01
Off-axis electron holography is a method for the transmission electron microscope (TEM) that measures the electric and magnetic properties of a specimen. The electrostatic and magnetic potentials modulate the electron wavefront phase. The error in measurement of the phase therefore determines the smallest observable changes in electric and magnetic properties. Here we explore the summation of a hologram series to reduce the phase error and thereby improve the sensitivity of electron holography. Summation of a hologram series requires independent registration and correction of image drift and phase wavefront drift, the consequences of which are discussed. Optimization of the electro-optical configuration of the TEM for the double biprism configuration is examined. An analytical model of image and phase drift, composed of a combination of linear drift and a Brownian random walk, is derived and experimentally verified. The accuracy of image registration via cross-correlation and phase registration is characterized by simulated hologram series. The model of series summation errors allows the optimization of phase error as a function of exposure time and fringe carrier frequency for a target spatial resolution. An experimental example of hologram series summation is provided on WS2 fullerenes. A metric is provided to measure the object phase error from experimental results and compared to analytical predictions. The ultimate experimental object root-mean-square phase error is 0.006 rad (2π/1050) at a spatial resolution less than 0.615 nm and a total exposure time of 900 s. The ultimate phase error in vacuum adjacent to the specimen is 0.0037 rad (2π/1700). The analytical prediction of phase error differs from the experimental metrics by +7% inside the object and -5% in the vacuum, indicating that the model can provide reliable quantitative predictions.
Examination of Rater Training Effect and Rater Eligibility in L2 Performance Assessment
ERIC Educational Resources Information Center
Kondo, Yusuke
2010-01-01
The purposes of this study were to investigate the effects of rater training in an L2 performance assessment and to examine the eligibility of L2 users of English as raters in L2 performance assessment. Rater training was conducted in order for raters to clearly understand the criteria, the evaluation items, and the evaluation procedure. In this…
Error Evaluation of Methyl Bromide Aerodynamic Flux Measurements
Majewski, M.S.
1997-01-01
Methyl bromide volatilization fluxes were calculated for a tarped and a nontarped field using 2 and 4 hour sampling periods. These field measurements were averaged in 8, 12, and 24 hour increments to simulate longer sampling periods. The daily flux profiles were progressively smoothed, and the cumulative volatility losses increased by 20 to 30% with each longer sampling period. Error associated with the original flux measurements was determined from linear regressions of measured wind speed and air concentration as a function of height, and averaged approximately 50%. The high errors resulted from long application times, which produced a nonuniform source strength, and from variable tarp permeability, which is influenced by temperature, moisture, and thickness. The increase in cumulative volatilization losses that resulted from longer sampling periods was within the experimental error of the flux determination method.
How Do Raters Judge Spoken Vocabulary?
ERIC Educational Resources Information Center
Li, Hui
2016-01-01
The aim of the study was to investigate how raters come to their decisions when judging spoken vocabulary. Segmental rating was introduced to quantify raters' decision-making process. It is hoped that this simulated study brings fresh insight to future methodological considerations with spoken data. Twenty trainee raters assessed five Chinese…
Poulos, Natalie S.; Pasch, Keryn E.
2015-01-01
Few studies of the food environment have collected primary data, and even fewer have reported the reliability of the tool used. This study focused on the development of an innovative electronic data collection tool used to document outdoor food and beverage (FB) advertising and establishments near 43 middle and high schools in the Outdoor MEDIA Study. Tool development used GIS-based mapping, an electronic data collection form on handheld devices, and an easily adaptable interface to efficiently collect primary data within the food environment. For the reliability study, two teams of data collectors documented all FB advertising and establishments within one half-mile of six middle schools. Inter-rater reliability was calculated overall and by advertisement or establishment category using percent agreement. A total of 824 items were documented (range=8–229 per school): advertisements (n=233), establishment advertisements (n=499), and establishments (n=92). Overall inter-rater reliability of the developed tool ranged from 69–89% for advertisements and establishments. Results suggest that the developed tool is highly reliable and effective for documenting the outdoor FB environment. PMID:26022774
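Overall percent agreement, the reliability statistic used above, is straightforward to compute from item-level codes. A minimal sketch (the category labels below are invented for illustration, not study data):

```python
def percent_agreement(rater1, rater2):
    """Overall percent agreement between two raters' item-level codes."""
    assert len(rater1) == len(rater2)
    matches = sum(a == b for a, b in zip(rater1, rater2))
    return 100.0 * matches / len(rater1)

# Hypothetical category codes assigned by two data-collection teams
team_a = ["food", "drink", "food", "food", "drink",
          "food", "other", "food", "drink", "food"]
team_b = ["food", "drink", "food", "other", "drink",
          "food", "other", "food", "food", "food"]
pa = percent_agreement(team_a, team_b)   # 8 of 10 items match
```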
Cumulative Measurement Errors for Dynamic Testing of Space Flight Hardware
NASA Technical Reports Server (NTRS)
Winnitoy, Susan
2012-01-01
Dynamic testing of space flight hardware requires precise measurements during hardware motion and contact. While performing dynamic testing of an active docking system, researchers found that the data from the motion platform, test hardware and two external measurement systems exhibited frame offsets and rotational errors. While the errors were relatively small when considering the motion scale overall, they substantially exceeded the individual accuracies for each component. After evaluating both the static and dynamic measurements, researchers found that the static measurements introduced significantly more error into the system than the dynamic measurements even though, in theory, the static measurement errors should be smaller than the dynamic. In several cases, the magnitude of the errors varied widely for the static measurements. Upon further investigation, researchers found the larger errors to be a consequence of hardware alignment issues, frame location and measurement technique, whereas the smaller errors were dependent on the number of measurement points. This paper details and quantifies the individual and cumulative errors of the docking system and describes methods for reducing the overall measurement error. The overall quality of the dynamic docking tests for flight hardware verification was improved by implementing these error reductions.

Non-Gaussian error distribution of 7Li abundance measurements
NASA Astrophysics Data System (ADS)
Crandall, Sara; Houston, Stephen; Ratra, Bharat
2015-07-01
We construct the error distribution of 7Li abundance measurements for 66 observations (with error bars) used by Spite et al. (2012) that give A(Li) = 2.21 ± 0.065 (median and 1σ symmetrized error). This error distribution is somewhat non-Gaussian, with larger probability in the tails than a Gaussian distribution predicts. The 95.4% confidence limits are 3.0σ in terms of the quoted errors. We fit the data to four commonly used distributions: Gaussian, Cauchy, Student's t, and double exponential, with the center of the distribution found with both weighted mean and median statistics. The data are reasonably well described by a widened n = 8 Student's t distribution. Assuming Gaussianity, the observed A(Li) is 6.5σ away from that expected from standard Big Bang Nucleosynthesis (BBN) given the Planck observations. Accounting for the non-Gaussianity of the observed A(Li) error distribution reduces the discrepancy to 4.9σ, which is still significant.
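The practical consequence of heavy tails is that a stated confidence level requires a wider interval, in units of the quoted error, than the Gaussian rule of thumb. A small sketch using `scipy.stats` (df = 8 follows the abstract; the widening factor the authors fit is omitted here):

```python
from scipy import stats

# For a Gaussian, 95.4% of the probability lies within ~2 quoted-error
# units of the center. A Student's t with 8 degrees of freedom needs a
# wider interval for the same coverage, because its tails are heavier.
coverage = 0.954
q = 0.5 + coverage / 2.0
z_gauss = stats.norm.ppf(q)      # ~2.0 quoted-error units
z_t8 = stats.t.ppf(q, df=8)      # wider than the Gaussian value
```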
Optimal measurement strategies for effective suppression of drift errors
Yashchuk, Valeriy V.
2009-04-16
Drifting of experimental set-ups with change of temperature or other environmental conditions is the limiting factor of many, if not all, precision measurements. The measurement error due to a drift is, in some sense, in between random noise and systematic error. In the general case, the error contribution of a drift cannot be averaged out using a number of measurements identically carried out over a reasonable time. In contrast to systematic errors, drifts are usually not stable enough for a precise calibration. Here a rather general method for effective suppression of the spurious effects caused by slow drifts in a large variety of instruments and experimental set-ups is described. An analytical derivation is presented of an identity describing the optimal measurement strategies for suppressing the contribution of a slow drift modeled by a polynomial of a given order. A recursion rule as well as a general mathematical proof of the identity is given. The effectiveness of the discussed method is illustrated with an application of the derived optimal scanning strategies to precise surface slope measurements with a surface profiler.
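One concrete family of strategies with this flavor uses sign-alternating measurement weights. As an illustration (this is the classical Prouhet–Thue–Morse construction, used here as a related example and not necessarily the specific identity derived in the paper), the first 2^n terms of the Thue–Morse sign sequence exactly annihilate any polynomial drift of degree below n over equally spaced measurements:

```python
import numpy as np

def thue_morse_signs(n):
    """Signs (-1)**popcount(k) for k = 0..2**n - 1: the Prouhet-
    Thue-Morse sequence, whose first 2**n terms annihilate any
    polynomial of degree < n when used as summation weights."""
    k = np.arange(2 ** n)
    popcount = np.array([bin(v).count("1") for v in k])
    return (-1.0) ** popcount

n = 3
t = np.arange(2 ** n, dtype=float)       # equally spaced measurement times
drift = 1.0 + 0.5 * t - 0.2 * t ** 2     # degree-2 polynomial drift
signs = thue_morse_signs(n)
residual = np.dot(signs, drift)          # the drift cancels exactly
```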
Rater Wealth Predicts Perceptions of Outgroup Competence.
Chan, Wayne; McCrae, Robert R; Rogers, Darrin L; Weimer, Amy A; Greenberg, David M; Terracciano, Antonio
2011-12-01
National income has a pervasive influence on the perception of ingroup stereotypes, with high status and wealthy targets perceived as more competent. In two studies we investigated the degree to which economic wealth of raters related to perceptions of outgroup competence. Raters' economic wealth predicted trait ratings when 1) raters in 48 other cultures rated Americans' competence and 2) Mexican Americans rated Anglo Americans' competence. Rater wealth also predicted ratings of interpersonal warmth on the culture level. In conclusion, raters' economic wealth, either nationally or individually, is significantly associated with perception of outgroup members, supporting the notion that ingroup conditions or stereotypes function as frames of reference in evaluating outgroup traits.
The effect of measurement error on surveillance metrics
Weaver, Brian Phillip; Hamada, Michael S.
2012-04-24
The purpose of this manuscript is to describe different simulation studies that CCS-6 has performed to understand the effects of measurement error on surveillance metrics. We assume that the measured items come from a larger population of items. We denote the random variable associated with an item's value of an attribute of interest as X, and assume X ~ N(μ, σ²). This distribution represents the variability in the population of interest, and we wish to make inference on the parameters μ and σ or on some function of these parameters. When an item X is selected from the larger population, a measurement is made on some attribute of it. This measurement is made with error, and the true value of X is not observed. The rest of this section presents simulation results for different measurement cases encountered.
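The basic effect such simulations quantify can be reproduced in a few lines: additive measurement error leaves the estimate of μ unbiased but inflates the apparent spread, since Var(Y) = σ² + σ_e². A sketch with assumed parameter values (not taken from the manuscript):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, sigma_e = 10.0, 2.0, 1.0
n = 100_000

x = rng.normal(mu, sigma, n)            # true attribute values X ~ N(mu, sigma^2)
y = x + rng.normal(0.0, sigma_e, n)     # measured values, with error added

# The sample mean of Y still estimates mu, but the sample SD of Y
# estimates sqrt(sigma^2 + sigma_e^2), not sigma.
mean_est = y.mean()
sd_inflated = y.std(ddof=1)             # ~ sqrt(4 + 1) = 2.236
```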
GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS
Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...
Modified McLeod pressure gage eliminates measurement errors
NASA Technical Reports Server (NTRS)
Kells, M. C.
1966-01-01
Modification of a McLeod gage eliminates errors in measuring absolute pressure of gases in the vacuum range. A valve which is internal to the gage and is magnetically actuated is positioned between the mercury reservoir and the sample gas chamber.
Bayesian conformity assessment in presence of systematic measurement errors
NASA Astrophysics Data System (ADS)
Carobbi, Carlo; Pennecchi, Francesca
2016-04-01
Conformity assessment of the distribution of the values of a quantity is investigated by using a Bayesian approach. The effect of systematic, non-negligible measurement errors is taken into account. The analysis is general, in the sense that the probability distribution of the quantity can be of any kind, that is, even different from the ubiquitous normal distribution, and the measurement model function, linking the measurand with the observable and non-observable influence quantities, can be non-linear. Further, any joint probability density function can be used to model the available knowledge about the systematic errors. It is demonstrated that the result of the Bayesian analysis developed here reduces to the standard result (obtained through a frequentistic approach) when the systematic measurement errors are negligible. A consolidated frequentistic extension of that standard result, aimed at including the effect of a systematic measurement error, is directly compared with the Bayesian result, whose superiority is demonstrated. Application of the results obtained here to the derivation of the operating characteristic curves used for sampling plans for inspection by variables is also introduced.
Nonparametric Item Response Curve Estimation with Correction for Measurement Error
ERIC Educational Resources Information Center
Guo, Hongwen; Sinharay, Sandip
2011-01-01
Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally.…
Inter-tester Agreement in Refractive Error Measurements
Huang, Jiayan; Maguire, Maureen G.; Ciner, Elise; Kulp, Marjean T.; Quinn, Graham E.; Orel-Bixler, Deborah; Cyert, Lynn A.; Moore, Bruce; Ying, Gui-Shuang
2014-01-01
Purpose To determine the inter-tester agreement of refractive error measurements between lay and nurse screeners using the Retinomax Autorefractor (Retinomax) and the SureSight Vision Screener (SureSight). Methods Trained lay and nurse screeners measured refractive error in 1452 preschoolers (3- to 5-years old) using the Retinomax and the SureSight in a random order for screeners and instruments. Inter-tester agreement between lay and nurse screeners was assessed for sphere, cylinder and spherical equivalent (SE) using the mean difference and the 95% limits of agreement. The mean inter-tester difference (lay minus nurse) was compared between groups defined by the child's age, cycloplegic refractive error, and the reading's confidence number using analysis of variance. The limits of agreement were compared between groups using the Brown-Forsythe test. Inter-eye correlation was accounted for in all analyses. Results The mean inter-tester differences (95% limits of agreement) were −0.04 (−1.63, 1.54) Diopter (D) sphere, 0.00 (−0.52, 0.51) D cylinder, and −0.04 (−1.65, 1.56) D SE for the Retinomax; and 0.05 (−1.48, 1.58) D sphere, 0.01 (−0.58, 0.60) D cylinder, and 0.06 (−1.45, 1.57) D SE for the SureSight. For either instrument, the mean inter-tester differences in sphere and SE did not differ by the child's age, cycloplegic refractive error, or the reading's confidence number. However, for both instruments, the limits of agreement were wider when eyes had significant refractive error or the reading's confidence number was below the manufacturer's recommended value. Conclusions Among Head Start preschool children, trained lay and nurse screeners agree well in measuring refractive error using the Retinomax or the SureSight. Both instruments had similar inter-tester agreement in refractive error measurements independent of the child's age. Significant refractive error and a reading with a low confidence number were associated with worse inter-tester agreement.
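The mean difference and 95% limits of agreement used above are the standard Bland-Altman quantities. A minimal sketch with hypothetical paired readings (the values below are invented for illustration, not study data):

```python
import numpy as np

def limits_of_agreement(a, b):
    """Mean difference and Bland-Altman 95% limits of agreement
    between paired measurements from two testers."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    mean_diff = d.mean()
    half_width = 1.96 * d.std(ddof=1)
    return mean_diff, mean_diff - half_width, mean_diff + half_width

# Hypothetical paired sphere readings (diopters) from two screeners
lay =   [1.00, -0.50, 2.25, 0.75, -1.25, 3.00]
nurse = [1.25, -0.75, 2.00, 1.00, -1.00, 2.75]
md, lo, hi = limits_of_agreement(lay, nurse)
```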
Error Correction for Foot Clearance in Real-Time Measurement
NASA Astrophysics Data System (ADS)
Wahab, Y.; Bakar, N. A.; Mazalan, M.
2014-04-01
Mobility performance level, fall-related injuries, undiagnosed disease, and stage of aging can be detected through examination of the gait pattern. The gait pattern is normally directly related to the lower-limb performance condition in addition to other significant factors. For that reason, the foot is the most important part for gait analysis in an in-situ measurement system and thus directly affects the gait pattern. This paper reviews the development of an ultrasonic system with error correction using an inertial measurement unit for gait analysis in real-life measurement of foot clearance. The paper begins with the related literature, where the necessity of the measurement is introduced. This is followed by the methodology, the problem, and the proposed solution. Next, the paper explains the experimental setup for the error correction using the proposed instrumentation, followed by results and discussion. Finally, planned future work is outlined.
Automatic diagnostic system for measuring ocular refractive errors
NASA Astrophysics Data System (ADS)
Ventura, Liliane; Chiaradia, Caio; de Sousa, Sidney J. F.; de Castro, Jarbas C.
1996-05-01
Ocular refractive errors (myopia, hyperopia, and astigmatism) are automatically and objectively determined by projecting a light target onto the retina using an infrared (850 nm) diode laser. The light vergence which emerges from the eye (light scattered from the retina) is evaluated in order to determine the corresponding ametropia. The system basically consists of projecting a target (ring) onto the retina and analyzing the scattered light with a CCD camera. The light scattered by the eye is divided into six portions (3 meridians) by using a mask and a set of six prisms. The distance between the two images provided by each of the meridians leads to the refractive error of the referred meridian. Hence, it is possible to determine the refractive error at three different meridians, which gives the exact solution for the eye's refractive error (spherical and cylindrical components and the axis of the astigmatism). The computational basis used for the image analysis is a heuristic search, which provides satisfactory calculation times for our purposes. The peculiar shape of the target, a ring, provides a wider range of measurement and also saves parts of the retina from unnecessary laser irradiation. Measurements were done in artificial and in vivo eyes (using cycloplegics) and the results were in good agreement with the retinoscopic measurements.
Fairus, Fariza Zainudin; Joseph, Leonard Henry; Omar, Baharudin; Ahmad, Johan; Sulaiman, Riza
2016-01-01
Background The understanding of vertical ground reaction force (VGRF) during walking and half-squatting is necessary and commonly utilised during the rehabilitation period. The purpose of this study was to establish the measurement reproducibility of VGRF, reporting the minimal detectable change (MDC) during walking and half-squatting activity among healthy male adults. Methods 14 male adults with mean (SD) age of 24.88 (5.24) years were enlisted in this study. The VGRF was assessed using force plates embedded into a customised walking platform. Participants were required to carry out three trials each of gait and half-squat. Each participant completed the two measurement sessions within a day, approximately four hours apart. Results Measurements of VGRF between sessions presented excellent VGRF data for walking (ICC Left = 0.88, ICC Right = 0.89). High reliability of VGRF was also noted during the half-squat activity (ICC Left = 0.95, ICC Right = 0.90). The standard errors of measurement (SEM) of VGRF were less than 8.35 Nm/kg for the gait task and 4.67 Nm/kg for the half-squat task. Conclusion The equipment set-up and measurement procedure used to quantify VGRF during walking and half-squatting among healthy males displayed excellent reliability. Researchers should consider using this method to measure the VGRF during functional performance assessment. PMID:27547111
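Given an ICC and a between-subject SD, the SEM and the minimal detectable change at the 95% level follow from standard formulas (SEM = SD·√(1−ICC), MDC95 = 1.96·√2·SEM). The numbers below are assumed for illustration, not taken from the study:

```python
import numpy as np

def minimal_detectable_change(sd, icc):
    """SEM and MDC at the 95% level from a between-subject SD and a
    test-retest ICC: SEM = SD*sqrt(1 - ICC), MDC95 = 1.96*sqrt(2)*SEM."""
    sem = sd * np.sqrt(1.0 - icc)
    return sem, 1.96 * np.sqrt(2.0) * sem

# Hypothetical values in the spirit of the study above
sem, mdc = minimal_detectable_change(sd=15.0, icc=0.90)
```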
Error and uncertainty in Raman thermal conductivity measurements
Beechem, Thomas Edwin; Yates, Luke; Graham, Samuel
2015-04-22
We investigated error and uncertainty in Raman thermal conductivity measurements via finite element based numerical simulation of two geometries often employed -- Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter -- termed the Raman stress factor -- is derived to identify when stress effects will induce large levels of error. Together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.
Reducing Errors by Use of Redundancy in Gravity Measurements
NASA Technical Reports Server (NTRS)
Kulikov, Igor; Zak, Michail
2004-01-01
A methodology for improving gravity-gradient measurement data exploits the constraints imposed upon the components of the gravity-gradient tensor by the conditions of integrability needed for reconstruction of the gravitational potential. These constraints are derived from the basic equation for the gravitational potential and from mathematical identities that apply to the gravitational potential and its partial derivatives with respect to spatial coordinates. Consider the gravitational potential in a Cartesian coordinate system {x1,x2,x3}. If one measures all the components of the gravity-gradient tensor at all points of interest within a region of space in which one seeks to characterize the gravitational field, one obtains redundant information. One could utilize the constraints to select a minimum (that is, nonredundant) set of measurements from which the gravitational potential could be reconstructed. Alternatively, one could exploit the redundancy to reduce errors from noisy measurements. A convenient example is that of the selection of a minimum set of measurements to characterize the gravitational field at n³ points (where n is an integer) in a cube. Without the benefit of such a selection, it would be necessary to make 9n³ measurements because the gravity-gradient tensor has 9 components at each point. The problem of utilizing the redundancy to reduce errors in noisy measurements is an optimization problem: given a set of noisy values of the components of the gravity-gradient tensor at the measurement points, one seeks a set of corrected values, a set that is optimum in that it minimizes some measure of error (e.g., the sum of squares of the differences between the corrected and noisy measurement values) while taking account of the fact that the constraints must apply to the exact values. The problem as thus posed leads to a vector equation that can be solved to obtain the corrected values.
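A simple instance of using constraints to correct noisy data: in free space the gravity-gradient tensor must be symmetric (mixed partial derivatives commute) and trace-free (Laplace's equation), and the least-squares projection of a noisy 3×3 measurement onto that constraint set has a closed form. This is a sketch of the idea at a single point, not the paper's full multi-point optimization:

```python
import numpy as np

def correct_gradient_tensor(g):
    """Least-squares projection of a noisy 3x3 gravity-gradient tensor
    onto the constraint set: symmetric (mixed partials commute) and
    trace-free (Laplace's equation in free space)."""
    g = np.asarray(g, dtype=float)
    sym = 0.5 * (g + g.T)                        # nearest symmetric tensor
    return sym - np.eye(3) * sym.trace() / 3.0   # remove the trace

# Hypothetical noisy measurement (arbitrary units)
noisy = np.array([[1.02, 0.48, 0.11],
                  [0.52, -0.55, 0.29],
                  [0.09, 0.31, -0.42]])
corrected = correct_gradient_tensor(noisy)
```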
Gómez-Cabello, Alba; Vicente-Rodríguez, Germán; Albers, Ulrike; Mata, Esmeralda; Rodriguez-Marroyo, Jose A.; Olivares, Pedro R.; Gusi, Narcis; Villa, Gerardo; Aznar, Susana; Gonzalez-Gross, Marcela; Casajús, Jose A.; Ara, Ignacio
2012-01-01
Background The elderly EXERNET multi-centre study aims to collect normative anthropometric data for old functionally independent adults living in Spain. Purpose To describe the standardization process and reliability of the anthropometric measurements carried out in the pilot study and during the final workshop, examining both intra- and inter-rater errors for measurements. Materials and Methods A total of 98 elderly people from five different regions participated in the intra-rater error assessment, and 10 different seniors living in the city of Toledo (Spain) participated in the inter-rater assessment. We examined both intra- and inter-rater errors for heights and circumferences. Results For height, intra-rater technical errors of measurement (TEMs) were smaller than 0.25 cm. For circumferences and knee height, TEMs were smaller than 1 cm, except for waist circumference in the city of Cáceres. Reliability for heights and circumferences was greater than 98% in all cases. Inter-rater TEMs were 0.61 cm for height, 0.75 cm for knee height and ranged between 2.70 and 3.09 cm for the circumferences measured. Inter-rater reliabilities for anthropometric measurements were always higher than 90%. Conclusion The harmonization process, including the workshop and pilot study, guarantees the quality of the anthropometric measurements in the elderly EXERNET multi-centre study. High reliability and low TEM may be expected when assessing anthropometry in an elderly population. PMID:22860013
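The TEM and reliability statistics reported above follow standard anthropometric formulas: TEM = √(Σd²/2n) over n paired repeat measurements, and reliability R = 1 − TEM²/SD². A sketch with invented height data (the readings below are hypothetical, and the pooled-SD choice is one common convention):

```python
import numpy as np

def tem_and_reliability(trial1, trial2):
    """Technical error of measurement for paired repeat measurements
    and the associated reliability coefficient R = 1 - TEM^2/SD^2,
    expressed as a percentage."""
    a = np.asarray(trial1, dtype=float)
    b = np.asarray(trial2, dtype=float)
    d = a - b
    tem = np.sqrt(np.sum(d ** 2) / (2 * d.size))
    sd = np.concatenate([a, b]).std(ddof=1)   # inter-subject spread
    return tem, 100.0 * (1.0 - (tem / sd) ** 2)

# Hypothetical repeated height measurements (cm) on six subjects
h1 = [160.1, 172.4, 155.0, 168.3, 181.2, 166.7]
h2 = [160.3, 172.2, 155.2, 168.1, 181.4, 166.5]
tem, rel = tem_and_reliability(h1, h2)
```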
A Surgery Oral Examination: Interrater Agreement and the Influence of Rater Characteristics.
ERIC Educational Resources Information Center
Burchard, Kenneth W.; And Others
1995-01-01
A study measured interrater reliability among 140 United States and Canadian surgery exam raters and the influences of age, years in practice, and experience as an examiner on individual scores. Results indicate three aspects of examinee performance influenced scores: verbal style, dress, and content of answers. No rater characteristic…
Error in total ozone measurements arising from aerosol attenuation
NASA Technical Reports Server (NTRS)
Thomas, R. W. L.; Basher, R. E.
1979-01-01
A generalized least squares method for deducing both total ozone and aerosol extinction spectrum parameters from Dobson spectrophotometer measurements was developed. An error analysis applied to this system indicates that there is little advantage to additional measurements once a sufficient number of line pairs have been employed to solve for the selected detail in the attenuation model. It is shown that when there is a predominance of small particles (less than about 0.35 microns in diameter) the total ozone from the standard AD system is too high by about one percent. When larger particles are present the derived total ozone may be an overestimate or an underestimate but serious errors occur only for narrow polydispersions.
PROCESSING AND ANALYSIS OF THE MEASURED ALIGNMENT ERRORS FOR RHIC.
Pilat, F.; Hemmer, M.; Ptitsin, V.; Tepikian, S.; Trbojevic, D.
1999-03-29
All elements of the Relativistic Heavy Ion Collider (RHIC) have been installed in ideal survey locations, which are defined as the optimum locations of the fiducials with respect to the positions generated by the design. The alignment process included the presurvey of all elements which could affect the beams. During this procedure special attention was paid to the precise determination of the quadrupole centers as well as the roll angles of the quadrupoles and dipoles. After installation the machine was surveyed, and the resulting as-built measured positions of the fiducials were stored and structured in the survey database. We describe how the alignment errors, inferred by comparison of ideal and as-built data, have been processed and analyzed by including them in the RHIC modeling software. The RHIC model, which also includes individual measured errors for all magnets in the machine and is automatically generated from databases, allows the study of the impact of the measured alignment errors on the machine.
Geometric error measurement of spiral bevel gears and data processing
NASA Astrophysics Data System (ADS)
Cao, Xue-mei; Cao, Qing-mei; Xu, Hao
2008-12-01
This paper calculates the theoretical tooth surface of a spiral bevel gear and, using a coordinate measuring machine, inspects the actual tooth surface, which provides an objective and quantitative method for inspecting the tooth surfaces of spiral bevel gears. For many reasons there are some deviations between the actual tooth surface and the theoretical tooth surface. Based on differential geometry and space engagement theory, this paper deduces the analytical representation of the theoretical tooth surface through the process of gear generation. After comparing the coordinates of the actual gear tooth surface and the theoretical tooth surface, a high-precision analysis graphic of tooth surface errors can be obtained through measurement data processing. A pair of aviation spiral bevel gears manufactured on a Phoenix 800PG grinding machine were inspected on a Mahr measuring machine. The comparison of gear surface errors, computed respectively by the method of this paper and by Mahr's software, shows the consistency of the error distribution. The experiment verifies the validity and feasibility of the method presented in this paper.
ERIC Educational Resources Information Center
Battauz, Michela; Bellio, Ruggero
2011-01-01
This paper proposes a structural analysis for generalized linear models when some explanatory variables are measured with error and the measurement error variance is a function of the true variables. The focus is on latent variables investigated on the basis of questionnaires and estimated using item response theory models. Latent variable…
Error reduction techniques for measuring long synchrotron mirrors
Irick, S.
1998-07-01
Many instruments and techniques are used for measuring long mirror surfaces. A Fizeau interferometer may be used to measure mirrors much longer than the interferometer aperture size by using grazing incidence at the mirror surface and analyzing the light reflected from a flat end mirror. Advantages of this technique are data acquisition speed and use of a common instrument. Disadvantages are reduced sampling interval, uncertainty of tangential position, and sagittal/tangential aspect ratio other than unity. Also, deep aspheric surfaces cannot be measured on a Fizeau interferometer without a specially made fringe nulling holographic plate. Other scanning instruments have been developed for measuring height, slope, or curvature profiles of the surface, but lack accuracy for very long scans required for X-ray synchrotron mirrors. The Long Trace Profiler (LTP) was developed specifically for long x-ray mirror measurement, and still outperforms other instruments, especially for aspheres. Thus, this paper focuses on error reduction techniques for the LTP.
Consensus recommendations on rater training and certification.
West, Mark D; Daniel, David G; Opler, Mark; Wise-Rankovic, Alexandria; Kalali, Amir
2014-01-01
There is currently no accepted standard for the clinical research industry to follow when selecting and training raters to administer rating scales in clinical neuroscience trials. This article offers guidelines, based on expert recommendations of the CNS Summit Rater Training and Certification Committee, for selecting, training, and evaluating raters. The article also defines terminology and offers recommendations for considering raters with prior training and certification. These guidelines are intended for investigators, pharmaceutical companies, contract research organizations, and other entities involved in clinical neuroscience trials.
Improving optical bench radius measurements using stage error motion data
Schmitz, Tony L.; Gardner, Neil; Vaughn, Matthew; Medicus, Kate; Davies, Angela
2008-12-20
We describe the application of a vector-based radius approach to optical bench radius measurements in the presence of imperfect stage motions. In this approach, the radius is defined using a vector equation and homogeneous transformation matrix formalism. This is in contrast to the typical technique, where the displacement between the confocal and cat's eye null positions alone is used to determine the test optic radius. An important aspect of the vector-based radius definition is the intrinsic correction for measurement biases, such as straightness errors in the stage motion and cosine misalignment between the stage and displacement gauge axis, which lead to an artificially small radius value if the traditional approach is employed. Measurement techniques and results are provided for the stage error motions, which are then combined with the setup geometry through the analysis to determine the radius of curvature for a spherical artifact. Comparisons are shown between the new vector-based radius calculation, traditional radius computation, and a low uncertainty mechanical measurement. Additionally, the measurement uncertainty for the vector-based approach is determined using Monte Carlo simulation and compared to experimental results.
Rating Written Performance: What Do Raters Do and Why?
ERIC Educational Resources Information Center
Kuiken, Folkert; Vedder, Ineke
2014-01-01
This study investigates the relationship in L2 writing between raters' judgments of communicative adequacy and linguistic complexity by means of six-point Likert scales, and general measures of linguistic performance. The participants were 39 learners of Italian and 32 of Dutch, who wrote two short argumentative essays. The same writing tasks…
Sedrez, Juliana A.; Candotti, Cláudia T.; Rosa, Maria I. Z.; Medeiros, Fernanda S.; Marques, Mariana T.; Loss, Jefferson F.
2016-01-01
Introduction: The early evaluation of the spine in children is desirable because it is at this stage of development that the greatest changes in the body structures occur. Objective: To determine the test-retest, intra- and inter-rater reliability of the Flexicurve instrument for the evaluation of spinal curvatures in children. Method: Forty children ranging from 5 to 15 years of age were evaluated by two independent evaluators using the Flexicurve to model the spine. Agreement was evaluated using intraclass correlation coefficients (ICC), the Standard Error of Measurement (SEM), and the Minimal Detectable Change (MDC). Results: In relation to thoracic kyphosis, the Flexicurve showed excellent correlation in terms of test-retest reliability (ICC(2,2) = 0.87) and moderate correlation in terms of intra-rater (ICC(2,2) = 0.68) and inter-rater reliability (ICC(2,2) = 0.72). In relation to lumbar lordosis, it showed moderate correlation in terms of test-retest reliability (ICC(2,2) = 0.66), intra-rater reliability (ICC(2,2) = 0.50), and inter-rater reliability (ICC(2,2) = 0.56). Conclusion: This evaluation of the reliability of the Flexicurve supports its use in school screening. However, to monitor spinal curvatures in the sagittal plane in children, complementary clinical measures are necessary. Further studies are required to investigate the concurrent validity of the instrument in order to identify its diagnostic capacity. PMID:26786078
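The reliability indices reported in this record (ICC, SEM, MDC) follow standard formulas. As an illustrative sketch only (toy kyphosis-angle data, not the study's data), a two-way random-effects ICC for the average of k raters can be computed from the ANOVA mean squares:

```python
import numpy as np

def icc_2k(ratings):
    """ICC(2,k): two-way random effects, absolute agreement, average of k raters
    (Shrout & Fleiss convention). ratings: n subjects x k raters."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    col_means = x.mean(axis=0)
    # ANOVA mean squares
    bms = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between subjects
    jms = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between raters
    resid = x - row_means[:, None] - col_means[None, :] + grand
    ems = (resid ** 2).sum() / ((n - 1) * (k - 1))         # residual
    return (bms - ems) / (bms + (jms - ems) / n)

def sem_mdc(ratings, icc):
    """Standard Error of Measurement and 95% Minimal Detectable Change."""
    sd = np.asarray(ratings, dtype=float).std(ddof=1)  # SD of all scores
    sem = sd * np.sqrt(1.0 - icc)
    mdc95 = 1.96 * sem * np.sqrt(2.0)
    return sem, mdc95

# Two raters measuring thoracic kyphosis angle (degrees) in 5 children (toy data)
scores = [[40, 42], [35, 33], [48, 47], [30, 34], [44, 45]]
icc = icc_2k(scores)
sem, mdc = sem_mdc(scores, icc)
```

SEM and MDC translate a unitless ICC back into the measurement's own units, which is why studies like this one report all three.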
Variance Estimation of Nominal-Scale Inter-Rater Reliability with Random Selection of Raters
ERIC Educational Resources Information Center
Gwet, Kilem Li
2008-01-01
Most inter-rater reliability studies using nominal scales suggest the existence of two populations of inference: the population of subjects (collection of objects or persons to be rated) and that of raters. Consequently, the sampling variance of the inter-rater reliability coefficient can be seen as a result of the combined effect of the sampling…
Propagation of radiosonde pressure sensor errors to ozonesonde measurements
NASA Astrophysics Data System (ADS)
Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.
2014-01-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.6 hPa in the free troposphere, with nearly a third > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~ 5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~ 30 km) can approach greater than ±10% (> 25% of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when the O3 profile is
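For readers unfamiliar with ECC processing: the ozone mixing ratio is the measured O3 partial pressure divided by the ambient pressure, so a radiosonde pressure offset maps directly into a mixing ratio error. A minimal sketch with illustrative numbers (not the authors' processing code):

```python
def o3_mixing_ratio_ppmv(p_o3_mPa, p_hPa):
    """Ozone volume mixing ratio (ppmv) from ozonesonde partial pressure (mPa)
    and ambient pressure (hPa); the factor 10 folds in the unit conversion."""
    return 10.0 * p_o3_mPa / p_hPa

# Effect of a radiosonde pressure offset near 10 hPa (~30 km); values assumed
p_true, p_o3 = 10.0, 5.0                  # hPa, mPa (illustrative)
true_mr = o3_mixing_ratio_ppmv(p_o3, p_true)
errors = {off: 100.0 * (o3_mixing_ratio_ppmv(p_o3, p_true + off) - true_mr) / true_mr
          for off in (-1.0, 1.0)}         # +/-1 hPa offsets, as in the study
# a +1 hPa offset near 10 hPa biases the mixing ratio by roughly -9%
```

Because the ambient pressure appears in the denominator, a fixed-size offset matters most where pressure is lowest, which is why the paper finds the largest O3MR errors above 10 hPa.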
Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements
NASA Technical Reports Server (NTRS)
Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.
2014-01-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde-ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI-Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.6 hPa in the free troposphere, with nearly a third > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~5 percent of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96 percent of launches lie within ±5 percent O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~30 km) can approach greater than ±10 percent (>25 percent of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when
Data Reconciliation and Gross Error Detection: A Filtered Measurement Test
Himour, Y.
2008-06-12
Measured process data commonly contain inaccuracies because the measurements are obtained using imperfect instruments. In addition to random errors, one can expect systematic bias caused by miscalibrated instruments, or outliers caused by process peaks such as sudden power fluctuations. Data reconciliation is the adjustment of a set of process data based on a model of the process so that the derived estimates conform to natural laws. In this paper, we explore a predictor-corrector filter based on data reconciliation, and then combine a modified version of the measurement test with the studied filter to detect probable outliers that can affect process measurements. The strategy presented is tested using dynamic simulation of an inverted pendulum.
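The reconciliation-plus-measurement-test idea can be sketched for the simpler linear steady-state case (the paper itself treats a dynamic inverted-pendulum simulation). The flow network, measurements, and variances below are hypothetical:

```python
import numpy as np

# Hypothetical two-node flow network (not from the paper):
#   node 1: x1 = x2 + x3,   node 2: x3 = x4
A = np.array([[1.0, -1.0, -1.0,  0.0],
              [0.0,  0.0,  1.0, -1.0]])
y = np.array([100.0, 64.0, 36.0, 47.0])   # measurements; x4 carries a gross error
V = np.diag([2.0, 1.5, 1.5, 1.5]) ** 2    # measurement error covariance

# Weighted least-squares reconciliation subject to the balance A x = 0
S = A @ V @ A.T
x_hat = y - V @ A.T @ np.linalg.solve(S, A @ y)

# Measurement test: standardized adjustments; the largest flags the suspect
d = y - x_hat
cov_d = V @ A.T @ np.linalg.inv(S) @ A @ V
z = np.abs(d) / np.sqrt(np.diag(cov_d))
worst = int(z.argmax())                   # index of the probable gross error (x4)
```

The reconciled estimates satisfy both node balances exactly, and the standardized adjustment is largest for the corrupted measurement, which is the essence of the measurement test the paper modifies.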
On the Measurement Errors of the Joss-Waldvogel Disdrometer
NASA Technical Reports Server (NTRS)
Tokay, Ali; Wolff, K. R.; Bashor, Paul; Dursun, O. K.
2003-01-01
The Joss-Waldvogel (JW) disdrometer is considered to be a reference instrument for drop size distribution measurements. It has been widely used in many field campaigns as part of validation efforts for radar rainfall estimation. It has also been incorporated into radar-rain gauge rainfall observation networks at several ground validation sites for NASA's Tropical Rainfall Measuring Mission (TRMM). It is anticipated that the Joss-Waldvogel disdrometer will be one of the key instruments for ground validation for the upcoming Global Precipitation Measurement (GPM) mission. The JW is an impact-type disdrometer and has several shortcomings. One such shortcoming is that it underestimates the number of small drops in heavy rain due to the disdrometer dead time. The detection of smaller drops is also suppressed in the presence of background noise. Further, drops larger than 5.0 to 5.5 mm in diameter cannot be distinguished by the disdrometer. The JW assumes that all raindrops fall at their terminal fall speed. Ignoring the influence of vertical air motion on raindrop fall speed results in errors in determining the raindrop size. Also, the bulk descriptors of rainfall that require the fall speed of the drops will be overestimated or underestimated due to errors in measured size and assumed fall velocity. Long-term observations from a two-dimensional video disdrometer are employed to simulate the JW disdrometer and assess how its shortcomings affect radar rainfall estimation. Data collected from collocated JW disdrometers were also incorporated in this study.
Paulsen, Robert; Gallu, Tommaso; Gilkey, David; Reiser, Raoul; Murgia, Lelia; Rosecrance, John
2015-11-01
The purpose of this study was to characterize the inter-rater reliability of two physical exposure assessment methods of the upper extremity, the Strain Index (SI) and Occupational Repetitive Actions (OCRA) Checklist. These methods are commonly used in occupational health studies and by occupational health practitioners. Seven raters used the SI and OCRA Checklist to assess task-level physical exposures to the upper extremity of workers performing 21 cheese manufacturing tasks. Inter-rater reliability was characterized using a single-measure, agreement-based intraclass correlation coefficient (ICC). Inter-rater reliability of SI assessments was moderate to good (ICC = 0.59, 95% CI: 0.45-0.73), a similar finding to prior studies. Inter-rater reliability of OCRA Checklist assessments was excellent (ICC = 0.80, 95% CI: 0.70-0.89). Task complexity had a small, but non-significant, effect on inter-rater reliability SI and OCRA Checklist scores. Both the SI and OCRA Checklist assessments possess adequate inter-rater reliability for the purposes of occupational health research and practice. The OCRA Checklist inter-rater reliability scores were among the highest reported in the literature for semi-quantitative physical exposure assessment tools of the upper extremity. The OCRA Checklist however, required more training time and time to conduct the risk assessments compared to the SI. PMID:26154218
Exploring the role of first impressions in rater-based assessments.
Wood, Timothy J
2014-08-01
Medical education relies heavily on assessment formats that require raters to assess the competence and skills of learners. Unfortunately, there are often inconsistencies and variability in the scores raters assign. To ensure the scores from these assessment tools have validity, it is important to understand the underlying cognitive processes that raters use when judging the abilities of their learners. The goal of this paper, therefore, is to contribute to a better understanding of the cognitive processes used by raters. Representative findings from the social judgment and decision making, cognitive psychology, and educational measurement literatures will be used to illuminate the underpinnings of these rater-based assessments. Of particular interest is the impact of judgments referred to as first impressions (or thin slices) on rater-based assessments. These are judgments about people made very quickly and based on very little information. A narrative review will provide a synthesis of research in these three literatures and will focus on the underlying cognitive processes, the accuracy, and the impact of first impressions on rater-based assessments. The application of these findings to the types of rater-based assessments used in medical education will then be reviewed. Gaps in understanding will be identified and suggested directions for future research studies will be discussed.
Validation and Error Characterization for the Global Precipitation Measurement
NASA Technical Reports Server (NTRS)
Bidwell, Steven W.; Adams, W. J.; Everett, D. F.; Smith, E. A.; Yuter, S. E.
2003-01-01
The Global Precipitation Measurement (GPM) is an international effort to increase scientific knowledge on the global water cycle with specific goals of improving the understanding and the predictions of climate, weather, and hydrology. These goals will be achieved through several satellites specifically dedicated to GPM along with the integration of numerous meteorological satellite data streams from international and domestic partners. The GPM effort is led by the National Aeronautics and Space Administration (NASA) of the United States and the National Space Development Agency (NASDA) of Japan. In addition to the spaceborne assets, international and domestic partners will provide ground-based resources for validating the satellite observations and retrievals. This paper describes the validation effort of the Global Precipitation Measurement mission to provide quantitative estimates of the errors in the GPM satellite retrievals. The GPM validation approach will build upon the research experience of the Tropical Rainfall Measuring Mission (TRMM) retrieval comparisons and its validation program. The GPM ground validation program will employ instrumentation, physical infrastructure, and research capabilities at Supersites located in important meteorological regimes of the globe. NASA will provide two Supersites, one in a tropical oceanic and the other in a mid-latitude continental regime. GPM international partners will provide Supersites for other important regimes. Those objectives or regimes not addressed by Supersites will be covered through focused field experiments. This paper describes the specific errors that GPM ground validation will address, quantify, and relate to the GPM satellite physical retrievals. GPM will attempt to identify the source of errors within retrievals including those of instrument calibration, retrieval physical assumptions, and algorithm applicability. With the identification of error sources, improvements will be made to the respective calibration
Patient motion tracking in the presence of measurement errors.
Haidegger, Tamás; Benyó, Zoltán; Kazanzides, Peter
2009-01-01
The primary aim of computer-integrated surgical systems is to provide physicians with superior surgical tools for better patient outcomes. Robotic technology is capable of both minimally invasive surgery and microsurgery, offering remarkable advantages for the surgeon and the patient. Current systems allow for sub-millimeter intraoperative spatial positioning; however, certain limitations remain. Measurement noise and unintended changes in the operating room environment can result in major errors. Positioning errors are a significant danger to patients in procedures involving robots and other automated devices. We have developed a new robotic system at the Johns Hopkins University to support cranial drilling in neurosurgery procedures. The robot provides advanced visualization and safety features. The generic algorithm described in this paper allows for automated compensation of patient motion through optical tracking and Kalman filtering. When applied to the neurosurgery setup, preliminary results show that it is possible to identify patient motion within 700 ms and apply the appropriate compensation with an average of 1.24 mm positioning error after 2 s of setup time.
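The motion-compensation idea rests on Kalman filtering of noisy tracker readings. A minimal scalar sketch, with assumed noise parameters rather than the authors' algorithm:

```python
def kalman_1d(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Scalar random-walk Kalman filter: smooths noisy position readings (mm).
    q = process noise variance, r = measurement noise variance (both assumed)."""
    x, p, estimates = x0, p0, []
    for z in measurements:
        p += q                      # predict: random-walk process noise grows p
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update with the innovation (z - x)
        p *= (1 - k)
        estimates.append(x)
    return estimates

# Noisy tracker readings with a ~2 mm patient shift mid-series (illustrative)
readings = [0.1, -0.2, 0.05, 0.1, 2.1, 1.9, 2.0, 2.05]
smoothed = kalman_1d(readings)
```

A persistent jump in the innovation `z - x` is what signals genuine patient motion (as opposed to measurement noise), triggering the compensation step described in the abstract.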
Proportional Hazards Model with Covariate Measurement Error and Instrumental Variables
Song, Xiao; Wang, Ching-Yun
2014-01-01
In biomedical studies, covariates with measurement error may occur in survival data. Existing approaches mostly require certain replications of the error-contaminated covariates, which may not be available in the data. In this paper, we develop a simple nonparametric correction approach for estimation of the regression parameters in the proportional hazards model using a subset of the sample where instrumental variables are observed. The instrumental variables are related to the covariates through a general nonparametric model, and no distributional assumptions are placed on the error or the underlying true covariates. We further propose a novel generalized method of moments nonparametric correction estimator to improve the efficiency over the simple correction approach. The efficiency gain can be substantial when the calibration subsample is small compared to the whole sample. The estimators are shown to be consistent and asymptotically normal. Performance of the estimators is evaluated via simulation studies and by an application to data from an HIV clinical trial. Estimation of the baseline hazard function is not addressed. PMID:25663724
Lidar Uncertainty Measurement Experiment (LUMEX) - Understanding Sampling Errors
NASA Astrophysics Data System (ADS)
Choukulkar, A.; Brewer, W. A.; Banta, R. M.; Hardesty, M.; Pichugina, Y.; Senff, Christoph; Sandberg, S.; Weickmann, A.; Carroll, B.; Delgado, R.; Muschinski, A.
2016-06-01
Coherent Doppler LIDAR (Light Detection and Ranging) has been widely used to provide measurements of several boundary layer parameters such as profiles of wind speed, wind direction, vertical velocity statistics, mixing layer heights, and turbulent kinetic energy (TKE). An important aspect of providing this wide range of meteorological data is to properly characterize the uncertainty associated with these measurements. With the above intent in mind, the Lidar Uncertainty Measurement Experiment (LUMEX) was conducted at Erie, Colorado, during the period June 23rd to July 13th, 2014, with the major goal of characterizing the sampling errors of Doppler lidar measurements through a multi-instrument intercomparison.
This experiment brought together 5 Doppler lidars, both commercial and research grade, for a period of three weeks for a comprehensive intercomparison study. The Doppler lidars were deployed at the Boulder Atmospheric Observatory (BAO) site in Erie, site of a 300 m meteorological tower. This tower was instrumented with six sonic anemometers at levels from 50 m to 300 m with 50 m vertical spacing. A brief overview of the experiment outline and deployment will be presented. Results from the sampling error analysis and its implications on scanning strategy will be discussed.
Propagation of radiosonde pressure sensor errors to ozonesonde measurements
NASA Astrophysics Data System (ADS)
Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.
2013-08-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this, a total of 501 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2006-2013 from both historical and campaign-based intensive stations. Three types of electrochemical concentration cell (ECC) ozonesonde manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) and five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80 and RS92) are analyzed to determine the magnitude of the pressure offset and the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.7 hPa in the free troposphere, with nearly a quarter > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (98% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors in the 7-15 hPa layer (29-32 km), a region critical for detection of long-term O3 trends, can approach greater than ±10% (>25% of launches that reach 30 km exceed this threshold). Comparisons of total column O3 yield average differences of +1.6 DU (-1.1 to +4.9 DU 10th to 90th percentiles) when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of +0.1 DU (-1.1 to +2.2 DU) when the O3 profile is integrated to 10 hPa with subsequent addition of the O3 climatology above 10 hPa. The RS92 radiosondes are clearly distinguishable
Examples of Detecting Measurement Errors with the QCRad VAP
Shi, Yan; Long, Charles N.
2005-07-30
The QCRad VAP is being developed to assess the data quality for the ARM radiation data collected at the Extended and ARCS facilities. In this study, we processed one year of radiation data, chosen at random, for each of the twenty SGP Extended Facilities to aid in determining the user configurable limits for the SGP sites. By examining yearly summary plots of the radiation data and the various test limits, we can show that the QCRad VAP is effective in identifying and detecting many different types of measurement errors. Examples of the analysis results will be shown in this poster presentation.
Examiner error in curriculum-based measurement of oral reading.
Cummings, Kelli D; Biancarosa, Gina; Schaper, Andrew; Reed, Deborah K
2014-08-01
Although curriculum-based measures of oral reading (CBM-R) have strong technical adequacy, there is still reason to believe that student performance may be influenced by factors of the testing situation, such as errors examiners make in administering and scoring the test. This study examined the construct-irrelevant variance introduced by examiners using a cross-classified multilevel model. We sought to determine the extent of variance in student CBM-R scores attributable to examiners and, if present, the extent to which it was moderated by students' grade level and English learner (EL) status. Fit indices indicated that a cross-classified random effects model (CCREM) best fit the data, with measures nested within students, students nested within schools, and examiners crossing schools. Intraclass correlations of the CCREM revealed that roughly 16% of the variance in student CBM-R scores was attributable to examiners. The remaining variance was associated with the measurement level (3.59%), between students (75.23%), and between schools (5.21%). Results were moderated by grade level but not by EL status. The discussion addresses the implications of this error for low-stakes and high-stakes decisions about students, teacher evaluation systems, and hypothesis testing in reading intervention research. PMID:25107409
Measurements of Aperture Averaging on Bit-Error-Rate
NASA Technical Reports Server (NTRS)
Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert
2005-01-01
We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on bit error rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.
A Bayesian Measurement Error Model for Misaligned Radiographic Data
Lennox, Kristin P.; Glascoe, Lee G.
2013-09-06
An understanding of the inherent variability in micro-computed tomography (micro-CT) data is essential to tasks such as statistical process control and the validation of radiographic simulation tools. The data present unique challenges to variability analysis due to the relatively low resolution of radiographs, and also due to minor variations from run to run which can result in misalignment or magnification changes between repeated measurements of a sample. Positioning changes artificially inflate the variability of the data in ways that mask true physical phenomena. We present a novel Bayesian nonparametric regression model that incorporates both additive and multiplicative measurement error in addition to heteroscedasticity to address this problem. We also use this model to assess the effects of sample thickness and sample position on measurement variability for an aluminum specimen. Supplementary materials for this article are available online.
Temperature-measurement errors with capsule-type resistance thermometers
NASA Astrophysics Data System (ADS)
Gaiser, C.; Fellmuth, B.
2013-09-01
Inspired by extensive discussion within the temperature-measurement community of unresolved discrepancies occurring in conjunction with the application of capsule-type resistance thermometers, PTB has performed a detailed theoretical and experimental treatment of this problem. The focus of this work lies on the investigation of errors caused by heat conduction via the measuring electrical leads, which causes a temperature difference between the sensor element and the body whose temperature has to be measured. In analogy to electrical networks, a model connecting thermal resistances and heat flows has been established to describe the thermal conditions within the thermometer. The model leads to the definition of new thermometer parameters, called the thermal resistance and the reduction factor, that have to be determined either by dedicated experiments or by theoretical simulations.
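The electrical-network analogy behind the model can be illustrated with the thermal resistance parameter; the numbers below are assumed for illustration, not taken from the paper:

```python
def sensor_temperature_error(r_thermal, heat_flow):
    """Temperature offset (K) between sensor element and measured body caused
    by heat conducted along the measuring leads: dT = R_th * Q (thermal
    analogue of Ohm's law, V = R * I)."""
    return r_thermal * heat_flow

# Assumed values: thermal resistance 200 K/W, 50 microwatts down the leads
dT = sensor_temperature_error(200.0, 50e-6)   # 0.01 K measurement error
```

Even a microwatt-scale parasitic heat flow thus produces a millikelvin-scale error, which is significant at the accuracy level of resistance thermometry.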
Plain film measurement error in acute displaced midshaft clavicle fractures
Archer, Lori Anne; Hunt, Stephen; Squire, Daniel; Moores, Carl; Stone, Craig; O’Dea, Frank; Furey, Andrew
2016-01-01
Background Clavicle fractures are common and optimal treatment remains controversial. Recent literature suggests operative fixation of acute displaced mid-shaft clavicle fractures (DMCFs) shortened more than 2 cm improves outcomes. We aimed to identify correlation between plain film and computed tomography (CT) measurement of displacement and the inter- and intraobserver reliability of repeated radiographic measurements. Methods We obtained radiographs and CT scans of patients with acute DMCFs. Three orthopedic staff and 3 residents measured radiographic displacement at time zero and 2 weeks later. The CT measurements identified absolute shortening in 3 dimensions (by subtracting the length of the fractured from the intact clavicle). We then compared shortening measured on radiographs and shortening measured in 3 dimensions on CT. Interobserver and intraobserver reliability were calculated. Results We reviewed the fractures of 22 patients. Bland–Altman repeatability coefficient calculations indicated that radiograph and CT measurements of shortening could not be correlated owing to an unacceptable amount of measurement error (6 cm). Interobserver reliability for plain radiograph measurements was excellent (Cronbach α = 0.90). Likewise, intraobserver reliabilities for plain radiograph measurements as calculated with paired t tests indicated excellent correlation (p > 0.05 in all but 1 observer [p = 0.04]). Conclusion To establish shortening as an indication for DMCF fixation, reliable measurement tools are required. The low correlation between plain film and CT measurements we observed suggests further research is necessary to establish what imaging modality reliably predicts shortening. Our results indicate weak correlation between radiograph and CT measurement of acute DMCF shortening. PMID:27438054
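The Bland-Altman analysis used in this study can be sketched as follows; the shortening values are hypothetical, not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement between two measurement methods.
    Returns mean bias, 95% limits of agreement, and repeatability coefficient."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
    rc = 1.96 * sd                               # repeatability coefficient
    return bias, loa, rc

# Hypothetical shortening measurements (cm): radiograph vs. CT for 6 fractures
xray = [2.1, 1.4, 3.0, 2.6, 1.8, 2.4]
ct   = [1.6, 2.2, 2.1, 3.3, 1.2, 2.9]
bias, loa, rc = bland_altman(xray, ct)
```

A repeatability coefficient of 6 cm, as reported above, means two methods can disagree by up to 6 cm on the same fracture, which is far too wide when the surgical threshold itself is 2 cm.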
Agreement between Two Independent Groups of Raters
ERIC Educational Resources Information Center
Vanbelle, Sophie; Albert, Adelin
2009-01-01
We propose a coefficient of agreement to assess the degree of concordance between two independent groups of raters classifying items on a nominal scale. This coefficient, defined on a population-based model, extends the classical Cohen's kappa coefficient for quantifying agreement between two raters. Weighted and intraclass versions of the…
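The classical Cohen's kappa that the proposed coefficient extends corrects observed agreement for agreement expected by chance. A minimal sketch for two individual raters (illustrative only, not the authors' population-based group extension):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters assigning nominal labels to the
    same items: (observed - chance agreement) / (1 - chance agreement)."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / n ** 2          # chance
    return (po - pe) / (1 - pe)
```

Kappa is 1 for perfect agreement and 0 when raters agree no more often than chance.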
Rater Variables Associated with ITER Ratings
ERIC Educational Resources Information Center
Paget, Michael; Wu, Caren; McIlwrick, Joann; Woloschuk, Wayne; Wright, Bruce; McLaughlin, Kevin
2013-01-01
Advocates of holistic assessment consider the ITER a more authentic way to assess performance. But this assessment format is subjective and, therefore, susceptible to rater bias. Here our objective was to study the association between rater variables and ITER ratings. In this observational study our participants were clerks at the University of…
Effects of measurement error on estimating biological half-life
Caudill, S.P.; Pirkle, J.L.; Michalek, J.E.
1992-10-01
Direct computation of the observed biological half-life of a toxic compound in a person can lead to an undefined estimate when subsequent concentration measurements are greater than or equal to previous measurements. The likelihood of such an occurrence depends upon the length of time between measurements and the variance (intra-subject biological and inter-sample analytical) associated with the measurements. If the compound is lipophilic the subject's percentage of body fat at the times of measurement can also affect this likelihood. We present formulas for computing a model-predicted half-life estimate and its variance; and we derive expressions for the effect of sample size, measurement error, time between measurements, and any relevant covariates on the variability in model-predicted half-life estimates. We also use statistical modeling to estimate the probability of obtaining an undefined half-life estimate and to compute the expected number of undefined half-life estimates for a sample from a study population. Finally, we illustrate our methods using data from a study of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) exposure among 36 members of Operation Ranch Hand, the Air Force unit responsible for the aerial spraying of Agent Orange in Vietnam.
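The undefined-estimate problem is easiest to see in the simplest two-point estimator under first-order (exponential) decay, which requires the later concentration to be strictly smaller than the earlier one. A hedged sketch (not the authors' model-predicted estimator):

```python
import math

def half_life(t1, c1, t2, c2):
    """Two-point biological half-life estimate assuming first-order
    decay. Returns None when the estimate is undefined (c2 >= c1),
    as happens when measurement error makes a later concentration
    equal or exceed an earlier one."""
    if c2 >= c1:
        return None
    return (t2 - t1) * math.log(2) / math.log(c1 / c2)
```

For example, a concentration that halves over ten years gives a ten-year half-life, while a later measurement that is higher than the earlier one yields no estimate at all.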
Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements
NASA Technical Reports Server (NTRS)
Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.
2014-01-01
This paper discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 4x2 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from cubesats, unmanned air vehicles (UAV), and commercial aircraft.
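RMS error vector magnitude is computed from received and ideal constellation points; a sketch with hypothetical QPSK values (not the measured antenna data):

```python
import numpy as np

def evm_rms_percent(measured, reference):
    """RMS error vector magnitude as a percentage: the RMS of the
    error vectors normalized by the RMS reference magnitude."""
    measured = np.asarray(measured, complex)
    reference = np.asarray(reference, complex)
    err = np.mean(np.abs(measured - reference) ** 2)
    ref = np.mean(np.abs(reference) ** 2)
    return 100 * np.sqrt(err / ref)

# Ideal unit-power QPSK constellation with a hypothetical uniform
# 5% amplitude error at the receiver
ref = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
rx = ref * 0.95
```

A uniform 5% amplitude compression of every symbol yields an EVM of exactly 5%.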
Optical refractive synchronization: bit error rate analysis and measurement
NASA Astrophysics Data System (ADS)
Palmer, James R.
1999-11-01
The direction of this paper is to describe the analytical tools and measurement techniques used at SilkRoad to evaluate the optical and electrical signals used in Optical Refractive Synchronization for transporting SONET signals across the transmission fiber. Fundamentally, the direction of this paper is to provide an outline of how SilkRoad, Inc., transports a multiplicity of SONET signals across a distance of fiber > 100 Km without amplification or regeneration of the optical signal, i.e., one laser over one fiber. Test and measurement data are presented to reflect how the SilkRoad technique of Optical Refractive Synchronization is employed to provide a zero bit error rate for transmission of multiple OC-12 and OC-48 SONET signals that are sent over a fiber optical cable which is > 100Km. The recovery and transformation modules are described for the modification and transportation of these SONET signals.
Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements
NASA Technical Reports Server (NTRS)
Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.
2014-01-01
This presentation discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 2x4 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from CubeSats, unmanned air vehicles (UAV), and commercial aircraft.
Anderson, K.K.
1994-05-01
Measurement error modeling is a statistical approach to the estimation of unknown model parameters which takes into account the measurement errors in all of the data. Approaches which ignore the measurement errors in so-called independent variables may yield inferior estimates of unknown model parameters. At the same time, experiment-wide variables (such as physical constants) are often treated as known without error, when in fact they were produced from prior experiments. Realistic assessments of the associated uncertainties in the experiment-wide variables can be utilized to improve the estimation of unknown model parameters. A maximum likelihood approach to incorporate measurements of experiment-wide variables and their associated uncertainties is presented here. An iterative algorithm is presented which yields estimates of unknown model parameters and their estimated covariance matrix. Further, the algorithm can be used to assess the sensitivity of the estimates and their estimated covariance matrix to the given experiment-wide variables and their associated uncertainties.
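The report's iterative ML algorithm is not given in the abstract. The flavor of estimation with measurement error in all variables can be illustrated by Deming regression, the closed-form maximum-likelihood straight-line fit when the ratio `lam` of the two error variances is assumed known (a generic sketch, not the report's algorithm):

```python
import numpy as np

def deming(x, y, lam=1.0):
    """ML straight-line fit y = a + b*x when both x and y carry
    measurement error, with known error-variance ratio
    lam = var(err_y) / var(err_x). lam=1 is orthogonal regression."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    b = (syy - lam * sxx
         + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    return b, y.mean() - b * x.mean()
```

Unlike ordinary least squares, which attenuates the slope when x is noisy, this fit treats both axes symmetrically (for lam=1) and recovers an exact linear relation exactly.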
Introducing a new definition of a near fall: intra-rater and inter-rater reliability.
Maidan, I; Freedman, T; Tzemah, R; Giladi, N; Mirelman, A; Hausdorff, J M
2014-01-01
Near falls (NFs) are more frequent than falls, and may occur before falls, potentially predicting fall risk. As such, identification of an NF is important. We aimed to assess intra- and inter-rater reliability of the traditional definition of an NF and to demonstrate the potential utility of a new definition. To this end, 10 older adults, 10 idiopathic elderly fallers, and 10 patients with Parkinson's disease (PD) walked through an obstacle course while wearing a safety harness. All walks were videotaped. Forty-nine video segments were extracted to create 2 clips each of 8.48 min. Four raters scored each event using the traditional definition and, two weeks later, using the new definition. A fifth rater used only the new definition. Intra-rater reliability was determined using Kappa (K) statistics and inter-rater reliability was determined using ICC. Using the traditional definition, three raters had poor intra-rater reliability (K<0.054, p>0.137) and one rater had moderate intra-rater reliability (K=0.624, p<0.001). With the traditional definition, inter-rater reliability between the four raters was moderate (ICC=0.667, p<0.001). In contrast, the new NF definition showed high intra-rater (K>0.601, p<0.001) and excellent inter-rater reliability (ICC=0.815, p<0.001). A priori, it is easy to distinguish falls from usual walking and NFs, but it is more challenging to distinguish NFs from obstacle negotiation and usual walking. Therefore, a more precise definition of NF is required. The results of the present study suggest that the proposed new definition increases intra- and inter-rater reliability, a critical step for using NFs to quantify fall risk.
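Inter-rater ICC values like those reported are commonly the two-way random-effects, absolute-agreement, single-rater form, ICC(2,1); the abstract does not state which variant was used, so the following is a generic sketch:

```python
import numpy as np

def icc_2_1(scores):
    """Two-way random-effects, absolute-agreement, single-rater ICC(2,1).
    scores: n_subjects x k_raters array of ratings."""
    Y = np.asarray(scores, float)
    n, k = Y.shape
    mean = Y.mean()
    rows = Y.mean(axis=1)                                  # subject means
    cols = Y.mean(axis=0)                                  # rater means
    msr = k * np.sum((rows - mean) ** 2) / (n - 1)         # subjects MS
    msc = n * np.sum((cols - mean) ** 2) / (k - 1)         # raters MS
    sse = np.sum((Y - rows[:, None] - cols[None, :] + mean) ** 2)
    mse = sse / ((n - 1) * (k - 1))                        # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Because ICC(2,1) measures absolute agreement, a constant offset between raters lowers it even when their rank orderings agree perfectly.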
Validation of image segmentation by estimating rater bias and variance.
Warfield, Simon K; Zou, Kelly H; Wells, William M
2006-01-01
The accuracy and precision of segmentations of medical images has been difficult to quantify in the absence of a "ground truth" or reference standard segmentation for clinical data. Although physical or digital phantoms can help by providing a reference standard, they do not allow the reproduction of the full range of imaging and anatomical characteristics observed in clinical data. An alternative assessment approach is to compare to segmentations generated by domain experts. Segmentations may be generated by raters who are trained experts or by automated image analysis algorithms. Typically these segmentations differ due to intra-rater and inter-rater variability. The most appropriate way to compare such segmentations has been unclear. We present here a new algorithm to enable the estimation of performance characteristics, and a true labeling, from observations of segmentations of imaging data where segmentation labels may be ordered or continuous measures. This approach may be used with, amongst others, surface, distance transform or level set representations of segmentations, and can be used to assess whether or not a rater consistently over-estimates or under-estimates the position of a boundary. PMID:17354851
Validation of image segmentation by estimating rater bias and variance.
Warfield, Simon K; Zou, Kelly H; Wells, William M
2008-07-13
The accuracy and precision of segmentations of medical images has been difficult to quantify in the absence of a 'ground truth' or reference standard segmentation for clinical data. Although physical or digital phantoms can help by providing a reference standard, they do not allow the reproduction of the full range of imaging and anatomical characteristics observed in clinical data. An alternative assessment approach is to compare with segmentations generated by domain experts. Segmentations may be generated by raters who are trained experts or by automated image analysis algorithms. Typically, these segmentations differ due to intra-rater and inter-rater variability. The most appropriate way to compare such segmentations has been unclear. We present here a new algorithm to enable the estimation of performance characteristics, and a true labelling, from observations of segmentations of imaging data where segmentation labels may be ordered or continuous measures. This approach may be used with, among others, surface, distance transform or level-set representations of segmentations, and can be used to assess whether or not a rater consistently overestimates or underestimates the position of a boundary. PMID:18407896
On modeling animal movements using Brownian motion with measurement error.
Pozdnyakov, Vladimir; Meyer, Thomas; Wang, Yu-Bo; Yan, Jun
2014-02-01
Modeling animal movements with Brownian motion (or more generally by a Gaussian process) has a long tradition in ecological studies. The recent Brownian bridge movement model (BBMM), which incorporates measurement errors, has been quickly adopted by ecologists because of its simplicity and tractability. We discuss some nontrivial properties of the discrete-time stochastic process that results from observing a Brownian motion with added normal noise at discrete times. In particular, we demonstrate that the observed sequence of random variables is not Markov. Consequently the expected occupation time between two successively observed locations does not depend on just those two observations; the whole path must be taken into account. Nonetheless, the exact likelihood function of the observed time series remains tractable; it requires only sparse matrix computations. The likelihood-based estimation procedure is described in detail and compared to the BBMM estimation.
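The tractable exact likelihood the authors describe follows from a standard fact: Brownian motion with diffusion variance sigma2, started at a known position x0 and observed at times t_i with independent N(0, tau2) noise, is jointly Gaussian with covariance sigma2*min(t_i, t_j) + tau2 on the diagonal. A dense-matrix sketch of that likelihood (the paper exploits sparse-matrix structure; this illustration does not):

```python
import numpy as np

def bm_noise_loglik(times, obs, sigma2, tau2, x0=0.0):
    """Exact Gaussian log-likelihood of Brownian motion (diffusion
    variance sigma2, known start x0 at time 0) observed at `times`
    with additive N(0, tau2) measurement noise."""
    t = np.asarray(times, float)
    z = np.asarray(obs, float) - x0
    cov = sigma2 * np.minimum.outer(t, t) + tau2 * np.eye(len(t))
    sign, logdet = np.linalg.slogdet(cov)
    quad = z @ np.linalg.solve(cov, z)
    return -0.5 * (len(t) * np.log(2 * np.pi) + logdet + quad)
```

The covariance couples every pair of observations, which is another way of seeing the paper's point that the observed (noisy) sequence is not Markov.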
Horizon sensor errors calculated by computer models compared with errors measured in orbit
NASA Technical Reports Server (NTRS)
Ward, K. A.; Hogan, R.; Andary, J.
1982-01-01
Using a computer program to model the earth's horizon and to duplicate the signal processing procedure employed by the ESA (Earth Sensor Assembly), errors due to radiance variation have been computed for a particular time of the year. Errors actually occurring in flight at the same time of year are inferred from integrated rate gyro data for a satellite of the TIROS series of NASA weather satellites (NOAA-A). The predicted performance is compared with actual flight history.
Technical Note: Simulation of 4DCT tumor motion measurement errors
Dou, Tai H.; Thomas, David H.; O’Connell, Dylan; Bradley, Jeffrey D.; Lamb, James M.; Low, Daniel A.
2015-01-01
Purpose: To determine if and by how much the commercial 4DCT protocols under- and overestimate tumor breathing motion. Methods: 1D simulations were conducted that modeled a 16-slice CT scanner and tumors moving proportionally to breathing amplitude. External breathing surrogate traces of at least 5-min duration for 50 patients were used. Breathing trace amplitudes were converted to motion by relating the nominal tumor motion to the 90th percentile breathing amplitude, reflecting motion defined by the more recent 5DCT approach. Based on clinical low-pitch helical CT acquisition, the CT detector moved according to its velocity while the tumor moved according to the breathing trace. When the CT scanner overlapped the tumor, the overlapping slices were identified as having imaged the tumor. This process was repeated starting at each successive 0.1 s time bin in the breathing trace until there was insufficient breathing trace to complete the simulation. The tumor size was subtracted from the distance between the most superior and inferior tumor positions to determine the measured tumor motion for that specific simulation. The effect of the scanning parameter variation was evaluated using two commercial 4DCT protocols with different pitch values. Because clinical 4DCT scan sessions would yield a single tumor motion displacement measurement for each patient, errors in the tumor motion measurement were considered systematic. The means of the largest 5% and smallest 5% of the measured motions were selected to identify over- and underestimated motion amplitudes, respectively. The process was repeated for tumor motions of 1–4 cm in 1 cm increments and for tumor sizes of 1–4 cm in 1 cm increments. Results: In the examined patient cohort, simulation using pitch of 0.06 showed that 30% of the patients exhibited a 5% chance of mean breathing amplitude overestimations of 47%, while 30% showed a 5% chance of mean breathing amplitude underestimations of 36%; with a separate simulation
Rater Wealth Predicts Perceptions of Outgroup Competence
Chan, Wayne; McCrae, Robert R.; Rogers, Darrin L.; Weimer, Amy A.; Greenberg, David M.; Terracciano, Antonio
2011-01-01
National income has a pervasive influence on the perception of ingroup stereotypes, with high status and wealthy targets perceived as more competent. In two studies we investigated the degree to which economic wealth of raters related to perceptions of outgroup competence. Raters’ economic wealth predicted trait ratings when 1) raters in 48 other cultures rated Americans’ competence and 2) Mexican Americans rated Anglo Americans’ competence. Rater wealth also predicted ratings of interpersonal warmth on the culture level. In conclusion, raters’ economic wealth, either nationally or individually, is significantly associated with perception of outgroup members, supporting the notion that ingroup conditions or stereotypes function as frames of reference in evaluating outgroup traits. PMID:22379232
Is the Parkinson Anxiety Scale comparable across raters?
Forjaz, Maria João; Ayala, Alba; Martinez-Martin, Pablo; Dujardin, Kathy; Pontone, Gregory M; Starkstein, Sergio E; Weintraub, Daniel; Leentjens, Albert F G
2015-04-01
The Parkinson Anxiety Scale is a new scale developed to measure anxiety severity in Parkinson's disease specifically. It consists of three dimensions: persistent anxiety, episodic anxiety, and avoidance behavior. This study aimed to assess the measurement properties of the scale while controlling for the rater (self- vs. clinician-rated) effect. The Parkinson Anxiety Scale was administered to a cross-sectional multicenter international sample of 362 Parkinson's disease patients. Both patients and clinicians rated the patient's anxiety independently. A many-facet Rasch model design was applied to estimate and remove the rater effect. The following measurement properties were assessed: fit to the Rasch model, unidimensionality, reliability, differential item functioning, item local independency, interrater reliability (self or clinician), and scale targeting. In addition, test-retest stability, construct validity, precision, and diagnostic properties of the Parkinson Anxiety Scale were also analyzed. A good fit to the Rasch model was obtained for Parkinson Anxiety Scale dimensions A and B, after the removal of one item and rescoring of the response scale for certain items, whereas dimension C showed marginal fit. Self versus clinician rating differences were of small magnitude, with patients reporting higher anxiety levels than clinicians. The linear measure for Parkinson Anxiety Scale dimensions A and B showed good convergent construct with other anxiety measures and good diagnostic properties. Parkinson Anxiety Scale modified dimensions A and B provide valid and reliable measures of anxiety in Parkinson's disease that are comparable across raters. Further studies are needed with dimension C.
Exploring Measurement Error with Cookies: A Real and Virtual Approach via Interactive Excel
ERIC Educational Resources Information Center
Sinex, Scott A; Gage, Barbara A.; Beck, Peggy J.
2007-01-01
A simple, guided-inquiry investigation using stacked sandwich cookies is employed to develop a simple linear mathematical model and to explore measurement error by incorporating errors as part of the investigation. Both random and systematic errors are presented. The model and errors are then investigated further by engaging with an interactive…
Three-way partitioning of sea surface temperature measurement error
NASA Technical Reports Server (NTRS)
Chelton, D.
1983-01-01
Given any three 2-degree binned anomaly sea surface temperature (SST) data sets from three different sensors, an estimate of the mean square error of each sensor is made. This formalism is applied to every possible triplet of sensors, and a separate table of error estimates is then constructed for each sensor.
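A standard way to partition error among three sensors with mutually independent errors is the "three-cornered hat": the variance of each pairwise difference equals the sum of the two sensors' error variances, giving three linear equations that solve directly. A generic sketch (the paper's exact formalism may differ in detail):

```python
def three_cornered_hat(d12, d13, d23):
    """Per-sensor error variances for three sensors with independent
    errors, given the variances d_ij of the pairwise difference series.
    d_ij = var(sensor_i - sensor_j) = var_i + var_j."""
    v1 = (d12 + d13 - d23) / 2
    v2 = (d12 + d23 - d13) / 2
    v3 = (d13 + d23 - d12) / 2
    return v1, v2, v3
```

For instance, error variances of 1, 4, and 9 produce pairwise-difference variances of 5, 10, and 13, from which the method recovers the original three values exactly.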
On the reliability and standard errors of measurement of contrast measures from the D-KEFS.
Crawford, John R; Sutherland, David; Garthwaite, Paul H
2008-11-01
A formula for the reliability of difference scores was used to estimate the reliability of Delis-Kaplan Executive Function System (D-KEFS; Delis et al., 2001) contrast measures from the reliabilities and correlations of their components. In turn these reliabilities were used to calculate standard errors of measurement. The majority of contrast measures had low reliabilities: of the 51 reliability coefficients calculated in the present study, none exceeded 0.7 and hence all failed to meet any of the criteria for acceptable reliability proposed by various experts in psychological measurement. The mean reliability of the contrast scores was 0.27, the median reliability was 0.30. The standard errors of measurement were large and, in many cases, equaled or were only marginally smaller than the contrast scores' standard deviations. The results suggest that, at present, D-KEFS contrast measures should not be used in neuropsychological decision making.
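For standardized components, the reliability of a difference score has a simple closed form, and the standard error of measurement follows from the reliability. A sketch of these standard psychometric formulas (the illustrative values below are not D-KEFS statistics):

```python
import math

def difference_score_reliability(rxx, ryy, rxy):
    """Reliability of a difference (contrast) score between two
    standardized components with reliabilities rxx, ryy and
    intercorrelation rxy. High component intercorrelation drives
    the difference-score reliability down."""
    return (0.5 * (rxx + ryy) - rxy) / (1 - rxy)

def sem(sd, reliability):
    """Standard error of measurement for a score with the given
    standard deviation and reliability."""
    return sd * math.sqrt(1 - reliability)
```

For example, two components each with reliability 0.80 that correlate 0.60 yield a difference score with reliability only 0.50, illustrating the low values the authors report.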
Large-scale spatial angle measurement and the pointing error analysis
NASA Astrophysics Data System (ADS)
Xiao, Wen-jian; Chen, Zhi-bin; Ma, Dong-xi; Zhang, Yong; Liu, Xian-hong; Qin, Meng-ze
2016-05-01
A large-scale spatial angle measurement method is proposed based on an inertial reference. A common measurement reference is established in inertial space, and the spatial vector coordinates of each measured axis in inertial space are measured using autocollimation tracking and inertial measurement technology. From the spatial coordinates of each test vector axis, the measurement of large-scale spatial angles is easily realized. The pointing error of the tracking device based on the two mirrors in the measurement system is studied, and the influence of different installation errors on the pointing error is analyzed. This research can lay a foundation for error allocation, calibration and compensation for the measurement system.
NASA Astrophysics Data System (ADS)
Wilson, M. D.; Durand, M.; Jung, H. C.; Alsdorf, D.
2014-08-01
The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water surface elevations. In this paper, we aimed to (i) characterize and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a "virtual mission" for a 300 km reach of the central Amazon (Solimões) River at its confluence with the Purus River, using a hydraulic model to provide water surface elevations according to SWOT spatio-temporal sampling, to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water surface elevation measurements for the Amazon mainstem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths of greater than 4 km for the Solimões and 7.5 km for the Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-section averaging and 20 km reach lengths, results show Nash-Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6 and 19.1% average overall error in discharge, respectively.
Age Matters, and so May Raters: Rater Differences in the Assessment of Foreign Accents
ERIC Educational Resources Information Center
Huang, Becky H.; Jun, Sun-Ah
2015-01-01
Research on the age of learning effect on second language learners' foreign accents utilizes human judgments to determine speech production outcomes. Inferences drawn from analyses of these ratings are then used to inform theories. The present study focuses on rater differences in the age of learning effect research. Three groups of raters who…
Bogovic, John A.; Jedynak, Bruno; Rigg, Rachel; Du, Annie; Landman, Bennett A.; Prince, Jerry L.; Ying, Sarah H.
2012-01-01
Volumetric measurements obtained from image parcellation have been instrumental in uncovering structure-function relationships. However, anatomical study of the cerebellum is a challenging task. Because of its complex structure, expert human raters have been necessary for reliable and accurate segmentation and parcellation. Such delineations are time-consuming and prohibitively expensive for large studies. Therefore, we present a three-part cerebellar parcellation system that utilizes multiple inexpert human raters that can efficiently and expediently produce results nearly on par with those of experts. This system includes a hierarchical delineation protocol, a rapid verification and evaluation process, and statistical fusion of the inexpert rater parcellations. The quality of the raters’ and fused parcellations was established by examining their Dice similarity coefficient, region of interest (ROI) volumes, and the intraclass correlation coefficient of region volume. The intra-rater ICC was found to be 0.93 at the finest level of parcellation. PMID:22975160
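The Dice similarity coefficient used to compare rater parcellations is simple to state: twice the overlap divided by the total size of the two label sets. A generic sketch over voxel index sets (not the authors' pipeline):

```python
def dice(a, b):
    """Dice similarity coefficient between two label sets, e.g. the
    voxel indices assigned to an ROI by two raters. 1.0 means
    identical sets; 0.0 means no overlap."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))
```

Two four-voxel regions sharing two voxels score 0.5; identical regions score 1.0.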
Du, Zhengchun; Wu, Zhaoyong; Yang, Jianguo
2016-05-19
The use of three-dimensional (3D) data in the industrial measurement field is becoming increasingly popular because of the rapid development of laser scanning techniques based on the time-of-flight principle. However, the accuracy and uncertainty of these types of measurement methods are seldom investigated. In this study, a mathematical uncertainty evaluation model for the diameter measurement of standard cylindroid components has been proposed and applied to a 3D laser radar measurement system (LRMS). First, a single-point error ellipsoid analysis for the LRMS was established. An error ellipsoid model and algorithm for diameter measurement of cylindroid components was then proposed based on the single-point error ellipsoid. Finally, four experiments were conducted using the LRMS to measure the diameter of a standard cylinder in the laboratory. The experimental results of the uncertainty evaluation consistently matched well with the predictions. The proposed uncertainty evaluation model for cylindrical diameters can provide a reliable method for actual measurements and support further accuracy improvement of the LRMS.
Tilt error in cryospheric surface radiation measurements at high latitudes: a model study
NASA Astrophysics Data System (ADS)
Bogren, W. S.; Burkhart, J. F.; Kylling, A.
2015-08-01
We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in-situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response foreoptic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can respectively introduce up to 2.6, 7.7, and 12.8 % error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo.
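For the direct beam alone, the worst case (sensor tilted toward the sun within the solar plane) has a closed cosine form. The abstract's percentages are smaller than this bound because total irradiance also includes a diffuse component that is nearly insensitive to small tilts; the sketch below is an illustration, not the authors' spectral radiative-transfer simulation:

```python
import math

def direct_beam_tilt_error(sza_deg, tilt_deg):
    """Worst-case relative error (%) in measured direct-beam irradiance
    for a cosine-response sensor tilted toward the sun by tilt_deg,
    at solar zenith angle sza_deg. The tilted sensor sees the beam at
    incidence (sza - tilt) instead of sza."""
    sza = math.radians(sza_deg)
    tilt = math.radians(tilt_deg)
    return 100 * (math.cos(sza - tilt) / math.cos(sza) - 1)
```

At a solar zenith angle of 60 degrees, even a 5-degree tilt toward the sun inflates the measured direct beam by roughly 15%, which is why high-latitude (large-SZA) albedo measurements are so sensitive to sensor leveling.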
The impact of response measurement error on the analysis of designed experiments
Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee
2015-12-21
This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.
Space charge enhanced, plasma gradient induced error in satellite electric field measurements
NASA Technical Reports Server (NTRS)
Diebold, D. A.; Hershkowitz, N.; Dekock, J. R.; Intrator, T. P.; Lee, S-G.; Hsieh, M-K.
1994-01-01
In magnetospheric plasmas it is possible for plasma gradients to cause error in electric field measurements made by satellite double probes. The space charge enhanced plasma gradient induced error is discussed in general terms, the results of a laboratory experiment designed to illustrate this error are presented, and a simple expression that quantifies this error in a form that is readily applicable to satellite data is derived. The simple expression indicates that for a given probe bias current there is less error for cylindrical probes than for spherical probes. The expression also suggests that for Viking data the error is negligible.
Total error vs. measurement uncertainty: revolution or evolution?
Oosterhuis, Wytze P; Theodorsson, Elvar
2016-02-01
The first strategic EFLM conference "Defining analytical performance goals, 15 years after the Stockholm Conference" was held in the autumn of 2014 in Milan. It maintained the Stockholm 1999 hierarchy of performance goals but rearranged them and established five task and finish groups to work on topics related to analytical performance goals including one on the "total error" theory. Jim Westgard recently wrote a comprehensive overview of performance goals and of the total error theory critical of the results and intentions of the Milan 2014 conference. The "total error" theory originated by Jim Westgard and co-workers has a dominating influence on the theory and practice of clinical chemistry but is not accepted in other fields of metrology. The generally accepted uncertainty theory, however, suffers from complex mathematics and conceived impracticability in clinical chemistry. The pros and cons of the total error theory need to be debated, making way for methods that can incorporate all relevant causes of uncertainty when making medical diagnoses and monitoring treatment effects. This development should preferably proceed not as a revolution but as an evolution.
ERIC Educational Resources Information Center
Shear, Benjamin R.; Zumbo, Bruno D.
2013-01-01
Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
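The inflation mechanism the abstract describes is easy to reproduce: when a confounder is adjusted for only through an error-prone proxy, the adjustment is incomplete and a truly null predictor absorbs the residual confounding. A minimal simulation sketch (illustrative only; the sample sizes, effect sizes, and variable names are assumptions, not taken from the article):

```python
import math

import numpy as np

rng = np.random.default_rng(0)

def rejection_rate(error_sd, n=200, reps=1000, alpha=0.05):
    """Monte Carlo Type I error rate for the coefficient on x1 (truly zero)
    when the confounder x2 is observed only through an error-prone proxy."""
    rejections = 0
    for _ in range(reps):
        x2 = rng.normal(size=n)                        # true confounder
        x1 = 0.7 * x2 + rng.normal(scale=0.5, size=n)  # null predictor, correlated with x2
        y = x2 + rng.normal(size=n)                    # outcome depends on x2 only
        w2 = x2 + rng.normal(scale=error_sd, size=n)   # proxy with classical error
        X = np.column_stack([np.ones(n), x1, w2])
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ beta
        se = math.sqrt(resid @ resid / (n - 3) * np.linalg.inv(X.T @ X)[1, 1])
        t = beta[1] / se
        p = math.erfc(abs(t) / math.sqrt(2))           # normal approximation, fine for n = 200
        rejections += p < alpha
    return rejections / reps
```

With `error_sd = 0` the rejection rate stays near the nominal 0.05; with `error_sd = 1` the proxy no longer fully adjusts for x2 and the rate on the null predictor climbs far above nominal.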
NASA Astrophysics Data System (ADS)
Wilson, M. D.; Durand, M.; Jung, H. C.; Alsdorf, D.
2015-04-01
The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface-water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water-surface elevations. In this paper, we aimed to (i) characterise and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water-surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a virtual mission for a ~260 km reach of the central Amazon (Solimões) River, using a hydraulic model to provide water-surface elevations according to SWOT spatio-temporal sampling to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water-surface elevation measurements for the Amazon main stem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths greater than 4 km for the Solimões and 7.5 km for Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-sectional averaging and 20 km reach lengths, results show Nash-Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6 and 19.1 % average overall error in discharge, respectively. We extend the results to other rivers worldwide and infer that SWOT-derived discharge estimates may be more accurate for rivers with larger channel widths (permitting a greater level of cross
Measurement accuracy of articulated arm CMMs with circular grating eccentricity errors
NASA Astrophysics Data System (ADS)
Zheng, Dateng; Yin, Sanfeng; Luo, Zhiyang; Zhang, Jing; Zhou, Taiping
2016-11-01
The 6 circular grating eccentricity errors model attempts to improve the measurement accuracy of an articulated arm coordinate measuring machine (AACMM) without increasing the corresponding hardware cost. We analyzed the AACMM’s circular grating eccentricity and obtained the 6 joints’ circular grating eccentricity error model parameters by conducting circular grating eccentricity error experiments. We completed the calibration operations for the measurement models by using home-made standard bar components. Our results show that the measurement errors from the AACMM’s measurement model without and with circular grating eccentricity errors are 0.0834 mm and 0.0462 mm, respectively. Significantly, we determined that measurement accuracy increased by about 44.6% when the circular grating eccentricity errors were corrected. This study is significant because it promotes wider applications of AACMMs both in theory and in practice.
La Haye, R.J.
1997-02-01
The existing theoretical and experimental basis for predicting the levels of resonant static error field at different components m,n that stop plasma rotation and produce a locked mode is reviewed. For ITER ohmic discharges, the slow rotation of the very large plasma is predicted to incur a locked mode (and subsequent disastrous large magnetic islands) at a simultaneous weighted error field (Σ_{m=1}^{3} w_{m1} B²_{rm1})^{1/2}/B_T ≥ 1.9 × 10⁻⁵. Here the weights w_{m1} are empirically determined from measurements on DIII-D to be w_{11} = 0.2, w_{21} = 1.0, and w_{31} = 0.8, and point out the relative importance of the different error field components. This could be greatly obviated by application of counter-injected neutral beams (which add fluid flow to the natural ohmic electron drift). The addition of 5 MW of 1 MeV beams at 45° injection would increase the error field limit by a factor of 5; 13 MW would produce a factor of 10 improvement. Co-injection beams would also be effective, but not as much as counter-injection, as the co direction opposes the intrinsic rotation while the counter direction adds to it. A means for measuring individual PF and TF coil total axisymmetric field error to less than 1 in 10,000 is described. This would allow alignment of coils to mm accuracy and, with correction coils, make possible the very low levels of error field needed.
Examining rating scales using Rasch and Mokken models for rater-mediated assessments.
Wind, Stephanie A
2014-01-01
A variety of methods for evaluating the psychometric quality of rater-mediated assessments have been proposed, including rater effects based on latent trait models (e.g., Engelhard, 2013; Wolfe, 2009). Although information about rater effects contributes to the interpretation and use of rater-assigned scores, it is also important to consider ratings in terms of the structure of the rating scale on which scores are assigned. Further, concern with the validity of rater-assigned scores necessitates investigation of these quality control indices within student subgroups, such as gender, language, and race/ethnicity groups. Using a set of guidelines for evaluating the interpretation and use of rating scales adapted from Linacre (1999, 2004), this study demonstrates methods that can be used to examine rating scale functioning within and across student subgroups with indicators from Rasch measurement theory (Rasch, 1960) and Mokken scale analysis (Mokken, 1971). Specifically, this study illustrates indices of rating scale effectiveness based on Rasch models and models adapted from Mokken scaling, and considers whether the two approaches to evaluating the interpretation and use of rating scales lead to comparable conclusions within the context of a large-scale rater-mediated writing assessment. Major findings suggest that indices of rating scale effectiveness based on a parametric and nonparametric approach provide related, but slightly different, information about the structure of rating scales. Implications for research, theory, and practice are discussed. PMID:24950531
Machining Error Compensation Based on 3D Surface Model Modified by Measured Accuracy
NASA Astrophysics Data System (ADS)
Abe, Go; Aritoshi, Masatoshi; Tomita, Tomoki; Shirase, Keiichi
Recently, demand for precision machining of dies and molds with complex shapes has been increasing. Although CNC machine tools are widely used for such machining, machining error compensation is still required to meet the growing demand for machining accuracy. However, machining error compensation is an operation that takes a great deal of skill, time, and cost. This paper deals with a new method of machining error compensation: the 3D surface data of the machined part are modified according to the machining error measured by a CMM (coordinate measuring machine), and a compensated NC program is generated from the modified 3D surface data.
Exposure measurement error in time-series studies of air pollution: concepts and consequences.
Zeger, S L; Thomas, D; Dominici, F; Samet, J M; Schwartz, J; Dockery, D; Cohen, A
2000-01-01
Misclassification of exposure is a well-recognized inherent limitation of epidemiologic studies of disease and the environment. For many agents of interest, exposures take place over time and in multiple locations; accurately estimating the relevant exposures for an individual participant in epidemiologic studies is often daunting, particularly within the limits set by feasibility, participant burden, and cost. Researchers have taken steps to deal with the consequences of measurement error by limiting the degree of error through a study's design, estimating the degree of error using a nested validation study, and by adjusting for measurement error in statistical analyses. In this paper, we address measurement error in observational studies of air pollution and health. Because measurement error may have substantial implications for interpreting epidemiologic studies on air pollution, particularly the time-series analyses, we developed a systematic conceptual formulation of the problem of measurement error in epidemiologic studies of air pollution and then considered the consequences within this formulation. When possible, we used available relevant data to make simple estimates of measurement error effects. This paper provides an overview of measurement errors in linear regression, distinguishing two extremes of a continuum (Berkson versus classical type errors), and the univariate from the multivariate predictor case. We then propose one conceptual framework for the evaluation of measurement errors in the log-linear regression used for time-series studies of particulate air pollution and mortality and identify three main components of error. We present new simple analyses of data on exposures of particulate matter < 10 microm in aerodynamic diameter from the Particle Total Exposure Assessment Methodology Study. Finally, we summarize open questions regarding measurement error and suggest the kind of additional data necessary to address them.
NASA Astrophysics Data System (ADS)
Potter, Kenneth W.; Walker, John F.
1981-10-01
Above a given threshold an indirect method is usually used to estimate flood discharges. This results in a significant increase in the standard deviation of the measurement error, a phenomenon which the authors have termed discontinuous measurement error. An error model reveals that the coefficients of variation, skewness, and kurtosis of the distribution of the measured flood discharges are significantly higher than the corresponding coefficients of the parent flood distribution. This bias has important implications with regard to flood frequency analysis.
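The effect described above can be illustrated with a toy simulation (hypothetical distributions and error levels, not the authors' model): large discharges measured indirectly carry much larger multiplicative error, which inflates the spread of the measured sample relative to the parent distribution.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_cv(n=100000, threshold=2.0, sd_direct=0.05, sd_indirect=0.30):
    """Coefficient of variation of true vs. measured flood discharges when
    discharges above `threshold` receive a much noisier (indirect) measurement."""
    true = rng.lognormal(mean=0.0, sigma=0.5, size=n)
    sd = np.where(true > threshold, sd_indirect, sd_direct)   # discontinuous error sd
    measured = true * np.exp(rng.normal(scale=sd))            # multiplicative error
    cv = lambda x: x.std() / x.mean()
    return cv(true), cv(measured)
```

The measured sample's coefficient of variation exceeds that of the true discharges, mirroring the bias the abstract reports for flood frequency analysis.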
Error analysis in the measurement of average power with application to switching controllers
NASA Technical Reports Server (NTRS)
Maisel, J. E.
1980-01-01
Power measurement errors due to the bandwidth of a power meter and the sampling of the input voltage and current of a power meter were investigated assuming sinusoidal excitation and periodic signals generated by a model of a simple chopper system. Errors incurred in measuring power using a microcomputer with limited data storage were also considered. The behavior of the power measurement error due to the frequency responses of first order transfer functions between the input sinusoidal voltage, input sinusoidal current, and the signal multiplier was studied. Results indicate that this power measurement error can be minimized if the frequency responses of the first order transfer functions are identical. The power error analysis was extended to include the power measurement error for a model of a simple chopper system with a power source and an ideal shunt motor acting as an electrical load for the chopper. The behavior of the power measurement error was determined as a function of the chopper's duty cycle and back EMF of the shunt motor. Results indicate that the error is large when the duty cycle or back EMF is small. Theoretical and experimental results indicate that the power measurement error due to sampling of sinusoidal voltages and currents becomes excessively large when the number of observation periods approaches one-half the size of the microcomputer data memory allocated to the storage of either the input sinusoidal voltage or current.
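The sampling-related error discussed above has a simple core: a digital power meter averages v(t)·i(t) over a finite window, and the estimate is exact only when the window spans an integer number of periods. A sketch under idealized assumptions (pure sinusoids and uniform sampling; not the paper's chopper model):

```python
import math

def measured_power(v0, i0, phi, freq, n_samples, duration):
    """Average power estimated by sampling v(t)*i(t) at n_samples uniform
    points over `duration` seconds (a crude digital power meter)."""
    dt = duration / n_samples
    acc = 0.0
    for k in range(n_samples):
        t = k * dt
        v = v0 * math.sin(2 * math.pi * freq * t)
        i = i0 * math.sin(2 * math.pi * freq * t - phi)
        acc += v * i
    return acc / n_samples

def true_power(v0, i0, phi):
    """Exact average power of the sinusoidal pair: (V0*I0/2)*cos(phi)."""
    return 0.5 * v0 * i0 * math.cos(phi)
```

Averaging over exactly 10 periods recovers the true value to floating-point precision, while a window of 10.25 periods leaves a clearly visible residual error, illustrating why the observation window matters.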
Efron-type measures of prediction error for survival analysis.
Gerds, Thomas A; Schumacher, Martin
2007-12-01
Estimates of the prediction error play an important role in the development of statistical methods and models, and in their applications. We adapt the resampling tools of Efron and Tibshirani (1997, Journal of the American Statistical Association 92, 548-560) to survival analysis with right-censored event times. We find that flexible rules, like artificial neural nets, classification and regression trees, or regression splines can be assessed, and compared to less flexible rules in the same data where they are developed. The methods are illustrated with data from a breast cancer trial.
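In the uncensored classification case, the Efron-Tibshirani resampling idea reduces to blending the optimistic apparent error with the pessimistic leave-one-out bootstrap error. A sketch of the simpler .632 variant with a nearest-class-mean rule (an illustrative classifier; the article's contribution, the extension to right-censored survival times, is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_mean_fit(X, y):
    """Class means of a nearest-class-mean classifier."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_mean_predict(model, X):
    """Assign each row of X to the class with the closest mean."""
    classes = sorted(model)
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

def err632(X, y, B=100):
    """Efron-Tibshirani .632 estimate of prediction error:
    0.368 * apparent error + 0.632 * leave-one-out bootstrap error."""
    n = len(y)
    apparent = np.mean(nearest_mean_predict(nearest_mean_fit(X, y), X) != y)
    oob_err, oob_cnt = np.zeros(n), np.zeros(n)
    for _ in range(B):
        idx = rng.integers(0, n, n)                  # bootstrap sample
        out = np.setdiff1d(np.arange(n), idx)        # out-of-bag observations
        if out.size == 0:
            continue
        model = nearest_mean_fit(X[idx], y[idx])
        oob_err[out] += nearest_mean_predict(model, X[out]) != y[out]
        oob_cnt[out] += 1
    boot1 = np.mean(oob_err[oob_cnt > 0] / oob_cnt[oob_cnt > 0])
    return 0.368 * apparent + 0.632 * boot1
```

The point of the blend is that the apparent error is biased downward (the rule is scored on its own training data) while the out-of-bag error is biased upward; the fixed weights trade the two off.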
[A positioning error measurement method in radiotherapy based on 3D visualization].
An, Ji-Ye; Li, Yue-Xi; Lu, Xu-Dong; Duan, Hui-Long
2007-09-01
The positioning error in radiotherapy is one of the most important factors influencing the precision of tumor localization. Based on CT-on-rails technology, this paper describes research on measuring the positioning error in radiotherapy by comparing the planning CT images with the treatment CT images using 3-dimensional (3D) methods, which can help doctors measure positioning errors more accurately than 2D methods. The approach also supports powerful 3D interaction, such as drag-and-drop, rotation, and object picking, so that doctors can visualize and measure positioning errors intuitively.
On the optical measurement of corneal thickness. II. The measuring conditions and sources of error.
Olsen, T; Nielsen, C B; Ehlers, N
1980-12-01
The optical measurement of corneal thickness based on oblique viewing of the optical section of the cornea is complicated by the finite width of the incident slit beam. In this report the theoretical and practical aspects of the effect of the slit width on the thickness reading are analysed. In practice, it was not possible to make slit-width independent thickness readings which were reproducible from one observer to another. In addition, the observed slit-width error was found to vary from one patient to another. The lack of reproducible estimate of the corneal thickness is attributed to difficulties associated with an exact definition of the edges of the visible bands of the optical section, which are determined by biological properties of the cornea as well as perceptive properties of the observer. Although inter-observer errors up to 0.02 mm were found, the intra-observer error amounted to only 0.005-0.006 mm (SD) between consecutive readings. Presumably this high intra-observer reproducibility is the result of the auxiliary pin-lights used. Changes in corneal thickness, measured by the same observer, can therefore be determined with great accuracy.
Systematic errors in cosmic microwave background polarization measurements
NASA Astrophysics Data System (ADS)
O'Dea, Daniel; Challinor, Anthony; Johnson, Bradley R.
2007-04-01
We investigate the impact of instrumental systematic errors on the potential of cosmic microwave background polarization experiments targeting primordial B-modes. To do so, we introduce spin-weighted Müller matrix-valued fields describing the linear response of the imperfect optical system and receiver, and give a careful discussion of the behaviour of the induced systematic effects under rotation of the instrument. We give the correspondence between the matrix components and known optical and receiver imperfections, and compare the likely performance of pseudo-correlation receivers and those that modulate the polarization with a half-wave plate. The latter is shown to have the significant advantage of not coupling the total intensity into polarization for perfect optics, but potential effects like optical distortions that may be introduced by the quasi-optical wave plate warrant further investigation. A fast method for tolerancing time-invariant systematic effects is presented, which propagates errors through to power spectra and cosmological parameters. The method extends previous studies to an arbitrary scan strategy, and eliminates the need for time-consuming Monte Carlo simulations in the early phases of instrument and survey design. We illustrate the method with both simple parametrized forms for the systematics and with beams based on physical-optics simulations. Example results are given in the context of next-generation experiments targeting tensor-to-scalar ratios r ~ 0.01.
Rater variables associated with ITER ratings.
Paget, Michael; Wu, Caren; McIlwrick, Joann; Woloschuk, Wayne; Wright, Bruce; McLaughlin, Kevin
2013-10-01
Advocates of holistic assessment consider the ITER a more authentic way to assess performance. But this assessment format is subjective and, therefore, susceptible to rater bias. Here our objective was to study the association between rater variables and ITER ratings. In this observational study our participants were clerks at the University of Calgary and preceptors who completed online ITERs between February 2008 and July 2009. Our outcome variable was global rating on the ITER (rated 1-5), and we used a generalized estimating equation model to identify variables associated with this rating. Students were rated "above expected level" or "outstanding" on 66.4 % of 1050 online ITERs completed during the study period. Two rater variables attenuated ITER ratings: the log transformed time taken to complete the ITER [β = -0.06, 95 % confidence interval (-0.10, -0.02), p = 0.002], and the number of ITERs that a preceptor completed over the time period of the study [β = -0.008 (-0.02, -0.001), p = 0.02]. In this study we found evidence of leniency bias that resulted in two thirds of students being rated above expected level of performance. This leniency bias appeared to be attenuated by delay in ITER completion, and was also blunted in preceptors who rated more students. As all biases threaten the internal validity of the assessment process, further research is needed to confirm these and other sources of rater bias in ITER ratings, and to explore ways of limiting their impact.
Direct Behavior Rating: Considerations for Rater Accuracy
ERIC Educational Resources Information Center
Harrison, Sayward E.; Riley-Tillman, T. Chris; Chafouleas, Sandra M.
2014-01-01
Direct behavior rating (DBR) offers users a flexible, feasible method for the collection of behavioral data. Previous research has supported the validity of using DBR to rate three target behaviors: academic engagement, disruptive behavior, and compliance. However, the effect of the base rate of behavior on rater accuracy has not been established.…
Analysis of measured data of human body based on error correcting frequency
NASA Astrophysics Data System (ADS)
Jin, Aiyan; Peipei, Gao; Shang, Xiaomei
2014-04-01
Anthropometry measures all parts of the human body surface, and the measured data are the basis for analysis and study of the human body, for the establishment and modification of garment sizes, and for the design and operation of online clothing stores. In this paper, several groups of measured data are obtained, and the data errors are analyzed by examining the error frequency and by applying the analysis-of-variance method of mathematical statistics. The paper also addresses determination of the accuracy of the measured data and the difficulty of measuring particular parts of the human body, further studies the causes of data errors, and summarizes the key points for minimizing errors as far as possible. By analyzing the measured data on the basis of error frequency, the paper provides reference material for promoting the development of the garment industry.
Cognitive Representations in Raters' Assessment of Teacher Portfolios
ERIC Educational Resources Information Center
van der Schaaf, Marieke; Stokking, Karel; Verloop, Nico
2005-01-01
Portfolios are frequently used to assess teachers' competences. In portfolio assessment, the issue of rater reliability is a notorious problem. To improve the quality of assessments insight into raters' judgment processes is crucial. Using a mixed quantitative and qualitative approach we studied cognitive processes underlying raters' judgments and…
Automated Essay Scoring With e-rater[R] V.2
ERIC Educational Resources Information Center
Attali, Yigal; Burstein, Jill
2006-01-01
E-rater[R] has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that is different from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Detecting and Correcting for Rater Effects in Performance Assessment.
ERIC Educational Resources Information Center
Raymond, Mark R.; Houston, Walter M.
Performance rating systems frequently use multiple raters in order to improve the reliability of ratings. However, unless all candidates are rated by the same raters, some candidates will be at an unfair advantage or disadvantage solely because they were rated by more stringent or lenient raters. To obtain fair and accurate evaluations of…
An Investigation of Rater Cognition in the Assessment of Projects
ERIC Educational Resources Information Center
Crisp, Victoria
2012-01-01
In the United Kingdom, the majority of national assessments involve human raters. The processes by which raters determine the scores to award are central to the assessment process and affect the extent to which valid inferences can be made from assessment outcomes. Thus, understanding rater cognition has become a growing area of research in the…
Resampling probability values for weighted kappa with multiple raters.
Mielke, Paul W; Berry, Kenneth J; Johnston, Janis E
2008-04-01
A new procedure to compute weighted kappa with multiple raters is described. A resampling procedure to compute approximate probability values for weighted kappa with multiple raters is presented. Applications of weighted kappa are illustrated with an example analysis of classifications by three independent raters.
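The resampling idea can be illustrated in the two-rater case (the article's multi-rater statistic is more involved; this sketch, an assumption-laden simplification, uses linearly weighted Cohen's kappa and a permutation null):

```python
import numpy as np

def weighted_kappa(a, b, k):
    """Linearly weighted Cohen's kappa for two raters on categories 0..k-1."""
    a, b = np.asarray(a), np.asarray(b)
    obs = np.zeros((k, k))
    for i, j in zip(a, b):
        obs[i, j] += 1
    obs /= obs.sum()
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))        # chance agreement
    w = 1 - np.abs(np.subtract.outer(np.arange(k), np.arange(k))) / (k - 1)
    po, pe = (w * obs).sum(), (w * exp).sum()
    return (po - pe) / (1 - pe)

def resampled_p(a, b, k, n_resamples=2000, seed=0):
    """Approximate p-value: fraction of random permutations of one rater's
    labels that yield a kappa at least as large as the observed one."""
    rng = np.random.default_rng(seed)
    observed = weighted_kappa(a, b, k)
    count = sum(
        weighted_kappa(a, rng.permutation(b), k) >= observed
        for _ in range(n_resamples)
    )
    return count / n_resamples
```

For perfectly agreeing raters kappa equals 1 and the permutation p-value is essentially zero; for unrelated ratings kappa hovers near 0 and the p-value is large.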
Rater Cognition Research: Some Possible Directions for the Future
ERIC Educational Resources Information Center
Myford, Carol M.
2012-01-01
Over the last several decades, researchers have studied many and varied aspects of rater cognition. Those interested in pursuing basic research have focused on gaining an understanding of raters' thought processes as they score different types of performances and products, striving to understand how raters' mental representations and the cognitive…
Training the Raters: A Key to Effective Performance Appraisal.
ERIC Educational Resources Information Center
Martin, David C.; Bartol, Kathryn M.
1986-01-01
Although appropriate rater behaviors are critical to the success of any performance appraisal system, raters frequently receive little or no training regarding how to carry out their role successfully. This article outlines the major elements that should be included in an effective rater training program. Suggested training approaches and the need…
ERIC Educational Resources Information Center
Kim, ChangHwan; Tamborini, Christopher R.
2012-01-01
Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…
Error analysis in the measurement of average power with application to switching controllers
NASA Technical Reports Server (NTRS)
Maisel, J. E.
1979-01-01
The behavior of the power measurement error due to the frequency responses of first order transfer functions between the input sinusoidal voltage, input sinusoidal current and the signal multiplier was studied. It was concluded that this measurement error can be minimized if the frequency responses of the first order transfer functions are identical.
Comparing Graphical and Verbal Representations of Measurement Error in Test Score Reports
ERIC Educational Resources Information Center
Zwick, Rebecca; Zapata-Rivera, Diego; Hegarty, Mary
2014-01-01
Research has shown that many educators do not understand the terminology or displays used in test score reports and that measurement error is a particularly challenging concept. We investigated graphical and verbal methods of representing measurement error associated with individual student scores. We created four alternative score reports, each…
ERIC Educational Resources Information Center
Zapata-Rivera, Diego; Zwick, Rebecca; Vezzu, Margaret
2016-01-01
The goal of this study was to explore the effectiveness of a short web-based tutorial in helping teachers to better understand the portrayal of measurement error in test score reports. The short video tutorial included both verbal and graphical representations of measurement error. Results showed a significant difference in comprehension scores…
Detecting bit-flip errors in a logical qubit using stabilizer measurements.
Ristè, D; Poletto, S; Huang, M-Z; Bruno, A; Vesterinen, V; Saira, O-P; DiCarlo, L
2015-04-29
Quantum data are susceptible to decoherence induced by the environment and to errors in the hardware processing it. A future fault-tolerant quantum computer will use quantum error correction to actively protect against both. In the smallest error correction codes, the information in one logical qubit is encoded in a two-dimensional subspace of a larger Hilbert space of multiple physical qubits. For each code, a set of non-demolition multi-qubit measurements, termed stabilizers, can discretize and signal physical qubit errors without collapsing the encoded information. Here using a five-qubit superconducting processor, we realize the two parity measurements comprising the stabilizers of the three-qubit repetition code protecting one logical qubit from physical bit-flip errors. While increased physical qubit coherence times and shorter quantum error correction blocks are required to actively safeguard the quantum information, this demonstration is a critical step towards larger codes based on multiple parity measurements.
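Ignoring phases, the bit-flip repetition code has a faithful classical analogue that shows how the parity (stabilizer) measurements locate an error without reading the encoded bit itself. A toy classical Monte Carlo sketch (not a simulation of the superconducting processor in the paper):

```python
import random

def syndrome(bits):
    """The two parity measurements Z1Z2 and Z2Z3 of the three-bit repetition
    code: each compares neighbouring bits without revealing the encoded value."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    """Flip the single bit implicated by the syndrome (majority-vote logic)."""
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(bits))
    if flip is not None:
        bits[flip] ^= 1
    return bits

def logical_error_rate(p, trials=20000, seed=1):
    """Probability the decoded logical bit is wrong when each physical bit
    flips independently with probability p (logical 0 encoded as 000)."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        bits = [1 if rng.random() < p else 0 for _ in range(3)]
        correct(bits)
        errors += bits != [0, 0, 0]
    return errors / trials
```

Single-bit errors are always corrected, so the logical error rate is 3p²(1-p) + p³, which beats the bare physical rate p whenever p < 1/2; at p = 0.1 it is 0.028.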
Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Tang, Yangchao
2016-02-01
The paper designs a roundness measurement model with multi-systematic error, which takes eccentricity, probe offset, radius of tip head of probe, and tilt error into account for roundness measurement of cylindrical components. The effects of the systematic errors and radius of components are analysed in the roundness measurement. The proposed method is built on the instrument with a high precision rotating spindle. The effectiveness of the proposed method is verified by experiment with the standard cylindrical component, which is measured on a roundness measuring machine. Compared to the traditional limacon measurement model, the accuracy of roundness measurement can be increased by about 2.2 μm using the proposed roundness measurement model for the object with a large radius of around 37 mm. The proposed method can improve the accuracy of roundness measurement and can be used for error separation, calibration, and comparison, especially for cylindrical components with a large radius. PMID:26931894
A measurement methodology for dynamic angle of sight errors in hardware-in-the-loop simulation
NASA Astrophysics Data System (ADS)
Zhang, Wen-pan; Wu, Jun-hui; Gan, Lin; Zhao, Hong-peng; Liang, Wei-wei
2015-10-01
In order to precisely measure the dynamic angle of sight for hardware-in-the-loop simulation, a dynamic measurement methodology was established and a measurement system was built. The errors and drifts, such as synchronization delay, CCD measurement error and drift, laser spot error on the diffuse reflection plane, and optics axis drift of the laser, were measured and analyzed. First, by analyzing and measuring the synchronization time between the laser and the timing of the control data, an error control method was devised that lowered the synchronization delay to 21 μs. Then, the relationship between the CCD device and the laser spot position was calibrated precisely and fitted by two-dimensional surface fitting; CCD measurement error and drift were controlled below 0.26 mrad. Next, the angular resolution was calculated, and the laser spot error on the diffuse reflection plane was estimated to be 0.065 mrad. Finally, the optics axis drift of the laser was analyzed and measured and did not exceed 0.06 mrad. The measurement results indicate that the maximum of the errors and drifts of the measurement methodology is less than 0.275 mrad. The methodology can satisfy the measurement of dynamic angle of sight at higher precision and larger scale.
The Future of Sociological Research: Measurement Errors and Their Implications.
ERIC Educational Resources Information Center
Blalock, H. M.
The report deals with the relationship between measurement and data analysis procedures in sociological research. The author finds that too many measured variables exist in both theory and measurement assumptions. Since these procedures are interrelated, improvements in either or both areas are necessary. Presented are three sections: (1) specific…
Working with Error and Uncertainty to Increase Measurement Validity
ERIC Educational Resources Information Center
Amrein-Beardsley, Audrey; Barnett, Joshua H.
2012-01-01
Over the previous two decades, the era of accountability has amplified efforts to measure educational effectiveness more than Edward Thorndike, the father of educational measurement, likely would have imagined. Expressly, the measurement structure for evaluating educational effectiveness continues to rely increasingly on one sole…
Compensation method for the alignment angle error in pitch deviation measurement
NASA Astrophysics Data System (ADS)
Liu, Yongsheng; Fang, Suping; Wang, Huiyi; Taguchi, Tetsuya; Takeda, Ryohei
2016-05-01
When measuring the tooth flank of an involute helical gear with a gear measuring center (GMC), the alignment angle error of the gear axis, caused by assembly and manufacturing errors of the GMC, affects the measurement accuracy of the pitch deviation of the tooth flank. Based on the model of the involute helical gear and tooth flank measurement theory, a method is proposed to compensate the alignment angle error included in the pitch deviation measurement results, without changing the initial measurement method of the GMC. Simulation experiments verify the compensation method; the results show that after compensation, the alignment angle error included in the pitch deviation measurement results declines significantly, more than 90% of the alignment angle error is compensated, and the residual alignment angle errors in the pitch deviation results are less than 0.1 μm. This shows that the proposed method can improve the measurement accuracy of the GMC when measuring the pitch deviation of involute helical gears.
Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar
Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas
2008-06-24
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
State-independent error-disturbance trade-off for measurement operators
NASA Astrophysics Data System (ADS)
Zhou, S. S.; Wu, Shengjun; Chau, H. F.
2016-05-01
In general, the classical measurement statistics of a quantum measurement are disturbed by performing an additional incompatible quantum measurement beforehand. Using this observation, we introduce a state-independent definition of disturbance by relating it to the distinguishability problem between two classical statistical distributions, one resulting from a single quantum measurement and the other from a succession of two quantum measurements. Interestingly, we find an error-disturbance trade-off relation for any measurements in two-dimensional Hilbert space and for measurements with mutually unbiased bases in any finite-dimensional Hilbert space. This relation shows that the error must be reduced to zero in order to minimize the sum of error and disturbance. We conjecture that a similar trade-off relation with a slightly relaxed definition of error can be generalized to any measurements in an arbitrary finite-dimensional Hilbert space.
Quantifying Error in Survey Measures of School and Classroom Environments
ERIC Educational Resources Information Center
Schweig, Jonathan David
2014-01-01
Developing indicators that reflect important aspects of school and classroom environments has become central in a nationwide effort to develop comprehensive programs that measure teacher quality and effectiveness. Formulating teacher evaluation policy necessitates accurate and reliable methods for measuring these environmental variables. This…
Error analysis and compensation of binocular-stereo-vision measurement system
NASA Astrophysics Data System (ADS)
Zhang, Tao; Guo, Junjie
2008-09-01
Measurement errors in binocular stereo vision are analyzed. It is shown that multi-stage calibration can efficiently reduce systematic errors due to depth of field. Because multi-stage calibration is difficult to carry out, error compensation methods are presented in this paper. First, system calibration is completed using a standard plane template. Then, the cameras are moved to different depths, multiple views are taken, and the 3D coordinates of special points on the template are calculated. Finally, an error compensation model in depth is established by least-squares fitting. An experiment based on a CMM indicates that the relative measurement error is reduced by 5.1% with the proposed method. This is of practical value for expanding the measurement range in depth and improving measurement accuracy.
Measurement error analysis of Brillouin lidar system using F-P etalon and ICCD
NASA Astrophysics Data System (ADS)
Yao, Yuan; Niu, Qunjie; Liang, Kun
2016-09-01
A Brillouin lidar system using a Fabry-Pérot (F-P) etalon and an Intensified Charge-Coupled Device (ICCD) is capable of real-time remote measurement of seawater properties such as temperature. The measurement accuracy is determined by two key parameters, the Brillouin frequency shift and the Brillouin linewidth. Three major errors, namely laser frequency instability, the calibration error of the F-P etalon, and random shot noise, are discussed. Theoretical analysis combined with simulation results shows that the laser and the F-P etalon cause about 4 MHz of error in both the Brillouin shift and the linewidth, and that random noise brings more error to the linewidth than to the frequency shift. A comprehensive, comparative analysis of the overall errors under various conditions shows that a colder ocean (10 °C) is more accurately measured with the Brillouin linewidth, and a warmer ocean (30 °C) is better measured with the Brillouin shift.
Analysis of the possible measurement errors for the PM10 concentration measurement at Gosan, Korea
NASA Astrophysics Data System (ADS)
Shin, S.; Kim, Y.; Jung, C.
2010-12-01
The reliability of the measurement of ambient trace species is an important issue, especially in a background area such as Gosan on Jeju Island, Korea. In a previous episodic study at Gosan (NIER, 2006), the PM10 concentration measured by the β-ray absorption method (BAM) was found to be higher than that from the gravimetric method (GMM), and the correlation between them was low. Based on previous studies (Chang et al., 2001; Katsuyuki et al., 2008), two probable reasons for the discrepancy are identified: (1) negative measurement error from the evaporation of volatile ambient species such as nitrate, chloride, and ammonium at the filter in GMM, and (2) positive error from the absorption of water vapor during measurement in BAM. There was no heater at the inlet of the BAM at Gosan during the sampling period. In this study, we have analyzed the negative and positive errors quantitatively using the gas/particle equilibrium model SCAPE (Simulating Composition of Atmospheric Particles at Equilibrium) for the data between May 2001 and June 2008, together with aerosol and gaseous composition data. We have estimated the degree of evaporation at the filter in GMM by comparing the volatile ionic species concentrations calculated by SCAPE at thermodynamic equilibrium under the meteorological conditions of the sampling period with the mass concentrations measured by ion chromatography. Also, based on the aerosol water content calculated by SCAPE, we have estimated quantitatively the effect of ambient humidity during measurement in BAM. Subsequently, this study examines whether the remaining discrepancy can be explained by other factors through multiple regression analyses. References: Chang, C. T., Tsai, C. J., Lee, C. T., Chang, S. Y., Cheng, M. T., Chein, H. M., 2001, Differences in PM10 concentrations measured by β-gauge monitor and hi-vol sampler, Atmospheric Environment, 35, 5741-5748. Katsuyuki, T. K., Hiroaki, M. R., and Kazuhiko, S. K., 2008, Examination of discrepancies between beta
Period, epoch, and prediction errors of ephemerides from continuous sets of timing measurements
NASA Astrophysics Data System (ADS)
Deeg, H. J.
2015-06-01
Space missions such as Kepler and CoRoT have led to large numbers of eclipse or transit measurements in nearly continuous time series. This paper shows how to obtain the period error in such measurements from a basic linear least-squares fit, and how to correctly derive the timing error in the prediction of future transit or eclipse events. Assuming strict periodicity, a formula for the period error of these time series is derived: σ_P = σ_T (12 / (N³ − N))^(1/2), where σ_P is the period error, σ_T the timing error of a single measurement, and N the number of measurements. Compared to the iterative method for period error estimation by Mighell & Plavchan (2013), this much simpler formula leads to smaller period errors, whose correctness has been verified through simulations. For the prediction of times of future periodic events, the usual linear ephemerides, where epoch errors are quoted for the first timing measurement, are prone to an overestimation of the error of that prediction. This may be avoided by a correction for the duration of the time series. An alternative is the derivation of ephemerides whose reference epoch and epoch error are given for the centre of the time series. For long continuous or near-continuous time series whose acquisition is completed, such central epochs should be the preferred way to quote linear ephemerides. While this work was motivated by the analysis of eclipse timing measurements in space-based light curves, it should be applicable to any other problem with an uninterrupted sequence of discrete timings for which the determination of a zero point, a constant period, and the associated errors is needed.
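The quoted period error formula agrees with the standard variance of a least-squares slope fitted to equally spaced epochs, and can be checked numerically. The sketch below uses illustrative values (not from the paper) and compares the analytic σ_P with the Monte Carlo scatter of fitted periods:

```python
import numpy as np

def period_error(sigma_t, n):
    """Period error of a linear ephemeris fitted to an uninterrupted series of
    n timings, each with timing error sigma_t: sigma_P = sigma_t*sqrt(12/(n^3-n))."""
    return sigma_t * np.sqrt(12.0 / (n**3 - n))

# Monte Carlo check: fit t_i = T0 + i*P to noisy timings, look at slope scatter
rng = np.random.default_rng(1)
n, true_p, sigma_t = 100, 2.5, 0.01          # illustrative: days
epochs = np.arange(n)
fitted = [np.polyfit(epochs, true_p * epochs + rng.normal(0, sigma_t, n), 1)[0]
          for _ in range(2000)]
print(period_error(sigma_t, n))              # analytic prediction
print(np.std(fitted))                        # empirical scatter of fitted periods
```

The two printed values should agree closely, since the OLS slope variance for x = 0..N−1 is exactly 12σ_T²/(N³ − N).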
The effect of proficiency level on measurement error of range of motion
Akizuki, Kazunori; Yamaguchi, Kazuto; Morita, Yoshiyuki; Ohashi, Yukari
2016-01-01
[Purpose] The aims of this study were to evaluate the type and extent of error in the measurement of range of motion and to evaluate the effect of evaluators' proficiency level on measurement error. [Subjects and Methods] The participants were 45 university students, in different years of their physical therapy education, and 21 physical therapists with up to three years of clinical experience in a general hospital. Range of motion of right knee flexion was measured using a universal goniometer. An electrogoniometer attached to the right knee and hidden from the view of the participants was used as the criterion to evaluate error in measurement with the universal goniometer. The type and magnitude of error were evaluated using the Bland-Altman method. [Results] Measurements with the universal goniometer were not influenced by systematic bias. The extent of random measurement error decreased as the level of proficiency and clinical experience increased. [Conclusion] Measurements of range of motion obtained using a universal goniometer are influenced by random errors, with the extent of error depending on proficiency. Therefore, increasing the amount of practice would be an effective strategy for improving the accuracy of range of motion measurements. PMID:27799712
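The Bland-Altman analysis used above reduces to computing the mean difference (bias) and 95% limits of agreement between paired readings. A minimal sketch, with hypothetical paired readings rather than the study's data:

```python
import statistics as st

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two paired measurement methods."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = st.mean(diffs)
    sd = st.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired knee-flexion readings (degrees): goniometer vs. criterion
gonio =     [135, 128, 141, 133, 126, 138, 130, 144]
criterion = [133, 130, 139, 134, 124, 140, 129, 145]
bias, loa = bland_altman(gonio, criterion)
print(bias, loa)
```

A bias near zero with wide limits of agreement would correspond to the study's finding of no systematic bias but nontrivial random error.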
Mishra, Vipanchi; Roch, Sylvia G
2013-01-01
Much of the prior research investigating the influence of cultural values on performance ratings has focused either on conducting cross-national comparisons among raters or on using cultural-level individualism/collectivism scales to measure the effects of cultural values on performance ratings. Recent research has shown that there is considerable within-country variation in cultural values; i.e., people within one country can be more individualistic or collectivistic in nature. Taking the latter perspective, the present study used Markus and Kitayama's (1991) conceptualization of independent and interdependent self-construals as measures of individual variation in cultural values to investigate within-culture variations in performance ratings. Results suggest that rater self-construal has a significant influence on overall performance evaluations; specifically, raters with a highly interdependent self-construal tend to show a preference for interdependent ratees, whereas raters high on independent self-construal do not show a preference for a specific type of ratee when making overall performance evaluations. Although rater self-construal significantly influenced overall performance evaluations, no such effects were observed for specific dimension ratings. Implications of these results for performance appraisal research and practice are discussed. PMID:23885636
Tilt error in cryospheric surface radiation measurements at high latitudes: a model study
NASA Astrophysics Data System (ADS)
Bogren, Wiley Steven; Faulkner Burkhart, John; Kylling, Arve
2016-03-01
We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can, respectively, introduce up to 2.7, 8.1, and 13.5% error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo. Simulations including a cloud layer demonstrate decreasing tilt error with increasing cloud optical depth.
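The dominant direct-beam contribution to tilt error can be sketched with a simple geometric model: a cosine-response sensor tilted by β toward the sun at solar zenith angle θ records cos(θ − β) instead of cos(θ). This worst-case, single-wavelength illustration is an assumption of this sketch, not the paper's full spectral simulation, so its numbers differ slightly from those quoted above:

```python
import math

def direct_tilt_error(sza_deg, tilt_deg):
    """Fractional error in measured direct-beam irradiance for a sensor
    tilted toward the sun by tilt_deg at solar zenith angle sza_deg."""
    sza, tilt = math.radians(sza_deg), math.radians(tilt_deg)
    return math.cos(sza - tilt) / math.cos(sza) - 1.0

for tilt in (1, 3, 5):                      # tilt angles in degrees
    print(tilt, direct_tilt_error(60, tilt))
```

The error grows steeply with solar zenith angle, which is why high-latitude (low-sun) measurements are particularly sensitive to tilt.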
Mints, M.Ya.; Chinkov, V.N.
1995-09-01
Rational algorithms are described for measuring the harmonic distortion coefficient in microprocessor-based instruments for measuring nonlinear distortions, based on digital processing of the codes of the instantaneous values of the signal under investigation, and the errors of such instruments are derived.
Ambient Temperature Changes and the Impact to Time Measurement Error
NASA Astrophysics Data System (ADS)
Ogrizovic, V.; Gucevic, J.; Delcev, S.
2012-12-01
Measurements in geodetic astronomy are mainly performed outdoors at night, when the temperature often decreases very quickly. Time-keeping during a measuring session is provided by collecting UTC time ticks from a GPS receiver and transferring them to a laptop computer. An interrupt handler routine processes the received UTC impulses in real time and calculates the clock parameters. The characteristics of the computer's quartz clock are influenced by temperature changes of the environment. We exposed the laptop to different environmental temperature conditions and calculated the clock parameters for each environmental model. The results show that a laptop used for time-keeping in outdoor measurements should be kept in a stable temperature environment, at temperatures near 20 °C.
Beckstead, Jason W
2013-10-01
This is the second in a short series of papers on measurement theory and practice with particular relevance to intervention research in nursing, midwifery, and healthcare. This paper begins with an illustration of how random measurement error decreases the power of statistical tests and a review of the roles of sample size and effect size in hypothesis testing. A simple formula is presented and discussed for calculating sample size during the planning stages of intervention studies. Finally, an approach for incorporating reliability estimates into a priori power analyses is introduced and illustrated with a practical example. The approach permits researchers to compare alternative study designs, in terms of their statistical power. An SPSS program is provided to facilitate this approach and to assist researchers in making optimal decisions when choosing among alternative study designs.
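The mechanism described above can be made concrete with the standard attenuation result: unreliability shrinks the observable effect size by a factor of √reliability, inflating the required sample size. The sketch below is a generic illustration using a normal-approximation sample-size formula, not the paper's SPSS program:

```python
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sample comparison (normal approximation):
    n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2."""
    z = NormalDist().inv_cdf
    return 2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2

d_true = 0.5                        # hypothesized true standardized effect
for rel in (1.0, 0.9, 0.7):         # reliability of the outcome measure
    d_obs = d_true * rel ** 0.5     # attenuated, observable effect size
    print(rel, round(n_per_group(d_obs)))   # 1.0 -> 63, 0.9 -> 70, 0.7 -> 90
```

Comparing the printed sample sizes across reliability levels shows how modest unreliability can noticeably raise the cost of an intervention study.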
The $17.1 billion problem: the annual cost of measurable medical errors.
Van Den Bos, Jill; Rustagi, Karan; Gray, Travis; Halford, Michael; Ziemkiewicz, Eva; Shreve, Jonathan
2011-04-01
At a minimum, high-quality health care is care that does not harm patients, particularly through medical errors. The first step in reducing the large number of harmful medical errors that occur today is to analyze them. We used an actuarial approach to measure the frequency and costs of measurable US medical errors, identified through medical claims data. This method focuses on the analysis of comparative rates of illness, using mathematical models to assess the risk of occurrence and to project costs to the total population. We estimate that the annual cost of measurable medical errors that harm patients was $17.1 billion in 2008. Pressure ulcers were the most common measurable medical error, followed by postoperative infections and by postlaminectomy syndrome, a condition characterized by persistent pain following back surgery. A total of ten types of errors account for more than two-thirds of the total cost of errors, and these errors should be the first targets of prevention efforts.
Improving surface energy balance closure by reducing errors in soil heat flux measurement
Technology Transfer Automated Retrieval System (TEKTRAN)
The flux plate method is the most commonly employed method for measuring soil heat flux (G) in surface energy balance studies. Although relatively simple to use, the flux plate method is susceptible to significant errors. Two of the most common errors are heat flow divergence around the plate and fa...
ERIC Educational Resources Information Center
Feldt, Leonard S.; Qualls, Audrey L.
1999-01-01
Examined the stability of the standard error of measurement and the relationship between the reliability coefficient and the variance of both true scores and error scores for 170 school districts in a state. As expected, reliability coefficients varied as a function of group variability, but the variation in split-half coefficients from school to…
Measuring Articulatory Error Consistency in Children with Developmental Apraxia of Speech
ERIC Educational Resources Information Center
Betz, Stacy K.; Stoel-Gammon, Carol
2005-01-01
Error inconsistency is often cited as a characteristic of children with speech disorders, particularly developmental apraxia of speech (DAS); however, few researchers operationally define error inconsistency and the definitions that do exist are not standardized across studies. This study proposes three formulas for measuring various aspects of…
Stray light errors in spectral colour measurement and two rejection methods
NASA Astrophysics Data System (ADS)
Shen, Haiping; Pan, Jiangen; Feng, Huajun; Liu, Muqing
2009-02-01
The measurement errors caused by stray light of array spectrometers in the spectral colour measurement for light emitting diodes (LEDs) are studied. A stray light correction method and a filter-wheel stray light blocking technology are compared both by simulation and by experiment. The results show that the stray light may cause unacceptable measurement errors. Both the correction method and the filter-wheel technology are very effective in correcting the stray light errors for all the LEDs. The correction method needs infrared filters for white LEDs. An optimized design of the filter wheel is given.
NASA Astrophysics Data System (ADS)
Fratini, G.; McDermitt, D. K.; Papale, D.
2013-08-01
Errors in gas concentration measurements by infrared gas analysers can occur during eddy-covariance campaigns, associated with actual or apparent instrumental drifts or with biases due to thermal expansion, dirt contamination, aging of components, or errors in field operations. If they occur on long time scales (hours to days), these errors are normally ignored during flux computation, under the assumption that errors in mean gas concentrations do not affect the estimation of turbulent fluctuations and, hence, of covariances. By analysing instrument theory of operation, and using numerical simulations and field data, we show that this is not the case for instruments with curvilinear calibrations; we further show that, if not appropriately accounted for, concentration biases can lead to roughly proportional systematic flux errors, where the fractional errors in fluxes are about 30-40% of the fractional errors in concentrations. We quantify these errors and characterize their dependency on the main determinants. We then propose a correction procedure that largely, and potentially completely, eliminates these errors. The correction, to be applied during flux computation, is based on knowledge of the instrument calibration curves and on field or laboratory calibration data. Finally, we demonstrate the occurrence of such errors and validate the correction procedure by means of a field experiment, and accordingly provide recommendations for in situ operations. The correction described in this paper will soon be available in the EddyPro software (www.licor.com/eddypro).
Measuring the impact of character recognition errors on downstream text analysis
NASA Astrophysics Data System (ADS)
Lopresti, Daniel
2008-01-01
Noise presents a serious challenge in optical character recognition, as well as in the downstream applications that make use of its outputs as inputs. In this paper, we describe a paradigm for measuring the impact of recognition errors on the stages of a standard text analysis pipeline: sentence boundary detection, tokenization, and part-of-speech tagging. Employing a hierarchical methodology based on approximate string matching for classifying errors, their cascading effects as they travel through the pipeline are isolated and analyzed. We present experimental results based on injecting single errors into a large corpus of test documents to study their varying impacts depending on the nature of the error and the character(s) involved. While most such errors are found to be localized, in the worst case some can have an amplifying effect that extends well beyond the site of the original error, thereby degrading the performance of the end-to-end system.
Specification test for Markov models with measurement errors*
Kim, Seonjin; Zhao, Zhibiao
2014-01-01
Most existing works on specification testing assume that we have direct observations from the model of interest. We study specification testing for Markov models based on contaminated observations. The evolving model dynamics of the unobservable Markov chain is implicitly coded into the conditional distribution of the observed process. To test whether the underlying Markov chain follows a parametric model, we propose measuring the deviation between nonparametric and parametric estimates of conditional regression functions of the observed process. Specifically, we construct a nonparametric simultaneous confidence band for conditional regression functions and check whether the parametric estimate is contained within the band. PMID:25346552
Tilt Error in Cryospheric Surface Radiation Measurements at High Latitudes: A Model Study
NASA Astrophysics Data System (ADS)
Bogren, W.; Kylling, A.; Burkhart, J. F.
2015-12-01
We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 nm to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can, respectively, introduce up to 2.6, 7.7, and 12.8% error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo.
Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error
NASA Astrophysics Data System (ADS)
Miller, Austin
In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance, and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate of the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and corresponding measures. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental-variable-based, data-driven estimate of the classical measurement error variance.
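The classical/Berkson distinction central to this work is easy to demonstrate by simulation: classical error (observed = truth + noise) attenuates a regression slope, while Berkson error (truth = assigned value + noise) leaves the slope unbiased. A minimal sketch with made-up values, not the dissertation's dosimetry model:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, 2.0
x = rng.normal(0, 1, n)                    # true dose (unit variance)

# Classical error: observe w = x + u, regress y on w -> slope attenuated
# by var(x)/(var(x)+var(u)) = 1/2 here, so expected slope is 1.0
w = x + rng.normal(0, 1, n)
y = beta * x + rng.normal(0, 1, n)
slope_classical = np.polyfit(w, y, 1)[0]

# Berkson error: true dose scatters around the assigned value,
# x_b = w_assigned + u; regressing y on w_assigned stays unbiased (~2.0)
w_assigned = rng.normal(0, 1, n)
x_b = w_assigned + rng.normal(0, 1, n)
y_b = beta * x_b + rng.normal(0, 1, n)
slope_berkson = np.polyfit(w_assigned, y_b, 1)[0]

print(slope_classical, slope_berkson)
```

The contrast explains why the two error types require different corrections: classical error demands a variance estimate (here obtained via instrumental variables), whereas Berkson error mainly inflates residual variance.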
[Value of Pressure Measurements: Methods and Sources of Errors].
Rüfer, F
2016-07-01
Tonometry is still an essential component of diagnostic testing in glaucoma. Functional and morphological investigations can provide very detailed information about the extent of glaucomatous damage; they are useful in the early detection of glaucoma and, when damage is manifest, in estimating the rate of progression in follow-up. In contrast, tonometric procedures are much less precise and sensitive and provide no information about the extent of glaucoma damage. However, they often provide the first evidence that glaucoma may be present at all, and they are the decisive parameter in guiding surgical or medical treatment to reduce pressure, as reduction of intraocular pressure (IOP) is still the most common approach to treating glaucoma, despite our awareness of numerous other risk factors. There is no reason to doubt that reducing IOP is an effective therapy in many forms of glaucoma, as this has been demonstrated in numerous large epidemiological studies. Tonometric procedures have become more precise in recent years. Goldmann applanation tonometry (GAT) and pneumatonometry are widely used, and there are some indications for which the rarer forms of tonometry can be recommended. Procedures for quasi-continuous pressure measurement are under development and may, in the future, replace the current approach of measuring IOP at discrete time points. There are a variety of pitfalls in clinical practice that may lead to misinterpretation and wrong therapeutic decisions, so these must be repeatedly emphasised. PMID:27130978
Zhang Song; Yau, S.-T
2007-01-01
A structured light system using a digital video projector is widely used for 3D shape measurement. However, the nonlinear γ of the projector causes the projected fringe patterns to be nonsinusoidal, which results in phase error and therefore measurement error. It has been shown that, by using a small look-up table (LUT), this type of phase error can be reduced significantly for a three-step phase-shifting algorithm. We prove that this approach is generic to any phase-shifting algorithm. Moreover, we propose a new LUT generation method that analyzes the captured fringe image of a flat board directly. Experiments show that this error compensation algorithm can reduce the phase error by a factor of at least 13.
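For context, the three-step phase-shifting retrieval that the LUT compensation builds on recovers the wrapped phase from three fringe images shifted by ±120°. The sketch below shows only this standard phase retrieval with ideal sinusoidal fringes; the γ-distortion and LUT correction themselves are not reproduced:

```python
import math

def three_step_phase(i1, i2, i3):
    """Wrapped phase from three fringe intensities with shifts -120°, 0°, +120°:
    phi = atan2(sqrt(3)*(I1 - I3), 2*I2 - I1 - I3)."""
    return math.atan2(math.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Ideal sinusoidal fringes: the recovered phase equals the true phase
a, b, true_phi = 0.5, 0.4, 1.0          # illustrative bias, modulation, phase
i1 = a + b * math.cos(true_phi - 2 * math.pi / 3)
i2 = a + b * math.cos(true_phi)
i3 = a + b * math.cos(true_phi + 2 * math.pi / 3)
print(three_step_phase(i1, i2, i3))     # ~1.0
```

When projector γ makes the fringes nonsinusoidal, this formula returns a phase with a periodic error, which is what the paper's LUT corrects.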
Measurement error associated with surveys of fish abundance in Lake Michigan
Krause, Ann E.; Hayes, Daniel B.; Bence, James R.; Madenjian, Charles P.; Stedman, Ralph M.
2002-01-01
In fisheries, imprecise measurements in catch data from surveys add uncertainty to the results of fishery stock assessments. The USGS Great Lakes Science Center (GLSC) began to survey the fall fish community of Lake Michigan in 1962 with bottom trawls. Measurement error was evaluated at the level of individual tows for nine fish species collected in this survey by applying a measurement-error regression model to replicated trawl data. The estimates of measurement-error variance ranged from 0.37 (deepwater sculpin, Myoxocephalus thompsoni) to 1.23 (alewife, Alosa pseudoharengus) on a logarithmic scale, corresponding to coefficients of variation of 66% to 156%. The estimates appeared to increase with the range of temperature occupied by the fish species; this association may result from the variability in the fall thermal structure of the lake. The estimates may also be influenced by other factors, such as pelagic behavior and schooling. Measurement error might be reduced by surveying the fish community during other seasons and/or by using additional technologies, such as acoustics. Measurement-error estimates should be considered when interpreting results of assessments that use abundance information from USGS-GLSC surveys of Lake Michigan, and could be used if the survey design were altered. This study is the first to report estimates of measurement-error variance associated with this survey.
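The link between the log-scale variances and the quoted coefficients of variation follows from the lognormal relation CV = √(exp(σ²) − 1), assuming the errors are lognormal on the original scale; a quick check:

```python
import math

def cv_from_log_variance(s2):
    """CV of a lognormally distributed quantity whose log-scale variance is s2."""
    return math.sqrt(math.exp(s2) - 1.0)

# Consistent with the abstract's quoted range of roughly 66% to 156%
print(cv_from_log_variance(0.37), cv_from_log_variance(1.23))   # ≈ 0.67 and 1.56
```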
ERIC Educational Resources Information Center
Henson, Robin K.; Hwang, Dae-Yeop
2002-01-01
Conducted a reliability generalization study of Kolb's Learning Style Inventory (LSI; D. Kolb, 1976). Results for 34 studies indicate that internal consistency and test-retest reliabilities for LSI scores fluctuate considerably and contribute to deleterious cumulative measurement error. (SLD)
ERIC Educational Resources Information Center
Katch, Frank I.; Katch, Victor L.
1980-01-01
Sources of error in body composition assessment by laboratory and field methods can be found in hydrostatic weighing, residual air volume, skinfolds, and circumferences. Statistical analysis can and should be used in the measurement of body composition. (CJ)
A comparison between traditional and measurement-error growth models for weakfish Cynoscion regalis.
Hatch, Joshua; Jiao, Yan
2016-01-01
Inferring growth for aquatic species is dependent upon accurate descriptions of age-length relationships, which may be degraded by measurement error in observed ages. Ageing error arises from biased and/or imprecise age determinations as a consequence of misinterpretation by readers or inability of ageing structures to accurately reflect true age. A Bayesian errors-in-variables (EIV) approach (i.e., measurement-error modeling) can account for ageing uncertainty during nonlinear growth curve estimation by allowing observed ages to be parametrically modeled as random deviates. Information on the latent age composition then comes from the specified prior distribution, which represents the true age structure of the sampled fish population. In this study, weakfish growth was modeled by means of traditional and measurement-error von Bertalanffy growth curves using otolith- or scale-estimated ages. Age determinations were assumed to be log-normally distributed, thereby incorporating multiplicative error with respect to ageing uncertainty. The prior distribution for true age was assumed to be uniformly distributed between ±4 of the observed age (yr) for each individual. Measurement-error growth models described weakfish that reached larger sizes but at slower rates, with median length-at-age being overestimated by traditional growth curves for the observed age range. In addition, measurement-error models produced slightly narrower credible intervals for parameters of the von Bertalanffy growth function, which may be an artifact of the specified prior distributions. Subjectivity is always apparent in the ageing of fishes and it is recommended that measurement-error growth models be used in conjunction with otolith-estimated ages to accurately capture the age-length relationship that is subsequently used in fisheries stock assessment and management.
A comparison between traditional and measurement-error growth models for weakfish Cynoscion regalis
Jiao, Yan
2016-01-01
Inferring growth for aquatic species is dependent upon accurate descriptions of age-length relationships, which may be degraded by measurement error in observed ages. Ageing error arises from biased and/or imprecise age determinations as a consequence of misinterpretation by readers or inability of ageing structures to accurately reflect true age. A Bayesian errors-in-variables (EIV) approach (i.e., measurement-error modeling) can account for ageing uncertainty during nonlinear growth curve estimation by allowing observed ages to be parametrically modeled as random deviates. Information on the latent age composition then comes from the specified prior distribution, which represents the true age structure of the sampled fish population. In this study, weakfish growth was modeled by means of traditional and measurement-error von Bertalanffy growth curves using otolith- or scale-estimated ages. Age determinations were assumed to be log-normally distributed, thereby incorporating multiplicative error with respect to ageing uncertainty. The prior distribution for true age was assumed to be uniformly distributed between ±4 of the observed age (yr) for each individual. Measurement-error growth models described weakfish that reached larger sizes but at slower rates, with median length-at-age being overestimated by traditional growth curves for the observed age range. In addition, measurement-error models produced slightly narrower credible intervals for parameters of the von Bertalanffy growth function, which may be an artifact of the specified prior distributions. Subjectivity is always apparent in the ageing of fishes and it is recommended that measurement-error growth models be used in conjunction with otolith-estimated ages to accurately capture the age-length relationship that is subsequently used in fisheries stock assessment and management. PMID:27688963
A comparison between traditional and measurement-error growth models for weakfish Cynoscion regalis.
Hatch, Joshua; Jiao, Yan
2016-01-01
Inferring growth for aquatic species is dependent upon accurate descriptions of age-length relationships, which may be degraded by measurement error in observed ages. Ageing error arises from biased and/or imprecise age determinations as a consequence of misinterpretation by readers or inability of ageing structures to accurately reflect true age. A Bayesian errors-in-variables (EIV) approach (i.e., measurement-error modeling) can account for ageing uncertainty during nonlinear growth curve estimation by allowing observed ages to be parametrically modeled as random deviates. Information on the latent age composition then comes from the specified prior distribution, which represents the true age structure of the sampled fish population. In this study, weakfish growth was modeled by means of traditional and measurement-error von Bertalanffy growth curves using otolith- or scale-estimated ages. Age determinations were assumed to be log-normally distributed, thereby incorporating multiplicative error with respect to ageing uncertainty. The prior distribution for true age was assumed to be uniformly distributed between ±4 of the observed age (yr) for each individual. Measurement-error growth models described weakfish that reached larger sizes but at slower rates, with median length-at-age being overestimated by traditional growth curves for the observed age range. In addition, measurement-error models produced slightly narrower credible intervals for parameters of the von Bertalanffy growth function, which may be an artifact of the specified prior distributions. Subjectivity is always apparent in the ageing of fishes and it is recommended that measurement-error growth models be used in conjunction with otolith-estimated ages to accurately capture the age-length relationship that is subsequently used in fisheries stock assessment and management. PMID:27688963
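To make the generative assumptions above concrete, here is a minimal simulation sketch (not the paper's Bayesian EIV fit) of a von Bertalanffy growth curve with multiplicative, log-normal ageing error; all parameter values are hypothetical:

```python
import numpy as np

def vbgf(age, l_inf, k, t0):
    """von Bertalanffy growth function: length at a given age."""
    return l_inf * (1.0 - np.exp(-k * (age - t0)))

rng = np.random.default_rng(42)

# Hypothetical parameter values, chosen only for illustration.
L_INF, K, T0 = 80.0, 0.25, -0.5   # cm, 1/yr, yr
SIGMA_AGE = 0.15                  # log-scale SD of ageing error

true_age = rng.uniform(1.0, 12.0, size=5000)
true_len = vbgf(true_age, L_INF, K, T0)

# Multiplicative (log-normal) ageing error, as assumed in the abstract.
obs_age = true_age * rng.lognormal(mean=0.0, sigma=SIGMA_AGE, size=true_age.size)

# A "traditional" model treats obs_age as exact; compare the mean length of
# fish binned at an observed age of ~5 yr with the length a 5-yr-old truly has.
band = (obs_age > 4.75) & (obs_age < 5.25)
naive_len_at_5 = true_len[band].mean()
print(f"true length at age 5:    {vbgf(5.0, L_INF, K, T0):.1f} cm")
print(f"naive length at 'age 5': {naive_len_at_5:.1f} cm")
```

The gap between the two printed lengths illustrates how ignoring ageing error distorts the estimated age-length relationship.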
ERIC Educational Resources Information Center
Raczynski, Kevin R.; Cohen, Allan S.; Engelhard, George, Jr.; Lu, Zhenqiu
2015-01-01
There is a large body of research on the effectiveness of rater training methods in the industrial and organizational psychology literature. Less has been reported in the measurement literature on large-scale writing assessments. This study compared the effectiveness of two widely used rater training methods--self-paced and collaborative…
Li, Tao; Yuan, Gannan; Li, Wang
2016-03-15
The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the state estimation of the particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of the PF is better than that of the KF, and the PF with the NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition. PMID:26999130
On the errors in measuring the particle density by the light absorption method
Ochkin, V. N.
2015-04-15
The accuracy of absorption measurements of the density of particles in a given quantum state as a function of the light absorption coefficient is analyzed. Errors caused by the finite accuracy in measuring the intensity of the light passing through a medium in the presence of different types of noise in the recorded signal are considered. Optimal values of the absorption coefficient and the factors capable of multiplying errors when deviating from these values are determined.
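The classical trade-off behind this analysis can be illustrated numerically. Assuming Beer-Lambert absorption and detector noise independent of the signal, the relative error in the retrieved density is minimized at a transmittance of 1/e; the sketch below is a textbook simplification, not the paper's full noise model:

```python
import numpy as np

# Beer-Lambert: transmitted fraction T = exp(-kL), so kL = -ln(T), and the
# particle density is proportional to kL.  For detector noise dT independent
# of the signal, the relative error in kL is
#     d(kL)/kL = dT / (T * ln(1/T)),
# so the error-amplification factor per unit dT is 1 / (T * ln(1/T)).
T = np.linspace(0.01, 0.99, 981)
amplification = 1.0 / (T * np.log(1.0 / T))

T_opt = T[np.argmin(amplification)]
print(f"optimal transmittance ~ {T_opt:.3f} (theory: 1/e = {np.exp(-1):.3f})")
```

Away from this optimum the amplification factor grows quickly, which is the "factors capable of multiplying errors" effect the abstract refers to.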
Kim, Yangjin; Hibino, Kenichi; Sugita, Naohiko; Mitsuishi, Mamoru
2016-08-10
In this research, the susceptibility of the phase-shifting algorithms to the random intensity error is formulated and estimated. The susceptibility of the random intensity error of conventional windowed phase-shifting algorithms is discussed, and the 7N-6 phase-shifting algorithm is developed to minimize the random intensity error using the characteristic polynomial theory. Finally, the surface shape of the transparent wedge plate is measured using a wavelength-tuning Fizeau interferometer and the 7N-6 algorithm. The experimental results indicate that the surface shape measurement accuracy for the transparent plate is 2.5 nm. PMID:27534496
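As background, a plain four-step phase-shifting recovery (not the paper's 7N-6 algorithm) shows how random intensity error enters the estimated phase; the amplitudes and noise level are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)

phi_true = 1.234                      # rad, arbitrary test phase
A, B = 100.0, 50.0                    # background and modulation (a.u.)
sigma = 0.5                           # random intensity error (a.u.)

# Four frames with phase shifts 0, pi/2, pi, 3pi/2, plus random noise.
shifts = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi
frames = A + B * np.cos(phi_true + shifts)
frames_noisy = frames + rng.normal(0.0, sigma, size=4)

# Standard four-step estimator: phi = atan2(I4 - I2, I1 - I3).
phi_est = np.arctan2(frames_noisy[3] - frames_noisy[1],
                     frames_noisy[0] - frames_noisy[2])
print(f"phase error: {abs(phi_est - phi_true):.4f} rad")
```

The phase error scales roughly with sigma/B, which is why algorithm design (window shape, number of samples) matters for suppressing random intensity error.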
NASA Technical Reports Server (NTRS)
Houser, Donald R.; Oswald, Fred B.; Valco, Mark J.; Drago, Raymond J.; Lenski, Joseph W., Jr.
1994-01-01
Measured sound power data from eight different spur, single and double helical gear designs are compared with predictions of transmission error by the Load Distribution Program. The sound power data was taken from the recent Army-funded Advanced Rotorcraft Transmission project. Tests were conducted in the NASA gear noise rig. Results of both test data and transmission error predictions are made for each harmonic of mesh frequency at several operating conditions. In general, the transmission error predictions compare favorably with the measured noise levels.
Error correction of the DEA (Digital Electronic Automation) Coordinate Measuring Machines at LLNL
Carter, D.L.
1989-11-14
LLNL uses Coordinate Measuring Machines (CMM) manufactured by Digital Electronic Automation, Inc. (DEA) to provide in-process and final measurements of various components as they are assembled and aligned for later experimentation. The machines achieve their accuracy by using real-time passive error compensation to correct for all 21 parametric error components. LLNL does its own parametric testing and downloading of error correction data into the CMM's computer. This paper describes the theory, the parametric tests, the data or "map," and the final checkout of the machines. 4 refs., 20 figs., 3 tabs.
[Potential errors in measuring tree transpiration based on thermal dissipation method].
Liu, Qing-Xin; Meng, Ping; Zhang, Jin-Song; Gao, Jun; Huang, Hui; Sun, Shou-Jia; Lu, Sen
2011-12-01
Transpiration is a major component of vegetation evapotranspiration and a core topic in the study of plant water physiological ecology. Its measurement methods have attracted extensive attention, among which thermal dissipation is considered an optimal method for measuring tree transpiration. Numerous studies have shown that the thermal dissipation method is relatively accurate in measuring individual tree transpiration and stand-scale water consumption. However, potential errors arise between the true value and the measurements during the measurement process. In this paper, the potential errors of the thermal dissipation method in measuring sap flux density and in determining the temperature difference, from single tree to stand scale, are reviewed, and the research prospects on the potential errors of the thermal dissipation method in China are discussed. Corresponding solutions are also proposed.
Error analysis of cine phase contrast MRI velocity measurements used for strain calculation.
Jensen, Elisabeth R; Morrow, Duane A; Felmlee, Joel P; Odegard, Gregory M; Kaufman, Kenton R
2015-01-01
Cine Phase Contrast (CPC) MRI offers unique insight into localized skeletal muscle behavior by providing the ability to quantify muscle strain distribution during cyclic motion. Muscle strain is obtained by temporally integrating and spatially differentiating CPC-encoded velocity. The aim of this study was to quantify CPC measurement accuracy and precision and to describe error propagation into displacement and strain. Using an MRI-compatible jig to move a B-gel phantom within a 1.5 T MRI bore, CPC-encoded velocities were collected. The three orthogonal encoding gradients (through plane, frequency, and phase) were evaluated independently in post-processing. Two systematic error types were corrected: eddy current-induced bias and calibration-type error. Measurement accuracy and precision were quantified before and after removal of systematic error. Through plane- and frequency-encoded data accuracy were within 0.4 mm/s after removal of systematic error - a 70% improvement over the raw data. Corrected phase-encoded data accuracy was within 1.3 mm/s. Measured random error was between 1 to 1.4 mm/s, which followed the theoretical prediction. Propagation of random measurement error into displacement and strain was found to depend on the number of tracked time segments, time segment duration, mesh size, and dimensional order. To verify this, theoretical predictions were compared to experimentally calculated displacement and strain error. For the parameters tested, experimental and theoretical results aligned well. Random strain error approximately halved with a two-fold mesh size increase, as predicted. Displacement and strain accuracy were within 2.6 mm and 3.3%, respectively. These results can be used to predict the accuracy and precision of displacement and strain in user-specific applications.
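The random-walk behavior of integrated velocity error described above can be checked with a short simulation; the numbers (velocity noise, frame time, segment count) are hypothetical, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random velocity measurement error integrates into displacement as a
# random walk: sd grows like sigma_v * dt * sqrt(n_steps).
sigma_v, dt, n = 1.2, 0.02, 50      # mm/s, s, number of time segments

# 20000 Monte-Carlo trials of integrating n noisy velocity samples.
trials = rng.normal(0.0, sigma_v, size=(20000, n))
disp_err = (trials * dt).sum(axis=1)

print(f"empirical sd : {disp_err.std():.4f} mm")
print(f"theoretical  : {sigma_v * dt * np.sqrt(n):.4f} mm")
```

This square-root dependence on the number of tracked time segments matches the propagation behavior the abstract reports for displacement (and, after spatial differentiation, strain).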
Statistical and systematic errors in redshift-space distortion measurements from large surveys
NASA Astrophysics Data System (ADS)
Bianchi, D.; Guzzo, L.; Branchini, E.; Majerotto, E.; de la Torre, S.; Marulli, F.; Moscardini, L.; Angulo, R. E.
2012-12-01
We investigate the impact of statistical and systematic errors on measurements of linear redshift-space distortions (RSD) in future cosmological surveys by analysing large catalogues of dark matter haloes from the baryonic acoustic oscillation simulations at the Institute for Computational Cosmology. These allow us to estimate the dependence of errors on typical survey properties, such as volume, galaxy density and mass (i.e. bias factor) of the adopted tracer. We find that measures of the specific growth rate β = f/b using the Hamilton/Kaiser harmonic expansion of the redshift-space correlation function ξ(rp, π) on scales larger than 3 h^-1 Mpc are typically underestimated by up to 10 per cent for galaxy-sized haloes. This is significantly larger than the corresponding statistical errors, which amount to a few per cent, indicating the importance of non-linear improvements to the Kaiser model for obtaining accurate measurements of the growth rate. The systematic error shows a diminishing trend with increasing bias value (i.e. mass) of the haloes considered. We compare the amplitude and trends of statistical errors as a function of survey parameters to predictions obtained with the Fisher information matrix technique, which is usually adopted to produce RSD forecasts, based on the Feldman-Kaiser-Peacock prescription for the errors on the power spectrum. We show that this produces parameter errors fairly similar to the standard deviations from the halo catalogues, provided it is applied to strictly linear scales in Fourier space (k < 0.2 h Mpc^-1). Finally, we combine our measurements to define and calibrate an accurate scaling formula for the relative error on β as a function of the same parameters, which closely matches the simulation results in all explored regimes. This provides a handy and plausibly more realistic alternative to the Fisher matrix approach for quickly and accurately predicting the statistical errors on RSD expected from future surveys.
A scale for measuring the severity of diagnostic errors in accident and emergency departments.
Guly, H R
1997-01-01
OBJECTIVE: To design and test a simple scale for measuring the severity of diagnostic errors occurring in accident and emergency (A&E) departments. METHODS: Empirical design of a scale which indicates the severity of errors on a scale of 1 to 7. It is obtained by adding two scores which indicate the additional treatment which a patient would have received and the follow up which would have been organised if the correct diagnosis had been made initially. RESULTS: The misdiagnosis severity score (MSS) revealed 166 diagnostic errors in injuries treated in an A&E department over one year. The scoring system allowed the more significant errors to be separated from the less significant ones. CONCLUSIONS: The MSS proved useful in describing the errors made in an A&E department. PMID:9315928
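A minimal sketch of an additive scale of this kind, two ordinal sub-scores summed into one severity value, might look as follows; the sub-score labels and weights are hypothetical, not the published MSS:

```python
# Hypothetical sub-score tables: extra treatment the patient would have
# received, and follow-up that would have been organised, had the correct
# diagnosis been made initially.  Chosen so the total spans 1 to 7.
TREATMENT = {"none": 0, "minor": 1, "significant": 2, "major": 3}
FOLLOW_UP = {"none": 1, "gp": 2, "clinic": 3, "admission": 4}

def misdiagnosis_severity(treatment: str, follow_up: str) -> int:
    """Return a combined severity score (higher = more serious error)."""
    return TREATMENT[treatment] + FOLLOW_UP[follow_up]

print(misdiagnosis_severity("minor", "clinic"))  # → 4
```

Summing two ordinal sub-scores keeps the instrument simple to apply while still separating the more significant errors from the less significant ones, as the abstract describes.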
NASA Astrophysics Data System (ADS)
Yang, Liangen; Wang, Xuanze; Lv, Wei
2011-05-01
A displacement sensor with controlled measuring force, together with its error analysis and precision verification, is discussed in this paper. The sensor consists of a high-resolution electric induction transducer and a voice coil motor (VCM). The measuring principles, structure, method of enlarging the measuring range, and signal processing of the sensor are discussed. The main error sources are analyzed, including parallelism error and inclination of the framework caused by unequal leaf-spring lengths, rigidity of the measuring rods, shape error of the stylus, friction between the iron core and other parts, damping of the leaf springs, voltage variation, linearity of the induction transducer, resolution, and stability. A measuring system for surface topography with a large measuring range is constructed from the displacement sensor and a 2D moving platform, and its measuring precision and stability are verified. The measuring force during surface-topography measurement can be controlled at the μN level and hardly changes. The system has been used to measure bearing balls, bullet marks, etc., with a measuring range up to 2 mm and nm-level precision.
The estimation error covariance matrix for the ideal state reconstructor with measurement noise
NASA Technical Reports Server (NTRS)
Polites, Michael E.
1988-01-01
A general expression is derived for the state estimation error covariance matrix for the Ideal State Reconstructor when the input measurements are corrupted by measurement noise. An example is presented which shows that the more measurements used in estimating the state at a given time, the better the estimator.
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-10
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
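One common way to handle variances proportional to squared signal values is weighted least squares with approximate inverse-variance weights; the sketch below illustrates that idea only and is not the paper's derivation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Multiplicative error model: y_i = (x_i @ beta) * (1 + e_i), so
# Var(y_i) is proportional to the squared true value.
n = 2000
x = np.column_stack([np.ones(n), rng.uniform(1.0, 10.0, n)])
beta_true = np.array([2.0, 3.0])
mean = x @ beta_true
y = mean * (1.0 + rng.normal(0.0, 0.05, n))

# Weighted LS with weights 1/y^2 as approximate inverse variances
# (one of several possible schemes; the paper analyses three adjustments).
w = 1.0 / y ** 2
beta_wls = np.linalg.solve(x.T @ (w[:, None] * x), x.T @ (w * y))
print(beta_wls)
```

With the variance tied to the signal level, an unweighted fit would let the large-valued measurements dominate; the inverse-variance weights restore roughly equal relative influence.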
Thomas, Felicity; Signal, Mathew; Harris, Deborah L; Weston, Philip J; Harding, Jane E; Shaw, Geoffrey M; Chase, J Geoffrey
2014-05-01
Neonatal hypoglycemia is common and can cause serious brain injury. Continuous glucose monitoring (CGM) could improve hypoglycemia detection, while reducing blood glucose (BG) measurements. Calibration algorithms use BG measurements to convert sensor signals into CGM data. Thus, inaccuracies in calibration BG measurements directly affect CGM values and any metrics calculated from them. The aim was to quantify the effect of timing delays and calibration BG measurement errors on hypoglycemia metrics in newborn infants. Data from 155 babies were used. Two timing and 3 BG meter error models (Abbott Optium Xceed, Roche Accu-Chek Inform II, Nova Statstrip) were created using empirical data. Monte-Carlo methods were employed, and each simulation was run 1000 times. Each set of patient data in each simulation had randomly selected timing and/or measurement error added to BG measurements before CGM data were calibrated. The number of hypoglycemic events, duration of hypoglycemia, and hypoglycemic index were then calculated using the CGM data and compared to baseline values. Timing error alone had little effect on hypoglycemia metrics, but measurement error caused substantial variation. Abbott results underreported the number of hypoglycemic events by up to 8 and Roche overreported by up to 4 where the original number reported was 2. Nova results were closest to baseline. Similar trends were observed in the other hypoglycemia metrics. Errors in blood glucose concentration measurements used for calibration of CGM devices can have a clinically important impact on detection of hypoglycemia. If CGM devices are going to be used for assessing hypoglycemia, it is important to understand the impact of these errors on CGM data.
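The Monte-Carlo design described above can be sketched in a few lines: add meter-like error to glucose values and count how often a threshold-based hypoglycemia detection flips. The threshold, error model (5% CV), and glucose values below are all illustrative, not the study's meter models:

```python
import numpy as np

rng = np.random.default_rng(3)

THRESHOLD = 2.6          # mmol/L, a common neonatal hypoglycemia cut-off
bg_true = np.array([2.4, 2.7, 3.1, 2.5, 4.0])   # hypothetical BG profile

flips = 0
n_sim = 1000
for _ in range(n_sim):
    # Inject multiplicative measurement error (5% coefficient of variation).
    bg_meas = bg_true * (1.0 + rng.normal(0.0, 0.05, bg_true.size))
    # Did any below/above-threshold classification change?
    flips += np.any((bg_meas < THRESHOLD) != (bg_true < THRESHOLD))

print(f"simulations with at least one flipped detection: {flips}/{n_sim}")
```

Values sitting close to the threshold flip often, which is the mechanism by which meter error inflates or deflates the reported number of hypoglycemic events.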
van Lummel, Rob C.; Walgaard, Stefan; Hobert, Markus A.; Maetzler, Walter; van Dieën, Jaap H.; Galindo-Garre, Francisca; Terwee, Caroline B.
2016-01-01
Background The “Timed Up and Go” (TUG) is a widely used measure of physical functioning in older people and in neurological populations, including Parkinson’s Disease. When using an inertial sensor measurement system (instrumented TUG [iTUG]), the individual components of the iTUG and the trunk kinematics can be measured separately, which may provide relevant additional information. Objective The aim of this study was to determine intra-rater, inter-rater and test-retest reliability of the iTUG in patients with Parkinson’s Disease. Methods Twenty-eight PD patients, aged 50 years or older, were included. For the iTUG the DynaPort Hybrid (McRoberts, The Hague, The Netherlands) was worn at the lower back. The device measured acceleration and angular velocity in three directions at a rate of 100 samples/s. Patients performed the iTUG five times on two consecutive days. Repeated measurements by the same rater on the same day were used to calculate intra-rater reliability. Repeated measurements by different raters on the same day were used to calculate inter-rater reliability. Repeated measurements by the same rater on different days were used to calculate test-retest reliability. Results Nineteen ICC values (15%) were ≥ 0.90, which is considered excellent reliability. Sixty-four ICC values (49%) were ≥ 0.70 and < 0.90, which is considered good reliability. Thirty-one ICC values (24%) were ≥ 0.50 and < 0.70, indicating moderate reliability. Sixteen ICC values (12%) were ≥ 0.30 and < 0.50, indicating poor reliability. Two ICC values (2%) were < 0.30, indicating very poor reliability. Conclusions In conclusion, in patients with Parkinson’s disease the intra-rater, inter-rater, and test-retest reliability of the individual components of the instrumented TUG (iTUG) was excellent to good for total duration and for turning durations, and good to low for the sub durations and for the kinematics of the SiSt and StSi. The results of this fully …
Phase error analysis and compensation considering ambient light for phase measuring profilometry
NASA Astrophysics Data System (ADS)
Zhou, Ping; Liu, Xinran; He, Yi; Zhu, Tongjing
2014-04-01
The accuracy of a phase measuring profilometry (PMP) system based on the phase-shifting method is inevitably susceptible to the gamma non-linearity of the projector-camera pair and to uncertain ambient light. Although many studies of gamma models and phase error compensation methods have been carried out, the effect of ambient light has remained unclear. In this paper, we perform a theoretical analysis and experiments on phase error compensation that account for both gamma non-linearity and uncertain ambient light. First, a mathematical phase error model is proposed to explain in detail how the phase error arises. We show that the phase error is related not only to the gamma non-linearity of the projector-camera pair, but also to the ratio of intensity modulation to average intensity in the fringe patterns captured by the camera, which is affected by the ambient light. Subsequently, an accurate phase error compensation algorithm is proposed based on the mathematical model, in which the relationship between phase error and ambient light is made explicit. Experimental results with a four-step phase-shifting PMP system show that the proposed algorithm alleviates the phase error effectively even in the presence of ambient light.
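The four-step phase-shifting recovery that this compensation builds on can be sketched in the ideal case (no gamma distortion, no ambient-light bias); this is the textbook algorithm, not the paper's compensation method:

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe images shifted by pi/2:
    I_k = A + B*cos(phi + (k-1)*pi/2), k = 1..4.
    Then I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi)."""
    return np.arctan2(i4 - i2, i1 - i3)

# Ideal synthetic check: encode a phase of 0.7 rad with A = 100, B = 50.
phi = 0.7
frames = [100 + 50 * np.cos(phi + k * np.pi / 2) for k in range(4)]
```

Gamma non-linearity and ambient light distort the sinusoidal fringe model above, which is exactly what introduces the phase error the paper analyzes.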
Wahlin, B.; Wahl, T.; Gonzalez-Castro, J. A.; Fulford, J.; Robeson, M.
2005-01-01
As part of their long-range goals for disseminating information on measurement techniques, instrumentation, and experimentation in the field of hydraulics, the Technical Committee on Hydraulic Measurements and Experimentation formed the Task Committee on Experimental Uncertainty and Measurement Errors in Hydraulic Engineering in January 2003. The overall mission of this Task Committee is to provide information and guidance on the current practices used for describing and quantifying measurement errors and experimental uncertainty in hydraulic engineering and experimental hydraulics. The final goal of the Task Committee on Experimental Uncertainty and Measurement Errors in Hydraulic Engineering is to produce a report on the subject that will cover: (1) sources of error in hydraulic measurements, (2) types of experimental uncertainty, (3) procedures for quantifying error and uncertainty, and (4) special practical applications that range from uncertainty analysis for planning an experiment to estimating uncertainty in flow monitoring at gaging sites and hydraulic structures. Currently, the Task Committee has adopted the first-order variance estimation method outlined by Coleman and Steele as the basic methodology to follow when assessing the uncertainty in hydraulic measurements. In addition, the Task Committee has begun to develop its report on uncertainty in hydraulic engineering. This paper is intended as an update on the Task Committee's overall progress. Copyright ASCE 2005.
Nystrom, E.A.; Oberg, K.A.; Rehmann, C.R.; ,
2002-01-01
Acoustic Doppler current profilers (ADCPs) provide a promising method for measuring surface-water turbulence because they can provide data from a large spatial range in a relatively short time with relative ease. Some potential sources of errors in turbulence measurements made with ADCPs include inaccuracy of Doppler-shift measurements, poor temporal and spatial measurement resolution, and inaccuracy of multi-dimensional velocities resolved from one-dimensional velocities measured at separate locations. Results from laboratory measurements of mean velocity and turbulence statistics made with two pulse-coherent ADCPs in 0.87 meters of water are used to illustrate several of the inherent sources of error in ADCP turbulence measurements. Results show that processing algorithms and beam configurations have important effects on turbulence measurements. ADCPs can provide reasonable estimates of many turbulence parameters; however, the accuracy of turbulence measurements made with commercially available ADCPs is often poor in comparison to standard measurement techniques.
Correcting for Measurement Error in Time-Varying Covariates in Marginal Structural Models.
Kyle, Ryan P; Moodie, Erica E M; Klein, Marina B; Abrahamowicz, Michał
2016-08-01
Unbiased estimation of causal parameters from marginal structural models (MSMs) requires a fundamental assumption of no unmeasured confounding. Unfortunately, the time-varying covariates used to obtain inverse probability weights are often error-prone. Although substantial measurement error in important confounders is known to undermine control of confounders in conventional unweighted regression models, this issue has received comparatively limited attention in the MSM literature. Here we propose a novel application of the simulation-extrapolation (SIMEX) procedure to address measurement error in time-varying covariates, and we compare 2 approaches. The direct approach to SIMEX-based correction targets outcome model parameters, while the indirect approach corrects the weights estimated using the exposure model. We assess the performance of the proposed methods in simulations under different clinically plausible assumptions. The simulations demonstrate that measurement errors in time-dependent covariates may induce substantial bias in MSM estimators of causal effects of time-varying exposures, and that both proposed SIMEX approaches yield practically unbiased estimates in scenarios featuring low-to-moderate degrees of error. We illustrate the proposed approach in a simple analysis of the relationship between sustained virological response and liver fibrosis progression among persons infected with hepatitis C virus, while accounting for measurement error in γ-glutamyltransferase, using data collected in the Canadian Co-infection Cohort Study from 2003 to 2014.
Theoretical and Experimental Errors for In Situ Measurements of Plant Water Potential 1
Shackel, Kenneth A.
1984-01-01
Errors in psychrometrically determined values of leaf water potential caused by tissue resistance to water vapor exchange and by lack of thermal equilibrium were evaluated using commercial in situ psychrometers (Wescor Inc., Logan, UT) on leaves of Tradescantia virginiana (L.). Theoretical errors in the dewpoint method of operation for these sensors were demonstrated. After correction for these errors, in situ measurements of leaf water potential indicated substantial errors caused by tissue resistance to water vapor exchange (4 to 6% reduction in apparent water potential per second of cooling time used) resulting from humidity depletions in the psychrometer chamber during the Peltier condensation process. These errors were avoided by use of a modified procedure for dewpoint measurement. Large changes in apparent water potential were caused by leaf and psychrometer exposure to moderate levels of irradiance. These changes were correlated with relatively small shifts in psychrometer zero offsets (−0.6 to −1.0 megapascals per microvolt), indicating substantial errors caused by nonisothermal conditions between the leaf and the psychrometer. Explicit correction for these errors is not possible with the current psychrometer design. PMID:16663701
Error Reduction Methods for Integrated-path Differential-absorption Lidar Measurements
NASA Technical Reports Server (NTRS)
Chen, Jeffrey R.; Numata, Kenji; Wu, Stewart T.
2012-01-01
We report new modeling and error reduction methods for differential-absorption optical-depth (DAOD) measurements of atmospheric constituents using direct-detection integrated-path differential-absorption lidars. Errors from laser frequency noise are quantified in terms of the line center fluctuation and spectral line shape of the laser pulses, revealing relationships verified experimentally. A significant DAOD bias is removed by introducing a correction factor. Errors from surface height and reflectance variations can be reduced to tolerable levels by incorporating altimetry knowledge and "log after averaging", or by pointing the laser and receiver to a fixed surface spot during each wavelength cycle to shorten the time of "averaging before log".
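The "log after averaging" point generalizes: with noisy pulse energies, taking the logarithm of averaged signals avoids the first-order bias incurred by averaging log-ratios, since E[ln X] differs from ln E[X]. A toy illustration with hypothetical numbers (not the instrument's actual signal levels):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
p_off = rng.normal(2.0, 0.10, n)   # off-line pulse energies (toy values)
p_on = rng.normal(1.0, 0.15, n)    # on-line pulse energies, relatively noisier
true_daod = np.log(2.0)            # true differential-absorption optical depth

log_after_avg = np.log(p_off.mean() / p_on.mean())  # nearly unbiased
avg_of_logs = np.mean(np.log(p_off / p_on))         # biased: E[ln X] != ln E[X]
```

In this toy setup the per-pulse log-ratio average carries a systematic offset of roughly +0.01 from the noisier on-line channel, while the log of the averaged energies stays close to ln 2.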
Error reduction by combining strapdown inertial measurement units in a baseball stitch
NASA Astrophysics Data System (ADS)
Tracy, Leah
A poor musical performance is rarely due to an inferior instrument. When a device is underperforming, the temptation is to find a better device or a new technology to achieve performance objectives; however, another solution may be improving how existing technology is used through a better understanding of device characteristics, i.e., learning to play the instrument better. This thesis explores improving position and attitude estimates of inertial navigation systems (INS) through an understanding of inertial sensor errors, manipulating inertial measurement units (IMUs) to reduce that error, and multisensor fusion of multiple IMUs to reduce error in a GPS-denied environment.
NASA Astrophysics Data System (ADS)
Chung, Ting-Yi; Huang, Szu-Jung; Fu, Huang-Wen; Chang, Ho-Ping; Chang, Cheng-Hsiang; Hwang, Ching-Shiang
2016-08-01
The effect of an APPLE II-type elliptically polarized undulator (EPU) on the beam dynamics was investigated using active and passive methods. To reduce the tune shift and improve the injection efficiency, dynamic multipole errors were compensated using L-shaped iron shims, which resulted in stable top-up operation for a minimum gap. The skew quadrupole error was compensated using a multipole corrector located downstream of the EPU to minimize betatron coupling, thereby enhancing the synchrotron radiation brightness. The investigation methods, a numerical simulation algorithm, a multipole error correction method, and the beam-based measurement results are discussed.
Burr, T; Croft, S; Krieger, T; Martin, K; Norman, C; Walsh, S
2016-02-01
One example of top-down uncertainty quantification (UQ) involves comparing two or more measurements on each of multiple items. One example of bottom-up UQ expresses a measurement result as a function of one or more input variables that have associated errors, such as a measured count rate, which individually (or collectively) can be evaluated for impact on the uncertainty in the resulting measured value. In practice, it is often found that top-down UQ exhibits larger error variances than bottom-up UQ, because some error sources are present in the fielded assay methods used in top-down UQ that are not present (or not recognized) in the assay studies used in bottom-up UQ. One would like better consistency between the two approaches in order to claim understanding of the measurement process. The purpose of this paper is to refine bottom-up uncertainty estimation by using calibration information so that if there are no unknown error sources, the refined bottom-up uncertainty estimate will agree with the top-down uncertainty estimate to within a specified tolerance. Then, in practice, if the top-down uncertainty estimate is larger than the refined bottom-up uncertainty estimate by more than the specified tolerance, there must be omitted sources of error beyond those predicted from calibration uncertainty. The paper develops a refined bottom-up uncertainty approach for four cases of simple linear calibration: (1) inverse regression with negligible error in predictors, (2) inverse regression with non-negligible error in predictors, (3) classical regression followed by inversion with negligible error in predictors, and (4) classical regression followed by inversion with non-negligible errors in predictors. Our illustrations are of general interest, but are drawn from our experience with nuclear material assay by non-destructive assay. The main example we use is gamma spectroscopy that applies the enrichment meter principle. Previous papers that ignore error in predictors
Effect of patient positions on measurement errors of the knee-joint space on radiographs
NASA Astrophysics Data System (ADS)
Gilewska, Grazyna
2001-08-01
Osteoarthritis (OA) is one of the most important health problems today and one of the most frequent causes of pain and disability in middle-aged and older people. The radiograph is currently the most economical and widely available tool to evaluate changes in OA. Errors in the acquisition of knee-joint radiographs are the basic problem in their evaluation for clinical research. The purpose of evaluating such radiographs in this study was to measure the knee-joint space on several radiographs performed at defined intervals. The study presents an attempt to evaluate errors caused by the radiologist or the patient, resulting mainly from incorrect acquisition conditions or from the patient's fault. Once we have information about the size of these errors, we will be able to assess which of these elements have the greatest influence on the accuracy and repeatability of knee-joint space measurements, and consequently we will be able to minimize their sources.
The Measuring Instrument of Plumb Coaxial Error for Longdistance Orifices Based on Laser Collimation
NASA Astrophysics Data System (ADS)
Liu, B.; Yu, M. Y.
2006-10-01
This paper introduces a measuring instrument for the plumb (vertical) coaxiality error of long-distance orifices, designed to meet the measurement requirements of the flange place of an experimental fast-neutron reactor in nuclear power equipment by combining the laser collimation technique with CCD imaging. The instrument constructs a plumb line by exploiting the directionality of the laser and using the CCD as an imaging screen; this line serves as the datum line for measuring the coaxiality error in the manufacture and assembly of large orifices in the vertical state. The angular resolution is 0.3", the displacement resolution is 0.02 mm, and the respective measurement uncertainties are 0.1" and 0.01 mm. The paper details the design principle and measurement method of the instrument and analyzes the measurement error. The instrument is applicable to measuring the manufacturing precision and assembly coaxiality error of large or heavy pipe-casting equipment.
High-Dimensional Explanatory Random Item Effects Models for Rater-Mediated Assessments
ERIC Educational Resources Information Center
Kelcey, Ben; Wang, Shanshan; Cox, Kyle
2016-01-01
Valid and reliable measurement of unobserved latent variables is essential to understanding and improving education. A common and persistent approach to assessing latent constructs in education is the use of rater inferential judgment. The purpose of this study is to develop high-dimensional explanatory random item effects models designed for…
Improving Creativity Performance Assessment: A Rater Effect Examination with Many Facet Rasch Model
ERIC Educational Resources Information Center
Hung, Su-Pin; Chen, Po-Hsi; Chen, Hsueh-Chih
2012-01-01
Product assessment is widely applied in creative studies, typically as an important dependent measure. Within this context, this study had 2 purposes. First, the focus of this research was on methods for investigating possible rater effects, an issue that has not received a great deal of attention in past creativity studies. Second, the…
Determination of error measurement by means of the basic magnetization curve
NASA Astrophysics Data System (ADS)
Lankin, M. V.; Lankin, A. M.
2016-04-01
The article describes the implementation of a methodology for fault detection in electric cutting machines by means of the basic magnetization curve. The basic magnetization curve, as an integral operating characteristic, allows one to identify the fault type. In this process, quantifying the measurement error of the basic magnetization curve plays a major role, since inaccuracies in this characteristic can have a deleterious effect.
A newly conceived cylinder measuring machine and methods that eliminate the spindle errors
NASA Astrophysics Data System (ADS)
Vissiere, A.; Nouira, H.; Damak, M.; Gibaru, O.; David, J.-M.
2012-09-01
Advanced manufacturing processes require improving dimensional metrology applications to reach a nanometric accuracy level. Such measurements may be carried out using conventional highly accurate roundness measuring machines. On these machines, the metrology loop goes through the probing and the mechanical guiding elements. Hence, external forces, strain and thermal expansion are transmitted to the metrological structure through the supporting structure, thereby reducing measurement quality. The obtained measurement also combines both the motion error of the guiding system and the form error of the artifact. Detailed uncertainty budgeting might be improved by using error separation methods (multi-step, reversal and multi-probe error separation methods, etc), enabling identification of the systematic (synchronous or repeatable) guiding system motion errors as well as the form error of the artifact. Nevertheless, the performance of this kind of machine is limited by the repeatability level of the mechanical guiding elements, which usually exceeds 25 nm (in the case of an air bearing spindle and a linear bearing). In order to guarantee a 5 nm measurement uncertainty level, LNE is currently developing an original machine dedicated to form measurement on cylindrical and spherical artifacts with an ultra-high level of accuracy. The architecture of this machine is based on the ‘dissociated metrological technique’ principle and contains reference probes and a reference cylinder. The form errors of both the cylindrical artifact and the reference cylinder are obtained by mathematically combining the information given by the probe sensing the artifact with that given by the probe sensing the reference cylinder, applying the modified multi-step separation method.
Measurement of electromagnetic tracking error in a navigated breast surgery setup
NASA Astrophysics Data System (ADS)
Harish, Vinyas; Baksh, Aidan; Ungi, Tamas; Lasso, Andras; Baum, Zachary; Gauvin, Gabrielle; Engel, Jay; Rudan, John; Fichtinger, Gabor
2016-03-01
PURPOSE: The measurement of tracking error is crucial to ensure the safety and feasibility of electromagnetically tracked, image-guided procedures. Measurement should occur in a clinical environment because electromagnetic field distortion depends on positioning relative to the field generator and metal objects. However, we could not find an accessible and open-source system for calibration, error measurement, and visualization. We developed such a system and tested it in a navigated breast surgery setup. METHODS: A pointer tool was designed for concurrent electromagnetic and optical tracking. Software modules were developed for automatic calibration of the measurement system, real-time error visualization, and analysis. The system was taken to an operating room to test for field distortion in a navigated breast surgery setup. Positional and rotational electromagnetic tracking errors were then calculated using optical tracking as a ground truth. RESULTS: Our system is quick to set up and can be rapidly deployed. The process from calibration to visualization also only takes a few minutes. Field distortion was measured in the presence of various surgical equipment. Positional and rotational error in a clean field was approximately 0.90 mm and 0.31°. The presence of a surgical table, an electrosurgical cautery, and an anesthesia machine increased the error by up to a few tenths of a millimeter and a tenth of a degree. CONCLUSION: In a navigated breast surgery setup, measurement and visualization of tracking error defines a safe working area in the presence of surgical equipment. Our system is available as an extension for the open-source 3D Slicer platform.
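The positional and rotational errors reported above can be computed from paired poses in a few lines. A generic sketch (not the 3D Slicer module itself), taking the optical pose as ground truth:

```python
import numpy as np

def pose_errors(p_em, rot_em, p_opt, rot_opt):
    """Positional error (input units) and rotational error (degrees) of an
    electromagnetic pose against an optical ground-truth pose. Rotations
    are 3x3 matrices; the relative rotation angle comes from its trace."""
    pos_err = np.linalg.norm(np.asarray(p_em, float) - np.asarray(p_opt, float))
    r_rel = np.asarray(rot_em) @ np.asarray(rot_opt).T
    cos_a = np.clip((np.trace(r_rel) - 1.0) / 2.0, -1.0, 1.0)
    return pos_err, np.degrees(np.arccos(cos_a))

# Example: EM pose offset by (1, 2, 2) mm and rotated 10 degrees about z.
a = np.radians(10.0)
rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a), np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
```

Aggregating these per-sample errors over a grid of positions is what maps out the distortion field around the surgical equipment.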
Evaluation of TRMM Ground-Validation Radar-Rain Errors Using Rain Gauge Measurements
NASA Technical Reports Server (NTRS)
Wang, Jianxin; Wolff, David B.
2009-01-01
Ground-validation (GV) radar-rain products are often utilized for validation of the Tropical Rainfall Measuring Mission (TRMM) space-based rain estimates, and hence, quantitative evaluation of the GV radar-rain product error characteristics is vital. This study uses quality-controlled gauge data to compare with TRMM GV radar rain rates in an effort to provide such error characteristics. The results show that significant differences of concurrent radar-gauge rain rates exist at various time scales ranging from 5 min to 1 day, despite a lower overall long-term bias. However, the differences between the radar area-averaged rain rates and gauge point rain rates cannot be explained as due to radar error only. The error variance separation method is adapted to partition the variance of radar-gauge differences into the gauge area-point error variance and the radar rain estimation error variance. The results provide relatively reliable quantitative uncertainty evaluation of TRMM GV radar rain estimates at various time scales, and are helpful to better understand the differences between measured radar and gauge rain rates. It is envisaged that this study will contribute to better utilization of GV radar rain products to validate versatile space-based rain estimates from TRMM, as well as the proposed Global Precipitation Measurement, and other satellites.
NASA Astrophysics Data System (ADS)
Hou, Zhendong; Wang, Zhaokui; Zhang, Yulin
2016-09-01
To meet the very demanding requirements for space gravity detection, the gravitational reference sensor (GRS), as the key payload, needs to provide the relative position of the proof mass with extraordinarily high precision and low disturbance. The position determination and error analysis for a GRS with a spherical proof mass are addressed. First, the concept of measuring the freely falling proof mass with optical shadow sensors is presented. Then, based on the optical signal model, the general formula for position determination is derived. Two types of measurement system are proposed, for which an analytical solution for the three-dimensional position can be attained. Third, under the assumption of Gaussian beams, error propagation models are given for the variation of spot size and optical power, the effect of beam divergence, the chattering of the beam center, and the deviation of the beam direction. Finally, numerical simulations taking into account the model uncertainty of beam divergence, the spherical edge and beam diffraction are carried out to validate the performance of the error propagation models. The results show that these models can be used to estimate the effect of each error source with an acceptable accuracy, better than 20%. Moreover, the simulation of three-dimensional position determination with one of the proposed measurement systems shows that the position error is comparable to the error of the output of each sensor.
Burnecki, Krzysztof; Kepten, Eldad; Garini, Yuval; Sikora, Grzegorz; Weron, Aleksander
2015-01-01
Accurately characterizing the anomalous diffusion of a tracer particle has become a central issue in biophysics. However, measurement errors raise difficulty in the characterization of single trajectories, which is usually performed through the time-averaged mean square displacement (TAMSD). In this paper, we study a fractionally integrated moving average (FIMA) process as an appropriate model for anomalous diffusion data with measurement errors. We compare FIMA and traditional TAMSD estimators for the anomalous diffusion exponent. The ability of the FIMA framework to characterize dynamics in a wide range of anomalous exponents and noise levels through the simulation of a toy model (fractional Brownian motion disturbed by Gaussian white noise) is discussed. Comparison with the TAMSD technique shows that FIMA estimation is superior in many scenarios. This is expected to enable new measurement regimes for single particle tracking (SPT) experiments even in the presence of high measurement errors. PMID:26065707
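The TAMSD baseline that FIMA is compared against is straightforward to compute; a sketch for a 1-D trajectory, with the anomalous exponent taken from a log-log fit (an illustration of the standard estimator, not the paper's FIMA code):

```python
import numpy as np

def tamsd(x, lag):
    """Time-averaged mean square displacement of trajectory x at a given lag."""
    d = x[lag:] - x[:-lag]
    return np.mean(d ** 2)

def anomalous_exponent(x, max_lag=10):
    """Slope of log(TAMSD) vs log(lag): ~1 for Brownian motion,
    < 1 for subdiffusion, > 1 for superdiffusion."""
    lags = np.arange(1, max_lag + 1)
    msds = np.array([tamsd(x, int(lag)) for lag in lags])
    return np.polyfit(np.log(lags), np.log(msds), 1)[0]
```

Additive measurement noise inflates the short-lag TAMSD, biasing this exponent estimate downward, which is the failure mode the FIMA framework is designed to handle.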
Reduction of positional errors in a four-point probe resistance measurement
NASA Astrophysics Data System (ADS)
Worledge, D. C.
2004-03-01
A method for reducing resistance errors due to inaccuracy in the positions of the probes in a collinear four-point probe resistance measurement of a thin film is presented. By using a linear combination of two measurements which differ by interchange of the I- and V- leads, positional errors can be eliminated to first order. Experimental data measured using microprobes show a substantial reduction in absolute error from 3.4% down to 0.01%-0.1%, and an improvement in precision by a factor of 2-4. The application of this technique to the current-in-plane tunneling method to measure electrical properties of unpatterned magnetic tunnel junction wafers is discussed.
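The flavor of this correction can be illustrated on an idealized infinite conducting sheet, where the four-point resistance has a closed form in the probe positions. The particular linear combination below is derived for equally spaced collinear probes and demonstrates first-order cancellation of positional errors; it is an illustrative assumption, not necessarily the paper's exact formula:

```python
import numpy as np

def r_config(xs, rs, config):
    """Four-point resistance V/I on an infinite sheet (sheet resistance rs)
    for collinear probes at positions xs, in two lead configurations."""
    x1, x2, x3, x4 = xs
    if config == "A":  # current 1 -> 4, voltage across 2-3
        return rs / (2 * np.pi) * np.log(
            (x4 - x2) * (x3 - x1) / ((x2 - x1) * (x4 - x3)))
    # config "B": current 1 -> 3, voltage across 2-4
    return rs / (2 * np.pi) * np.log(
        (x3 - x2) * (x4 - x1) / ((x2 - x1) * (x4 - x3)))

rs_true = 100.0
# Nominal unit spacing, perturbed by small probe-position errors.
xs = np.array([0.0, 1.0, 2.0, 3.0]) + np.array([0.0, 0.005, -0.01, 0.008])
ra, rb = r_config(xs, rs_true, "A"), r_config(xs, rs_true, "B")

rs_naive = np.pi * ra / np.log(2)  # single-config formula, assumes perfect spacing
rs_comb = 2 * np.pi * (4 * ra - 3 * rb) / np.log(256 / 27)  # first-order insensitive
```

One can check analytically that the gradient of 4*R_A - 3*R_B with respect to each probe position vanishes at equal spacing, so positional errors enter the combined estimate only at second order.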
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2013-01-01
A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively deteriorated and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.
[Measurement Error Analysis and Calibration Technique of NTC - Based Body Temperature Sensor].
Deng, Chi; Hu, Wei; Diao, Shengxi; Lin, Fujiang; Qian, Dahong
2015-11-01
An NTC thermistor-based wearable body temperature sensor was designed. This paper describes the design principles and realization method of the NTC-based body temperature sensor, and analyzes in detail the sources of temperature measurement error. An automatic measurement and calibration method for the ADC error is given. The results show that the measurement accuracy of the calibrated body temperature sensor is better than ±0.04 °C. The temperature sensor has the advantages of high accuracy, small size and low power consumption.
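Converting an NTC thermistor resistance to temperature is commonly done with the Beta model. A sketch with hypothetical part values (R0 = 10 kΩ at 25 °C, B = 3950 K, typical for commodity thermistors but not the paper's actual sensor parameters):

```python
import math

def ntc_temperature_c(r_ohms, r0=10_000.0, t0_c=25.0, beta=3950.0):
    """NTC resistance -> temperature via the Beta equation:
    1/T = 1/T0 + ln(R/R0)/B, with temperatures in kelvin.
    Resistance falls as temperature rises (negative temperature coefficient)."""
    t0_k = t0_c + 273.15
    inv_t = 1.0 / t0_k + math.log(r_ohms / r0) / beta
    return 1.0 / inv_t - 273.15
```

Errors in the ADC reading of r_ohms propagate through this non-linear curve, which is why per-device calibration of the kind the paper describes matters near clinical accuracy targets.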
Lim, Jun; Rah, Seungyu
2005-06-15
For the precise measurement of the parallelism error between the two crystals in a double crystal monochromator, we suggest a new method that utilizes a pencil beam interferometer. The wavefront-splitting pencil beam interferometer was modified and applied to the measurement. The method overcomes the limitations of previous methods that use an autocollimator. Moreover, we can measure the parallelism error continuously through the full scan range with a simple setup. Notably, the angular sensitivity of this method is about 0.07 arcsec rms.
Utilizing measure-based feedback in control-mastery theory: A clinical error.
Snyder, John; Aafjes-van Doorn, Katie
2016-09-01
Clinical errors and ruptures are an inevitable part of clinical practice. Often, therapists are unaware that a clinical error or rupture has occurred, leaving no space for repair, and potentially leading to patient dropout and/or less effective treatment. One way to overcome our blind spots is by frequently and systematically collecting measure-based feedback from the patient. Patient feedback measures that focus on the process of psychotherapy, such as the Patient's Experience of Attunement and Responsiveness scale (PEAR), can be used in conjunction with treatment outcome measures such as the Outcome Questionnaire 45.2 (OQ-45.2) to monitor the patient's therapeutic experience and progress. The regular use of these types of measures can aid clinicians in the identification of clinical errors and the associated patient deterioration that might otherwise go unnoticed and unaddressed. The current case study describes an instance of clinical error that occurred during the 2-year treatment of a highly traumatized young woman. The clinical error was identified using measure-based feedback and subsequently understood and addressed from the theoretical standpoint of the control-mastery theory of psychotherapy. An alternative hypothetical response is also presented and explained using control-mastery theory. (PsycINFO Database Record) PMID:27631857
NASA Astrophysics Data System (ADS)
Shedekar, Vinayak S.; King, Kevin W.; Fausey, Norman R.; Soboyejo, Alfred B. O.; Harmel, R. Daren; Brown, Larry C.
2016-09-01
Three different models of tipping bucket rain gauges (TBRs), viz. HS-TB3 (Hydrological Services Pty Ltd.), ISCO-674 (Isco, Inc.) and TR-525 (Texas Electronics, Inc.), were calibrated in the lab to quantify measurement errors across a range of rainfall intensities (5 mm·h⁻¹ to 250 mm·h⁻¹) and three different volumetric settings. Instantaneous and cumulative values of simulated rainfall were recorded at 1, 2, 5, 10 and 20-min intervals. All three TBR models showed a substantial deviation (α = 0.05) in measurements from actual rainfall depths, with increasing underestimation errors at greater rainfall intensities. Simple linear regression equations were developed for each TBR to correct the TBR readings based on measured intensities (R² > 0.98). Additionally, two dynamic calibration techniques, viz. a quadratic model (R² > 0.7) and a T vs. 1/Q model (R² > 0.98), were tested and found to be useful in situations when the volumetric settings of TBRs are unknown. The correction models were successfully applied to correct field-collected rainfall data from the respective TBR models. The calibration parameters of the correction models were found to be highly sensitive to changes in the volumetric calibration of TBRs. Overall, the HS-TB3 model (with a better-protected tipping bucket mechanism and consistent measurement errors across a range of rainfall intensities) was found to be the most reliable and consistent for rainfall measurements, followed by the ISCO-674 (with susceptibility to clogging and relatively smaller measurement errors across a range of rainfall intensities) and the TR-525 (with high susceptibility to clogging, frequent changes in volumetric calibration, and highly intensity-dependent measurement errors). The study demonstrated that corrections based on dynamic and volumetric calibration can only help minimize, but not completely eliminate, the measurement errors. The findings from this study will be useful for correcting field data from TBRs; and may have major
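The correction approach described above can be sketched as a simple linear fit of actual intensity against the TBR reading, then applied to field data. The calibration pairs below are made-up illustrative numbers, not the study's data; the closed-form least-squares fit is the standard one.

```python
# Sketch of the simple-linear-regression correction idea for a tipping
# bucket rain gauge (TBR): fit actual intensity against the TBR reading
# from lab calibration pairs, then apply the fit to field readings.
# The calibration numbers below are hypothetical, for illustration only.

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b  # intercept, slope

# Hypothetical calibration: the TBR underestimates more at high intensity.
tbr_reading = [5.0, 50.0, 100.0, 150.0, 250.0]   # mm/h, as recorded
actual      = [5.1, 52.0, 105.0, 159.0, 268.0]   # mm/h, simulator truth

a, b = fit_line(tbr_reading, actual)
corrected = [a + b * r for r in tbr_reading]     # corrected field readings
```

A slope greater than 1 reflects the systematic underestimation at higher intensities that the study reports.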
Effect of atmospheric radiance errors in radiometric sea-surface skin temperature measurements.
Donlon, C J; Nightingale, T J
2000-05-20
Errors in measurements of sea-surface skin temperature (SSST) caused by inappropriate measurements of sky radiance are discussed; both model simulations and in situ data obtained in the Atlantic Ocean are used. These errors are typically caused by incorrect radiometer view geometry (pointing), temporal mismatches between the sea surface and atmospheric views, and the effect of wind on the sea surface. For clear-sky, overcast, or high-humidity atmospheric conditions, SSST is relatively insensitive (<0.1 K) to sky-pointing errors of ±10 degrees and to temporal mismatches between the sea and sky views. In mixed-cloud conditions, SSST errors greater than ±0.25 K are possible as a result either of poor radiometer pointing or of a temporal mismatch between the sea and sky views. Sea-surface emissivity also changes with sea view pointing angle. Sea view pointing errors should remain below 5 degrees for SSST errors of <0.1 K. We conclude that the clear-sky requirement of satellite infrared SSST observations means that sky-pointing errors are small when one is obtaining in situ SSST validation data at zenith angles of <40 degrees. At zenith angles greater than this, large errors are possible in high-wind-speed conditions. We recommend that high-resolution inclinometer measurements always be used, together with regular alternating sea and sky views, and that the temporal mismatch between sea and sky views be as small as possible. These results have important implications for the development of operational autonomous instruments for determining SSST for the long-term validation of satellite SSST.
Rater Training to Support High-Stakes Simulation-Based Assessments
Feldman, Moshe; Lazzara, Elizabeth H.; Vanderbilt, Allison A.; DiazGranados, Deborah
2013-01-01
Competency-based assessment and an emphasis on obtaining higher-level outcomes that reflect physicians’ ability to demonstrate their skills has created a need for more advanced assessment practices. Simulation-based assessments provide medical education planners with tools to better evaluate the 6 Accreditation Council for Graduate Medical Education (ACGME) and American Board of Medical Specialties (ABMS) core competencies by affording physicians opportunities to demonstrate their skills within a standardized and replicable testing environment, thus filling a gap in the current state of assessment for regulating the practice of medicine. Observational performance assessments derived from simulated clinical tasks and scenarios enable stronger inferences about the skill level a physician may possess, but also introduce the potential of rater errors into the assessment process. This article reviews the use of simulation-based assessments for certification, credentialing, initial licensure, and relicensing decisions and describes rater training strategies that may be used to reduce rater errors, increase rating accuracy, and enhance the validity of simulation-based observational performance assessments. PMID:23280532
Discontinuity, bubbles, and translucence: major error factors in food color measurement
NASA Astrophysics Data System (ADS)
MacDougall, Douglas B.
2002-06-01
Four samples of breakfast cereals exhibiting discontinuity, two samples of baked goods with bubbles, and two translucent drinks were measured to show the degree of difference between their colors as measured in CIELAB and their visual equivalents in the nearest NCS atlas colors. Presentation variables and the contribution of light scatter to the size of the errors were examined.
SYSTEMATIC CONTINUUM ERRORS IN THE Lyα FOREST AND THE MEASURED TEMPERATURE-DENSITY RELATION
Lee, Khee-Gan
2012-07-10
Continuum fitting uncertainties are a major source of error in estimates of the temperature-density relation (usually parameterized as a power law, T ∝ Δ^(γ−1)) of the intergalactic medium through the flux probability distribution function (PDF) of the Lyα forest. Using a simple order-of-magnitude calculation, we show that few-percent-level systematic errors in the placement of the quasar continuum due to, e.g., a uniform low-absorption Gunn-Peterson component could lead to errors in γ of the order of unity. This is quantified further using a simple semi-analytic model of the Lyα forest flux PDF. We find that under(over)estimates in the continuum level can lead to a lower (higher) measured value of γ. By fitting models to mock data realizations generated with current observational errors, we find that continuum errors can cause a systematic bias in the estimated temperature-density relation of δ(γ) ≈ −0.1, while the error is increased to σ_γ ≈ 0.2 compared to σ_γ ≈ 0.1 in the absence of continuum errors.
Advancing the science of measurement of diagnostic errors in healthcare: the Safer Dx framework
Singh, Hardeep; Sittig, Dean F
2015-01-01
Diagnostic errors are major contributors to harmful patient outcomes, yet they remain a relatively understudied and unmeasured area of patient safety. Although they are estimated to affect about 12 million Americans each year in ambulatory care settings alone, both the conceptual and pragmatic scientific foundation for their measurement is under-developed. Health care organizations do not have the tools and strategies to measure diagnostic safety and most have not integrated diagnostic error into their existing patient safety programs. Further progress toward reducing diagnostic errors will hinge on our ability to overcome measurement-related challenges. In order to lay a robust groundwork for measurement and monitoring techniques to ensure diagnostic safety, we recently developed a multifaceted framework to advance the science of measuring diagnostic errors (The Safer Dx framework). In this paper, we describe how the framework serves as a conceptual foundation for system-wide safety measurement, monitoring and improvement of diagnostic error. The framework accounts for the complex adaptive sociotechnical system in which diagnosis takes place (the structure), the distributed process dimensions in which diagnoses evolve beyond the doctor's visit (the process) and the outcomes of a correct and timely “safe diagnosis” as well as patient and health care outcomes (the outcomes). We posit that the Safer Dx framework can be used by a variety of stakeholders including researchers, clinicians, health care organizations and policymakers, to stimulate both retrospective and more proactive measurement of diagnostic errors. The feedback and learning that would result will help develop subsequent interventions that lead to safer diagnosis, improved value of health care delivery and improved patient outcomes. PMID:25589094
ERIC Educational Resources Information Center
Saxton, Emily; Belanger, Secret; Becker, William
2012-01-01
The purpose of this study was to investigate the intra-rater and inter-rater reliability of the Critical Thinking Analytic Rubric (CTAR). The CTAR is composed of 6 rubric categories: interpretation, analysis, evaluation, inference, explanation, and disposition. To investigate inter-rater reliability, two trained raters scored four sets of…
The effects of errors in the measurement of continuous exposure variables on the assessment of risks
Gilbert, E.S.
1988-06-01
Exposure variables in epidemiological studies are seldom measured without error. However, it is unusual for such errors to be taken into account in analyzing data, and thus distortion of results may occur. These distorting effects are evaluated for the fitting of linear and log-linear proportional hazard models based on a single continuous exposure variable, and are quantified under several sets of assumptions regarding the conditional distributions of the measured exposures given the true exposures, as well as assumptions regarding the true exposure distributions. For a wide range of assumptions, it is found that the most serious consequence of ignoring error is downward bias in the estimation of regression coefficients. In addition, the shape of the dose-response function may be distorted, and variances of estimated parameters may be underestimated. Except for the case of very large errors combined with skewed exposure distributions, tests of the null hypothesis of no effect that ignore error are found to be nearly as powerful as an optimal test, available if the error structure is known. 19 refs., 3 figs., 12 tabs.
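The downward bias that the abstract describes is the classical attenuation effect, which a short simulation makes concrete. This is an illustrative demo under simple assumptions (linear model, Gaussian classical error), not Gilbert's proportional-hazards analysis: with error variance equal to the exposure variance, the naive slope shrinks toward roughly half the true value.

```python
import random

# Illustrative demo of attenuation from classical measurement error:
# regressing on the error-prone exposure w = x + u shrinks the estimated
# slope by roughly var(x) / (var(x) + var(u)) -- here 1/(1+1) = 0.5.

random.seed(0)
true_slope = 2.0
n = 20000
x = [random.gauss(0.0, 1.0) for _ in range(n)]            # true exposure
y = [true_slope * xi + random.gauss(0.0, 1.0) for xi in x]
w = [xi + random.gauss(0.0, 1.0) for xi in x]             # measured exposure

def ols_slope(u, v):
    m = len(u)
    mu, mv = sum(u) / m, sum(v) / m
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / \
           sum((a - mu) ** 2 for a in u)

ideal = ols_slope(x, y)   # close to the true slope of 2.0
naive = ols_slope(w, y)   # biased toward zero, close to 1.0 here
```

The same mechanism underlies the abstract's warning that ignoring error can also distort the shape of the dose-response curve.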
The Influence of Training Phase on Error of Measurement in Jump Performance.
Taylor, Kristie-Lee; Hopkins, Will G; Chapman, Dale W; Cronin, John B
2016-03-01
The purpose of this study was to calculate the coefficients of variation in jump performance for individual participants in multiple trials over time to determine the extent to which there are real differences in the error of measurement between participants. The effect of training phase on measurement error was also investigated. Six subjects participated in a resistance-training intervention for 12 wk with mean power from a countermovement jump measured 6 d/wk. Using a mixed-model meta-analysis, differences between subjects, within-subject changes between training phases, and the mean error values during different phases of training were examined. Small, substantial factor differences of 1.11 were observed between subjects; however, the finding was unclear based on the width of the confidence limits. The mean error was clearly higher during overload training than baseline training, by a factor of ×/÷ 1.3 (confidence limits 1.0-1.6). The random factor representing the interaction between subjects and training phases revealed further substantial differences of ×/÷ 1.2 (1.1-1.3), indicating that on average, the error of measurement in some subjects changes more than in others when overload training is introduced. The results from this study provide the first indication that within-subject variability in performance is substantially different between training phases and, possibly, different between individuals. The implications of these findings for monitoring individuals and estimating sample size are discussed.
NASA Technical Reports Server (NTRS)
Parrott, T. L.; Smith, C. D.
1977-01-01
The effect of random and systematic errors associated with the measurement of normal incidence acoustic impedance in a zero-mean-flow environment was investigated by the transmission line method. The influence of random measurement errors in the reflection coefficients and pressure minima positions was investigated by computing fractional standard deviations of the normalized impedance. Both the standard techniques of random process theory and a simplified technique were used. Over a wavelength range of 68 to 10 cm random measurement errors in the reflection coefficients and pressure minima positions could be described adequately by normal probability distributions with standard deviations of 0.001 and 0.0098 cm, respectively. An error propagation technique based on the observed concentration of the probability density functions was found to give essentially the same results but with a computation time of about 1 percent of that required for the standard technique. The results suggest that careful experimental design reduces the effect of random measurement errors to insignificant levels for moderate ranges of test specimen impedance component magnitudes. Most of the observed random scatter can be attributed to lack of control by the mounting arrangement over mechanical boundary conditions of the test sample.
Covariate measurement error correction methods in mediation analysis with failure time data.
Zhao, Shanshan; Prentice, Ross L
2014-12-01
Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. PMID:25139469
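The regression-calibration idea behind the paper's methods can be sketched in a much simpler setting than their Cox-model application. The sketch below is a minimal linear-model version under assumed replicate measurements of the mediator: the reliability ratio is estimated from the covariance of two replicates (their errors being independent), and the naive slope is rescaled by it.

```python
import random

# Minimal regression-calibration sketch (linear outcome, NOT the paper's
# failure-time setting): two replicate measurements of the mediator are
# used to estimate the reliability ratio, which then corrects the naive
# slope. All data here are simulated under stated assumptions.

random.seed(1)
n = 20000
x  = [random.gauss(0.0, 1.0) for _ in range(n)]            # true mediator
w1 = [xi + random.gauss(0.0, 0.8) for xi in x]             # replicate 1
w2 = [xi + random.gauss(0.0, 0.8) for xi in x]             # replicate 2
y  = [1.5 * xi + random.gauss(0.0, 1.0) for xi in x]       # outcome

def cov(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

wbar = [(a + b) / 2 for a, b in zip(w1, w2)]
naive = cov(wbar, y) / cov(wbar, wbar)        # attenuated slope
# cov(w1, w2) estimates var(x) because the replicate errors are independent;
# dividing by var(wbar) gives the reliability of the averaged measurement.
lam = cov(w1, w2) / cov(wbar, wbar)
corrected = naive / lam                       # approximately the true 1.5
```

In the Cox setting the paper addresses, the induced hazard depends on the baseline hazard as well, which is why the authors develop mean-variance and follow-up-time calibration variants rather than this simple rescaling.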
Uncertainties in interpretation of data from turbulent boundary layers due to measurement errors
NASA Astrophysics Data System (ADS)
Vinuesa, Ricardo; Nagib, Hassan
2011-11-01
Composite expansions based on log law and power law were used to generate synthetic velocity profiles of ZPG turbulent boundary layers in the range 800 ≤ Re_θ ≤ 8.6×10⁵. Several artificial errors were then added to the velocity profiles to simulate dispersion in velocity measurements, error in determining probe position and uncertainty in measured skin friction. The effects of the simulated errors were studied by extracting log-law and power-law parameters from all these pseudo-experimental profiles, regardless of their original overlap region description. Various techniques were used, including the diagnostic functions (Ξ and Γ) and direct fits to logarithmic and power laws, to establish a measure of the deviations in the overlap region. The differences between extracted parameters and their expected values are compared for each case, with different magnitudes of error, to reveal when the pseudo-experimental profile leads to ambiguous conclusions; i.e., when parameters extracted for log law and power law are associated with similar levels of deviations. This ambiguity was observed up to Re_θ = 16,000 for a 3% dispersion in the velocity measurements and Re_θ = 2,000 when the skin friction was overestimated by only 2%. With respect to the error in the probe position, an uncertainty of 400 μm made even the highest-Re profile ambiguous. The results from the present study are valid for air flow at atmospheric conditions.
NASA Astrophysics Data System (ADS)
Zhang, Mei; Fei, Yetai; Sheng, Li; Ma, Xiushui; Yang, Hong-tao
2008-12-01
The sources of coordinate measuring machine (CMM) dynamic error are complex, and many factors influence it, so an accurate model is hard to build. To obtain a model that avoids analyzing the complex error sources and their interactions while also handling the multicollinearity among the variables, this paper adopts Partial Least-Squares Regression (PLSR). The model takes the 3D coordinates (X, Y, Z) and the moving velocity as the independent variables and the CMM dynamic error value as the dependent variable. The experimental results show that the model is easily interpreted and indicates the magnitude and direction of each independent variable's influence on the dependent variable.
Systematic errors in the measurement of emissivity caused by directional effects.
Kribus, Abraham; Vishnevetsky, Irna; Rotenberg, Eyal; Yakir, Dan
2003-04-01
Accurate knowledge of surface emissivity is essential for applications in remote sensing (remote temperature measurement), radiative transport, and modeling of environmental energy balances. Direct measurements of surface emissivity are difficult when there is considerable background radiation at the same wavelength as the emitted radiation. This occurs, for example, when objects at temperatures near room temperature are measured in a terrestrial environment by use of the infrared 8-14-μm band. This problem is usually treated by assumption of a perfectly diffuse surface or of diffuse background radiation. However, real surfaces and actual background radiation are not diffuse; therefore there will be a systematic measurement error. It is demonstrated that, in some cases, the deviations from a diffuse behavior lead to large errors in the measured emissivity. Past measurements made with simplifying assumptions should therefore be reevaluated and corrected. Recommendations are presented for improving experimental procedures in emissivity measurement.
Differential correction technique for removing common errors in gas filter radiometer measurements.
Wallio, H A; Chan, C C; Gormsen, B B; Reichle, H G
1992-12-20
The Measurement of Air Pollution from Satellites (MAPS) gas filter radiometer experiment was designed to measure CO mixing ratios in the Earth's atmosphere. MAPS also measures N₂O to provide a reference channel for the atmospheric emitting temperature and to detect the presence of clouds. In this paper we formulate equations to correct the radiometric signals based on the spatial and temporal uniformity of the N₂O mixing ratio in the atmosphere. Results of an error study demonstrate that these equations reduce the error in inferred CO mixing ratios. Subsequent application of the technique to the MAPS 1984 data set decreases the error in the frequency distribution of mixing ratios and increases the number of usable data points.
Spatial regression with covariate measurement error: A semi-parametric approach
Huque, Md Hamidul; Bondell, Howard D.; Carroll, Raymond J.; Ryan, Louise M.
2015-01-01
Summary Spatial data have become increasingly common in epidemiology and public health research thanks to advances in GIS (Geographic Information Systems) technology. In health research, for example, it is common for epidemiologists to incorporate geographically indexed data into their studies. In practice, however, the spatially-defined covariates are often measured with error. Naive estimators of regression coefficients are attenuated if measurement error is ignored. Moreover, the classical measurement error theory is inapplicable in the context of spatial modelling because of the presence of spatial correlation among the observations. We propose a semi-parametric regression approach to obtain bias corrected estimates of regression parameters and derive their large sample properties. We evaluate the performance of the proposed method through simulation studies and illustrate using data on Ischemic Heart Disease (IHD). Both simulation and practical application demonstrate that the proposed method can be effective in practice. PMID:26788930
NASA Technical Reports Server (NTRS)
Huang, Hung-Lung; Smith, William L.; Woolf, Harold M.; Theriault, J. M.
1991-01-01
The purpose of this paper is to demonstrate the trace gas profiling capabilities of future passive high spectral resolution (1 cm⁻¹ or better) infrared (600 to 2700 cm⁻¹) satellite tropospheric sounders. These sounders, such as the grating spectrometer, Atmospheric InfRared Sounders (AIRS) (Chahine et al., 1990) and the interferometer, GOES High Resolution Interferometer Sounder (GHIS) (Smith et al., 1991), can provide these unique infrared spectra which enable us to conduct this analysis. In this calculation only the total random retrieval error component is presented. The systematic error components contributed by the forward and inverse model error are not considered (subject of further studies). The total random errors, which are composed of null space error (vertical resolution component error) and measurement error (instrument noise component error), are computed by assuming one-wavenumber spectral resolution with a wavenumber span from 1100 cm⁻¹ to 2300 cm⁻¹ (the band 600 cm⁻¹ to 1100 cm⁻¹ is not used since there is no major absorption by our three gases there) and measurement noise of 0.25 K at a reference temperature of 260 K. Temperature, water vapor, ozone and mixing ratio profiles of nitrous oxide, carbon monoxide and methane are taken from 1976 US Standard Atmosphere conditions (a FASCODE model). Covariance matrices of the gases are 'subjectively' generated by assuming 50 percent standard deviation of gaussian perturbation with respect to their US Standard model profiles. Minimum information and maximum likelihood retrieval solutions are used.
Sources of cumulative continuity in personality: a longitudinal multiple-rater twin study.
Kandler, Christian; Bleidorn, Wiebke; Riemann, Rainer; Spinath, Frank M; Thiel, Wolfgang; Angleitner, Alois
2010-06-01
This study analyzed the etiology of rank-order stability and change in personality over a time period of 13 years in order to explain cumulative continuity with age. NEO five-factor inventory self- and peer report data from 696 monozygotic and 387 dizygotic twin pairs reared together were analyzed using a combination of multiple-rater twin, latent state-trait, and autoregressive simplex models. Correcting for measurement error, this model disentangled genetic and environmental effects on long- and short-term convergent valid stability, on occasional influences, and on self- and peer report-specific stability. Genetic factors represented the main sources that contributed to phenotypic long-term stability of personality in young and middle adulthood, whereas change was predominantly attributable to environmental factors. Phenotypic continuity increased as a function of cumulative environmental effects, which became manifest in stable trait variance and decreasing occasion-specific effects with age. This study's findings suggest a complex interplay between genetic and environmental factors resulting in the typical patterns of continuity in personality across young and middle adulthood.
Impact of measurement error on testing genetic association with quantitative traits.
Liao, Jiemin; Li, Xiang; Wong, Tien-Yin; Wang, Jie Jin; Khor, Chiea Chuen; Tai, E Shyong; Aung, Tin; Teo, Yik-Ying; Cheng, Ching-Yu
2014-01-01
Measurement error of a phenotypic trait reduces the power to detect genetic associations. We examined the impact of sample size, allele frequency and effect size in presence of measurement error for quantitative traits. The statistical power to detect genetic association with phenotype mean and variability was investigated analytically. The non-centrality parameter for a non-central F distribution was derived and verified using computer simulations. We obtained equivalent formulas for the cost of phenotype measurement error. Effects of differences in measurements were examined in a genome-wide association study (GWAS) of two grading scales for cataract and a replication study of genetic variants influencing blood pressure. The mean absolute difference between the analytic power and simulation power for comparison of phenotypic means and variances was less than 0.005, and the absolute difference did not exceed 0.02. To maintain the same power, a one standard deviation (SD) in measurement error of a standard normal distributed trait required a one-fold increase in sample size for comparison of means, and a three-fold increase in sample size for comparison of variances. GWAS results revealed almost no overlap in the significant SNPs (p < 10⁻⁵) for the two cataract grading scales, while replication results in genetic variants of blood pressure displayed no significant differences between averaged blood pressure measurements and single blood pressure measurements. We have developed a framework for researchers to quantify power in the presence of measurement error, which will be applicable to studies of phenotypes in which the measurement is highly variable. PMID:24475218
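The sample-size claim for comparing means follows from a back-of-envelope argument: classical measurement error inflates the observed trait variance from var(x) to var(x) + var(e), so preserving the non-centrality parameter (hence power) requires scaling n by the same factor. The sketch below checks the abstract's numbers under that simplification; it is not the authors' non-central-F derivation.

```python
# Back-of-envelope check of the sample-size claim for comparing means:
# with classical measurement error, observed variance = var(x) + var(e),
# and sample size must grow by that inflation factor to keep the same
# non-centrality (and so the same power). Simplified model, not the
# paper's full non-central-F framework.

def n_inflation_for_means(var_trait, var_error):
    """Multiplier on sample size to preserve power when comparing means."""
    return (var_trait + var_error) / var_trait

# Standard normal trait with 1-SD measurement error: variance doubles,
# i.e. the "one-fold increase in sample size" quoted in the abstract.
factor = n_inflation_for_means(1.0, 1.0)
```

The three-fold increase quoted for comparing variances does not follow from this simple factor; it comes from the paper's separate derivation for variance tests.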
Individual Feedback to Enhance Rater Training: Does It Work?
ERIC Educational Resources Information Center
Elder, Cathie; Knoch, Ute; Barkhuizen, Gary; von Randow, Janet
2005-01-01
Research on the utility of feedback to raters in the form of performance reports has produced mixed findings (Lunt, Morton, & Wigglesworth, 1994; Wigglesworth, 1993) and has thus far been trialled only in oral assessment contexts. This article reports on a study investigating raters' attitudes and responsiveness to feedback on their ratings of an…
Training Raters to Assess Adult ADHD: Reliability of Ratings
ERIC Educational Resources Information Center
Adler, Lenard A.; Spencer, Thomas; Faraone, Stephen V.; Reimherr, Fred W.; Kelsey, Douglas; Michelson, David; Biederman, Joseph
2005-01-01
The standardization of ADHD ratings in adults is important given their differing symptom presentation. The authors investigated the agreement and reliability of rater standardization in a large-scale trial of atomoxetine in adults with ADHD. Training of 91 raters for the investigator-administered ADHD Rating Scale (ADHDRS-IV-Inv) occurred prior to…
ERIC Educational Resources Information Center
Televantou, Ioulia; Marsh, Herbert W.; Kyriakides, Leonidas; Nagengast, Benjamin; Fletcher, John; Malmberg, Lars-Erik
2015-01-01
The main objective of this study was to quantify the impact of failing to account for measurement error on school compositional effects. Multilevel structural equation models were incorporated to control for measurement error and/or sampling error. Study 1, a large sample of English primary students in Years 1 and 4, revealed a significantly…
On the impact of covariate measurement error on spatial regression modelling
Huque, Md Hamidul; Bondell, Howard; Ryan, Louise
2015-01-01
Summary Spatial regression models have grown in popularity in response to rapid advances in GIS (Geographic Information Systems) technology that allows epidemiologists to incorporate geographically indexed data into their studies. However, it turns out that there are some subtle pitfalls in the use of these models. We show that presence of covariate measurement error can lead to significant sensitivity of parameter estimation to the choice of spatial correlation structure. We quantify the effect of measurement error on parameter estimates, and then suggest two different ways to produce consistent estimates. We evaluate the methods through a simulation study. These methods are then applied to data on Ischemic Heart Disease (IHD). PMID:25729267
Influence of video compression on the measurement error of the television system
NASA Astrophysics Data System (ADS)
Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.
2015-05-01
Video data require a very large memory capacity, and finding an optimal quality/volume ratio for video encoding is a pressing problem given the need to transfer large amounts of video over various networks. Digital TV signal compression reduces the amount of data used to represent the video stream, effectively lowering the stream required for transmission and storage. When television measuring systems are used, the uncertainties introduced by compressing the video signal must be taken into account. There are many digital compression methods. The aim of the proposed work is to study the influence of video compression on the measurement error in television systems. The measurement error of an object parameter is the main characteristic of television measuring systems: accuracy characterizes the difference between the measured value and the actual parameter value. Errors introduced by the optical system are one source of error in television-system measurements; the method used to process the received video signal is another. With compression at a constant data stream rate, errors lead to large distortions; with constant quality, errors increase the amount of data required to transmit or record an image frame. The purpose of intra-coding is to reduce the spatial redundancy within a frame (or field) of the television image, redundancy caused by the strong correlation between the elements of the image. If a suitable orthogonal transformation can be found, an array of image samples can be converted into a matrix of coefficients that are uncorrelated with each other. Entropy coding can then be applied to these uncorrelated coefficients to achieve a reduction in the digital stream. A transformation can be chosen such that, for typical images, most of the matrix coefficients are almost zero. Excluding these zero coefficients also
Quantitative analyses of spectral measurement error based on Monte-Carlo simulation
NASA Astrophysics Data System (ADS)
Jiang, Jingying; Ma, Congcong; Zhang, Qi; Lu, Junsheng; Xu, Kexin
2015-03-01
The spectral measurement error is controlled by the resolution and the sensitivity of the spectroscopic instrument and by the instability of the involved environment. In this talk, the spectral measurement error is analyzed quantitatively using Monte Carlo (MC) simulation. Taking the floating-reference-point measurement as an example, there is unavoidably a deviation between the measuring position and the theoretical position due to various influencing factors. In order to determine the error caused by the positioning accuracy of the measuring device, Monte Carlo simulation was carried out at a wavelength of 1310 nm, simulating an Intralipid solution of 2%. The MC simulation was performed with 10^10 photons and a ring sampling interval of 1 μm. The data from the MC simulation are analyzed on the basis of the thinning and calculating method (TCM) proposed in this talk. The results indicate that TCM can be used to quantitatively analyze the spectral measurement error introduced by positioning inaccuracy.
Quantifying Systematic Errors and Total Uncertainties in Satellite-based Precipitation Measurements
NASA Astrophysics Data System (ADS)
Tian, Y.; Peters-Lidard, C. D.
2010-12-01
Determining the uncertainties in precipitation measurements by satellite remote sensing is of fundamental importance to many applications. These uncertainties result mostly from the interplay of systematic errors and random errors. In this presentation, we will summarize our recent efforts in quantifying the error characteristics in satellite-based precipitation estimates. Both systematic errors and total uncertainties have been analyzed for six different TRMM-era precipitation products (3B42, 3B42RT, CMORPH, PERSIANN, NRL and GSMaP). For systematic errors, we devised an error decomposition to separate errors in precipitation estimates into three independent components, hit biases, missed precipitation and false precipitation. This decomposition scheme reveals more error features and provides a better link to the error sources than conventional analysis, because in the latter these error components tend to cancel one another when aggregated or averaged in space or time. Our analysis reveals that the six different products share many error features. For example, they all detected strong precipitation (> 40 mm/day) well, but with various biases. They tend to over-estimate in summer and under-estimate in winter. They miss a significant amount of light precipitation (< 10 mm/day). In addition, hit biases and missed precipitation are the two leading error sources. However, their systematic errors also exhibit substantial differences, especially in winter and over rough topography, which greatly contribute to the uncertainties. To estimate the measurement uncertainties, we calculated the measurement spread from the ensemble of these six quasi-independent products. A global map of measurement uncertainties was thus produced. The map yields a global view of the error characteristics and their regional and seasonal variations, and reveals many undocumented error features over areas with no validation data available. The uncertainties are relatively small (40-60%) over the
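The error decomposition described can be sketched in a few lines; the threshold and toy rain rates below are illustrative assumptions, not the actual 3B42/CMORPH processing:

```python
import numpy as np

def decompose_errors(est, ref, thresh=0.0):
    """Split the total bias (est - ref) into hit bias, missed precipitation
    and false precipitation, so the components cannot cancel on averaging."""
    est, ref = np.asarray(est, float), np.asarray(ref, float)
    hits   = (est > thresh) & (ref > thresh)
    missed = (est <= thresh) & (ref > thresh)
    false_ = (est > thresh) & (ref <= thresh)
    hit_bias = np.where(hits, est - ref, 0.0).sum()
    missed_p = -np.where(missed, ref, 0.0).sum()  # lost rain: negative bias
    false_p  = np.where(false_, est, 0.0).sum()   # spurious rain: positive bias
    return hit_bias, missed_p, false_p, (est - ref).sum()

# Toy daily rain rates (mm/day): the three components sum to the total bias
hb, mp, fp, total = decompose_errors([5, 0, 3], [4, 2, 0])
assert abs((hb + mp + fp) - total) < 1e-9
```

Because each component is accumulated separately, aggregation in space or time preserves the sign of each error source instead of letting them cancel.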
A Measuring System with an Additional Channel for Eliminating the Dynamic Error
NASA Astrophysics Data System (ADS)
Dichev, Dimitar; Koev, Hristofor; Louda, Petr
2014-03-01
The present article presents a measuring system for determining the parameters of vessels. The system has high measurement accuracy when operating in both static and dynamic mode. It is designed on a gyro-free principle for plotting a vertical. High measurement accuracy is achieved by using a simplified design of the mechanical module as well as by minimizing the instrumental error. A new solution for improving the measurement accuracy in dynamic mode is offered. The approach presented is based on a method where the dynamic error is eliminated in real time, unlike the existing measurement methods and tools where stabilization of the vertical in inertial space is used. The results obtained from the theoretical experiments, which were performed on the basis of the developed mathematical model, demonstrate the effectiveness of the suggested measurement approach.
Liu, Shi Qiang; Zhu, Rong
2016-01-01
Error compensation of micromachined inertial measurement units (MIMU) is essential in practical applications. This paper presents a new compensation method using neural-network-based identification for MIMU, which solves the universal problems of cross-coupling, misalignment, eccentricity, and other deterministic errors existing in a three-dimensional integrated system. By using a neural network to model a complex multivariate and nonlinear coupling system, the errors can be readily compensated through a comprehensive calibration. In this paper, we also present a thermal-gas MIMU based on thermal expansion, which measures three-axis angular rates and three-axis accelerations using only three thermal-gas inertial sensors, each of which measures one-axis angular rate and one-axis acceleration simultaneously in one chip. The developed MIMU (100 × 100 × 100 mm3) possesses the advantages of simple structure, high shock resistance, and large measuring ranges (three-axis angular rates of ±4000°/s and three-axis accelerations of ±10 g) compared with conventional MIMU, owing to using a gas medium instead of a mechanical proof mass as the key moving and sensing element. However, the gas MIMU suffers from cross-coupling effects, which corrupt the system accuracy. The proposed compensation method is, therefore, applied to compensate the system errors of the MIMU. Experiments validate the effectiveness of the compensation: the measurement errors of the three-axis angular rates and three-axis accelerations are reduced to less than 1% and 3% of the uncompensated errors in the rotation range of ±600°/s and the acceleration range of ±1 g, respectively. PMID:26840314
Hjarbaek, John; Eshoej, Henrik; Larsen, Camilla Marie; Vobbe, Jette; Juul-Kristensen, Birgit
2016-01-01
Aim To evaluate the inter-rater reliability of measuring structural changes in the tendon of patients, clinically diagnosed with supraspinatus tendinopathy (cases) and healthy participants (controls), on ultrasound (US) images captured by standardised procedures. Methods A total of 40 participants (24 patients) were included for assessing inter-rater reliability of measurements of fibrillar disruption, neovascularity, as well as the number and total length of calcifications and tendon thickness. Linear weighted κ, intraclass correlation (ICC), SEM, limits of agreement (LOA) and minimal detectable change (MDC) were used to evaluate reliability. Results ‘Moderate—almost perfect’ κ was found for grading fibrillar disruption, neovascularity and number of calcifications (k 0.60–0.96). For total length of calcifications and tendon thickness, ICC was ‘excellent’ (0.85–0.90), with SEM(Agreement) ranging from 0.63 to 2.94 mm and MDC(group) ranging from 0.28 to 1.29 mm. In general, SEM, LOA and MDC showed larger variation for calcifications than for tendon thickness. Conclusions Inter-rater reliability was moderate to almost perfect when a standardised procedure was applied for measuring structural changes on captured US images and movie sequences of relevance for patients with supraspinatus tendinopathy. Future studies should test intra-rater and inter-rater reliability of the method in vivo for use in clinical practice, in addition to validation against a gold standard, such as MRI. Trial registration number NCT01984203; Pre-results. PMID:27221128
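The agreement statistics quoted (SEM and MDC) follow from standard formulas; a minimal sketch, assuming the usual definitions SEM = SD·sqrt(1 − ICC) and MDC95 = 1.96·sqrt(2)·SEM rather than code from the study:

```python
import math

def sem_agreement(sd, icc):
    # Standard error of measurement: SD * sqrt(1 - ICC)
    return sd * math.sqrt(1.0 - icc)

def mdc95(sem, n=1):
    # Minimal detectable change (95% level); for a group mean of n
    # subjects the SEM shrinks by sqrt(n)
    return 1.96 * math.sqrt(2.0) * sem / math.sqrt(n)

# Illustrative numbers (not the study's data): SD = 2.0 mm, ICC = 0.90
sem = sem_agreement(2.0, 0.90)
print(sem, mdc95(sem))
```

With these inputs the SEM is about 0.63 mm and the individual-level MDC about 1.75 mm, which illustrates why MDC values exceed SEM values in the reported ranges.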
An error compensation method of laser displacement sensor in the inclined surface measurement
NASA Astrophysics Data System (ADS)
Li, Feng; Xiong, Zhongxing; Li, Bin
2015-10-01
Laser triangulation displacement sensors are an important tool for non-contact displacement measurement and have been widely used in the field of freeform surface measurement. However, the measurement accuracy of such optical sensors is easily influenced by the geometrical shape and surface properties of the inspected surfaces. This study presents an error compensation method for the measurement of inclined surfaces using a 1D laser displacement sensor. The effect of the incident angle on the measurement results was investigated by analyzing the laser spot projected on the inclined surface. Both the shape and the light intensity distribution of the spot are influenced by the incident angle, which leads to measurement error. Because the beam spot size varies along the propagation axis according to Gaussian beam propagation laws, the spot projected on the inclined surface is approximately an ellipse. Notably, this ellipse is not fully symmetrical, since the spot size of a Gaussian beam differs at different positions. By analyzing how the spot shape changes, an error compensation model can be established. The method was verified by measuring a ceramic plane mounted on a high-accuracy 5-axis Mikron UCP 800 Duro milling center. The results show that the method is effective in increasing the measurement accuracy.
Design considerations for case series models with exposure onset measurement error
Mohammed, Sandra M.; Dalrymple, Lorien S.; Şentürk, Damla; Nguyen, Danh V.
2014-01-01
Summary The case series model allows for estimation of the relative incidence of events, such as cardiovascular events, within a pre-specified time window after an exposure, such as an infection. The method requires only cases (individuals with events) and controls for all fixed/time-invariant confounders. The measurement error case series model extends the original case series model to handle imperfect data, where the timing of an infection (exposure) is not known precisely. In this work, we propose a method for power/sample size determination for the measurement error case series model. Extensive simulation studies are used to assess the accuracy of the proposed sample size formulas. We also examine the magnitude of the relative loss of power due to exposure onset measurement error, compared to the ideal situation where the time of exposure is measured precisely. To facilitate the design of case series studies, we provide publicly available web-based tools for determining power/sample size for both the measurement error case series model as well as the standard case series model. PMID:22911898
Nano-metrology: The art of measuring X-ray mirrors with slope errors <100 nrad.
Alcock, Simon G; Nistea, Ioana; Sawhney, Kawal
2016-05-01
We present a comprehensive investigation of the systematic and random errors of the nano-metrology instruments used to characterize synchrotron X-ray optics at Diamond Light Source. With experimental skill and careful analysis, we show that these instruments used in combination are capable of measuring state-of-the-art X-ray mirrors. Examples are provided of how Diamond metrology data have helped to achieve slope errors of <100 nrad for optical systems installed on synchrotron beamlines, including: iterative correction of substrates using ion beam figuring and optimal clamping of monochromator grating blanks in their holders. Simulations demonstrate how random noise from the Diamond-NOM's autocollimator adds into the overall measured value of the mirror's slope error, and thus predict how many averaged scans are required to accurately characterize different grades of mirror. PMID:27250374
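The claim that averaged scans suppress the autocollimator's random noise can be made concrete; a sketch assuming independent scan noise that averages down as 1/sqrt(N) (the numbers are illustrative, not Diamond-NOM specifications):

```python
import math

def scans_required(noise_rms_nrad, target_nrad):
    """Scans N to average so that random noise, falling as 1/sqrt(N) for
    independent scans (an assumption), drops below the target slope error."""
    return math.ceil((noise_rms_nrad / target_nrad) ** 2)

# e.g. 50 nrad single-scan noise, 10 nrad noise budget
print(scans_required(50, 10))  # -> 25
```

The quadratic dependence explains why characterizing higher-grade mirrors (smaller error budgets) requires disproportionately more averaged scans.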
Observation of spectrum effect on the measurement of intrinsic error field on EAST
NASA Astrophysics Data System (ADS)
Wang, Hui-Hui; Sun, You-Wen; Qian, Jin-Ping; Shi, Tong-Hui; Shen, Biao; Gu, Shuai; Liu, Yue-Qiang; Guo, Wen-Feng; Chu, Nan; He, Kai-Yang; Jia, Man-Ni; Chen, Da-Long; Xue, Min-Min; Ren, Jie; Wang, Yong; Sheng, Zhi-Cai; Xiao, Bing-Jia; Luo, Zheng-Ping; Liu, Yong; Liu, Hai-Qing; Zhao, Hai-Lin; Zeng, Long; Gong, Xian-Zu; Liang, Yun-Feng; Wan, Bao-Nian; The EAST Team
2016-06-01
Intrinsic error field on EAST is measured using the 'compass scan' technique with different n = 1 magnetic perturbation coil configurations in ohmically heated discharges. The intrinsic error field measured using a non-resonant dominated spectrum with even connection of the upper and lower resonant magnetic perturbation coils is of the order b_r^{2,1}/B_T ≃ 10^-5, and the toroidal phase of the intrinsic error field is around 60°. A clear difference between the results using the two coil configurations, resonant and non-resonant dominated spectra, is observed. The 'resonant' and 'non-resonant' terminology is based on vacuum modeling. The penetration thresholds of the non-resonant dominated cases are much smaller than those of the resonant cases. The difference in penetration thresholds between the resonant and non-resonant cases is reduced by plasma response modeling using the MARS-F code.
Chen, Jeffrey R; Numata, Kenji; Wu, Stewart T
2014-10-20
We report new methods for retrieving atmospheric constituents from symmetrically-measured lidar-sounding absorption spectra. The forward model accounts for laser line-center frequency noise and broadened line-shape, and is essentially linearized by linking estimated optical-depths to the mixing ratios. Errors from the spectral distortion and laser frequency drift are substantially reduced by averaging optical-depths at each pair of symmetric wavelength channels. Retrieval errors from measurement noise and model bias are analyzed parametrically and numerically for multiple atmospheric layers, to provide deeper insight. Errors from surface height and reflectance variations are reduced to tolerable levels by "averaging before log" with pulse-by-pulse ranging knowledge incorporated.
NASA Astrophysics Data System (ADS)
Lee, Y.; Keehm, Y.
2011-12-01
Estimating the degree of weathering of stone cultural heritage, such as pagodas and statues, is very important for planning conservation and restoration. Ultrasonic measurement is one of the most commonly used techniques for evaluating the weathering index of stone cultural properties, since it is easy to use and non-destructive. Typically a portable ultrasonic device, PUNDIT, with exponential sensors is used. However, many factors can cause measurement errors, such as the operator, the sensor layout, or the measurement direction. In this study, we carried out a variety of measurements with different operators (male and female), different sensor layouts (direct and indirect), and different sensor directions (anisotropy). For operator bias, we found no significant differences by the operator's sex, while the pressure an operator exerts can create larger errors; calibrating with a standard sample for each operator is therefore essential. For the sensor layout, we found that the indirect measurement (commonly used for cultural properties, since direct measurement is difficult in most cases) gives a lower velocity than the true one. The correction coefficient differs slightly by rock type: 1.50 for granite and sandstone and 1.46 for marble. For sensor directions, we found that many rocks show slight anisotropy in ultrasonic velocity, though they are considered isotropic at the macroscopic scale. Thus averaging four directional measurements (0°, 45°, 90°, 135°) gives much smaller measurement errors (the variance is 2-3 times smaller). In conclusion, we quantitatively reported the errors in ultrasonic measurement of stone cultural properties from various sources and suggested corrections and procedures to calibrate the measurements. Acknowledgement: This study, which forms a part of the project, has been achieved with the support of national R&D project, which has been hosted by
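The reported correction coefficients and four-direction averaging can be combined in a small helper; a sketch under the assumption that the coefficients apply multiplicatively to the indirect reading (the function name and sample velocities are hypothetical):

```python
import statistics

# Correction coefficients reported above (indirect -> corrected velocity)
CORRECTION = {"granite": 1.50, "sandstone": 1.50, "marble": 1.46}

def corrected_velocity(indirect_vels, rock):
    """Average four directional indirect readings (0, 45, 90, 135 deg) to
    suppress anisotropy, then scale by the rock-specific coefficient."""
    return CORRECTION[rock] * statistics.mean(indirect_vels)

print(corrected_velocity([3000, 3100, 2950, 3050], "marble"))  # m/s, toy values
```

Averaging the four directions before applying the coefficient addresses the anisotropy error and the layout bias in one step.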
Wide-aperture laser beam measurement using transmission diffuser: errors modeling
NASA Astrophysics Data System (ADS)
Matsak, Ivan S.
2015-06-01
Instrumental errors in measuring the diameter of a wide-aperture laser beam were modeled in order to build a measurement setup and justify its metrological characteristics. The modeled setup is based on a CCD camera and a transmission diffuser. The method is appropriate for precision measurement of large laser beam widths from 10 mm up to 1000 mm; such beams cannot be measured with other methods based on a slit, pinhole, knife edge, or direct CCD camera measurement. The method is suitable for continuous and pulsed laser irradiation. However, the transmission-diffuser method lacks the metrological justification required in the field of wide-aperture beam-forming system verification. Given the non-availability of a standard wide-aperture flat-top beam, modeling is the preferred way to provide basic reference points for developing the measurement system. Modeling was conducted in MathCAD. A super-Lorentz distribution with shape parameter 6-12 was used as the beam model. Theoretical evaluation showed that the key parameters influencing the error are: relative beam size, spatial non-uniformity of the diffuser, lens distortion, physical vignetting, CCD spatial resolution, and effective camera ADC resolution. Errors were modeled for the 90%-of-power beam diameter criterion. The 12th-order super-Lorentz distribution was the primary model, because it precisely matches the experimental distribution at the output of the test beam-forming system, although other orders were also used. Analytic expressions were obtained by analyzing the modeling results for each influencing factor. It was shown that an error of <1% is attainable through the choice of parameters in these expressions, based on commercially available components of the setup. The method can provide down to 0.1% error when calibration procedures and multiple measurements are used.
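The 90%-of-power diameter of a super-Lorentz beam can be evaluated numerically; a sketch assuming the common parameterization I(r) = 1/(1 + (r/w)^p), which may differ from the exact model used in the paper:

```python
import numpy as np

def d90(order=12, w=1.0, rmax=5.0, n=200001):
    # Assumed super-Lorentz radial profile: I(r) = 1 / (1 + (r/w)**order)
    r = np.linspace(0.0, rmax, n)
    power = np.cumsum(r / (1.0 + (r / w) ** order))  # encircled power, unnormalized
    power /= power[-1]
    return 2.0 * r[np.searchsorted(power, 0.90)]     # 90%-power diameter

# A 12th-order super-Lorentz beam is nearly flat-topped: D90 is close to 2w
print(d90())
```

Lower orders spread more power into the wings, which is why the shape parameter matters for the diameter criterion.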
Dong, Zhichao; Cheng, Haobo; Feng, Yunpeng; Su, Jingshi; Wu, Hengyu; Tam, Hon-Yuen
2015-07-01
This study presents a subaperture stitching method to calibrate system errors of several ∼2 m large scale 3D profile measurement instruments (PMIs). The calibration process was carried out by measuring a Φ460 mm standard flat sample multiple times at different sites of the PMI with a length gauge; then the subaperture data were stitched together using a sequential or simultaneous stitching algorithm that minimizes the inconsistency (i.e., difference) of the discrete data in the overlapped areas. The system error can be used to compensate the measurement results of not only large flats, but also spheres and aspheres. The feasibility of the calibration was validated by measuring a Φ1070 mm aspheric mirror, which can raise the measurement accuracy of PMIs and provide more reliable 3D surface profiles for guiding grinding, lapping, and even initial polishing processes. PMID:26193139
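The stitching idea, minimizing the inconsistency of the data in overlapped areas, can be illustrated in 1D with piston-only correction; a deliberately simplified sketch (the real PMI calibration also handles tilt and 2D data):

```python
import numpy as np

def stitch_1d(subs, starts):
    """Sequentially stitch 1D subaperture height profiles: remove the piston
    (mean offset) minimizing the mismatch in each overlap, then average the
    overlapped samples. Start indices are assumed known from stage positions."""
    n = max(s + len(a) for s, a in zip(starts, subs))
    full = np.full(n, np.nan)
    for s, a in zip(starts, subs):
        seg = np.asarray(a, float)
        idx = np.arange(s, s + len(seg))
        overlap = ~np.isnan(full[idx])
        if overlap.any():  # align this subaperture to what is already stitched
            seg = seg - np.mean(seg[overlap] - full[idx][overlap])
        full[idx] = np.where(overlap, (full[idx] + seg) / 2.0, seg)
    return full

a = np.array([0.0, 1.0, 2.0, 3.0])
b = np.array([12.0, 13.0, 14.0, 15.0])  # same slope, 10-unit piston offset
print(stitch_1d([a, b], [0, 2]))        # recovers one continuous profile
```

A simultaneous variant would instead solve one least-squares system for all piston terms at once, which avoids accumulating alignment errors along the stitching chain.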
Birch, Gabriel Carisle; Griffin, John Clark
2015-07-23
Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.
[Errors in medicine. Causes, impact and improvement measures to improve patient safety].
Waeschle, R M; Bauer, M; Schmidt, C E
2015-09-01
The guarantee of quality of care and patient safety is of major importance in hospitals, even though increased economic pressure and work intensification are ubiquitous. Nevertheless, adverse events still occur in 3-4 % of hospital stays, and of these 25-50 % are estimated to be avoidable. The identification of possible causes of error and the development of measures for the prevention of medical errors are essential for patient safety. The implementation and continuous development of a constructive culture of error tolerance are fundamental. The origins of errors can be differentiated into systemic latent and individual active causes; components of both categories are typically involved when an error occurs. Systemic causes include, for example, outdated structural environments, lack of clinical standards and low personnel density. These causes arise far away from the patient, e.g. in management decisions, and can remain unrecognized for a long time. Individual causes involve, e.g. confirmation bias, fixation error and prospective memory failure. These causes have a direct impact on patient care and can result in immediate injury to patients. Stress, unclear information, complex systems and a lack of professional experience can promote individual causes. Awareness of possible causes of error is a fundamental precondition for establishing appropriate countermeasures. Error prevention should include actions directly targeting the causes of error, including checklists and standard operating procedures (SOP) to avoid fixation and prospective memory failure, and team resource management to improve communication and the generation of collective mental models. Critical incident reporting systems (CIRS) provide the opportunity to learn from previous incidents without resulting in injury to patients. Information technology (IT) support systems, such as the computerized physician order entry system, assist in the prevention of medication errors by providing
Rater Agreement on Interpersonal Psychotherapy Problem Areas
Markowitz, John C.; Leon, Andrew C.; Miller, Nina L.; Cherry, Sabrina; Clougherty, Kathleen F.; Villalobos, Liliana
2000-01-01
There has been much outcome research on interpersonal psychotherapy (IPT) but little investigation of its components. This study assessed interrater reliability of IPT therapists in identifying interpersonal problem areas and treatment foci from audiotapes of initial treatment sessions. Three IPT research psychotherapists assessed up to 18 audiotapes of dysthymic patients, using the Interpersonal Problem Area Rating Scale. Cohen's kappa was used to examine concordance between raters. Kappas for presence or absence of each of the four IPT problem areas were 0.87 (grief), 0.58 (role dispute), 1.0 (role transition), and 0.48 (interpersonal deficits). Kappa for agreement on a clinical focus was 0.82. IPT therapists agreed closely in rating problem areas and potential treatment foci, providing empirical support for potential therapist consistency in this treatment approach. PMID:10896737
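Cohen's kappa as used here is straightforward to compute; a minimal sketch with hypothetical rater labels (not the study's data):

```python
def cohens_kappa(a, b):
    """Unweighted Cohen's kappa for two raters' categorical labels."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n                   # observed agreement
    cats = set(a) | set(b)
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)  # chance agreement
    return (po - pe) / (1.0 - pe)

# Hypothetical problem-area labels for four taped sessions
r1 = ["grief", "dispute", "grief", "transition"]
r2 = ["grief", "grief", "grief", "transition"]
print(cohens_kappa(r1, r2))
```

Kappa corrects the raw agreement rate for the agreement expected by chance, which is why it can be well below the proportion of matching labels.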
Effect of sampling variation on error of rainfall variables measured by optical disdrometer
NASA Astrophysics Data System (ADS)
Liu, X. C.; Gao, T. C.; Liu, L.
2012-12-01
During the sampling of precipitation particles by optical disdrometers, the randomness of particles and sampling variability have a great impact on the accuracy of precipitation variables. Based on a marked point model of raindrop size distribution, the effects of sampling variation on drop size distribution and velocity distribution measurements by optical disdrometers are analyzed by Monte Carlo simulation. The results show that the number of samples, the rain rate, the drop size distribution, and the sampling size influence the accuracy of rainfall variables in different ways. The relative errors of rainfall variables caused by sampling variation, in descending order, are: water concentration, mean diameter, mass-weighted mean diameter, mean volume diameter, radar reflectivity factor, and number density; these are largely independent of the number of samples. The relative errors of rain variables are positively correlated with the margin probability, which in turn is positively correlated with the rain rate and the mean raindrop diameter. The sampling size is one of the main factors influencing the margin probability: as the sampling area decreases, especially its short side, the probability of margin raindrops increases, hence the errors of rain variables increase, and the variables of median-size raindrops have the maximum error. To keep the relative error of rainfall variables measured by an optical disdrometer below 1%, the width of the light beam should be at least 40 mm.
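The margin probability, the chance that a raindrop partially crosses the border of the sampling area, can be sketched geometrically, assuming drop centers land uniformly over a rectangular light sheet (the beam dimensions below are illustrative):

```python
def margin_probability(diam, width, length):
    """Chance that a drop of diameter `diam` whose centre lands uniformly
    over a `width` x `length` rectangular light sheet crosses the border
    (a 'margin' drop): its centre falls within diam/2 of an edge."""
    if diam >= width or diam >= length:
        return 1.0
    return 1.0 - (width - diam) * (length - diam) / (width * length)

# Widening the short side of the beam lowers the margin probability
p30 = margin_probability(2.0, 30.0, 180.0)
p40 = margin_probability(2.0, 40.0, 180.0)
print(p30, p40)
```

The formula makes the short-side dependence explicit: for a long, narrow sheet the (width − diam)/width factor dominates, consistent with the 40 mm beam-width recommendation.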
Lim, Sungwoo; Wyker, Brett; Bartley, Katherine; Eisenhower, Donna
2015-05-01
Because it is difficult to objectively measure population-level physical activity levels, self-reported measures have been used as a surveillance tool. However, little is known about their validity in populations living in dense urban areas. We aimed to assess the validity of self-reported physical activity data against accelerometer-based measurements among adults living in New York City and to apply a practical tool to adjust for measurement error in complex sample data using a regression calibration method. We used 2 components of data: 1) dual-frame random digit dialing telephone survey data from 3,806 adults in 2010-2011 and 2) accelerometer data from a subsample of 679 survey participants. Self-reported physical activity levels were measured using a version of the Global Physical Activity Questionnaire, whereas data on weekly moderate-equivalent minutes of activity were collected using accelerometers. Two self-reported health measures (obesity and diabetes) were included as outcomes. Participants with higher accelerometer values were more likely to underreport the actual levels. (Accelerometer values were considered to be the reference values.) After correcting for measurement errors, we found that associations between outcomes and physical activity levels were substantially deattenuated. Despite difficulties in accurately monitoring physical activity levels in dense urban areas using self-reported data, our findings show the importance of performing a well-designed validation study because it allows for understanding and correcting measurement errors.
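Regression calibration as described can be sketched as follows, assuming a simple linear calibration model fitted in the validation subsample (toy numbers, not the NYC survey data):

```python
import numpy as np

def regression_calibration(x_val, t_val, x_main):
    """Fit E[T | X] = a + b*X in the validation subsample (X = self-report,
    T = accelerometer) and replace main-sample self-reports with their
    calibrated expectations. The linear model is an assumption here."""
    b, a = np.polyfit(x_val, t_val, 1)
    return a + b * np.asarray(x_main, dtype=float)

# Toy data: self-reports overstate activity by about a factor of two
x_val = np.array([100.0, 200.0, 300.0, 400.0])  # self-reported min/week
t_val = np.array([50.0, 100.0, 150.0, 200.0])   # accelerometer min/week
print(regression_calibration(x_val, t_val, np.array([250.0])))  # ~[125.]
```

Refitting the outcome model on the calibrated values, rather than the raw self-reports, is what removes the attenuation in the estimated associations.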
Lim, Sungwoo; Wyker, Brett; Bartley, Katherine; Eisenhower, Donna
2015-01-01
Because it is difficult to objectively measure population-level physical activity levels, self-reported measures have been used as a surveillance tool. However, little is known about their validity in populations living in dense urban areas. We aimed to assess the validity of self-reported physical activity data against accelerometer-based measurements among adults living in New York City and to apply a practical tool to adjust for measurement error in complex sample data using a regression calibration method. We used 2 components of data: 1) dual-frame random digit dialing telephone survey data from 3,806 adults in 2010–2011 and 2) accelerometer data from a subsample of 679 survey participants. Self-reported physical activity levels were measured using a version of the Global Physical Activity Questionnaire, whereas data on weekly moderate-equivalent minutes of activity were collected using accelerometers. Two self-reported health measures (obesity and diabetes) were included as outcomes. Participants with higher accelerometer values were more likely to underreport the actual levels. (Accelerometer values were considered to be the reference values.) After correcting for measurement errors, we found that associations between outcomes and physical activity levels were substantially deattenuated. Despite difficulties in accurately monitoring physical activity levels in dense urban areas using self-reported data, our findings show the importance of performing a well-designed validation study because it allows for understanding and correcting measurement errors. PMID:25855646
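The regression calibration idea used in this study can be sketched with synthetic data: fit a model for the reference exposure given the self-report in the validation subsample, then substitute calibrated values in the outcome model. The coefficients, noise levels, and continuous outcome below are assumptions for illustration, not the study's actual models.

```python
import numpy as np

rng = np.random.default_rng(1)

n, n_val = 3806, 679                         # survey and accelerometer subsample sizes
x = rng.normal(300, 100, n)                  # "true" weekly activity minutes
z = 0.5 * x + 50 + rng.normal(0, 80, n)      # error-prone self-report
y = -0.005 * x + rng.normal(0, 1, n)         # continuous stand-in for a health outcome

naive = np.polyfit(z, y, 1)[0]               # regress outcome on self-report directly

# regression calibration: model E[x | z] in the validation subsample that has
# both measures, then substitute the calibrated exposure in the outcome model
val = rng.choice(n, n_val, replace=False)
b, a = np.polyfit(z[val], x[val], 1)
x_hat = a + b * z
calibrated = np.polyfit(x_hat, y, 1)[0]
print(naive, calibrated)
```

The naive slope is attenuated toward zero; the calibrated slope recovers (approximately) the true association, matching the "deattenuation" the abstract reports.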
Theoretical analysis of errors when estimating snow distribution through point measurements
NASA Astrophysics Data System (ADS)
Trujillo, E.; Lehning, M.
2015-06-01
In recent years, marked improvements in our knowledge of the statistical properties of the spatial distribution of snow properties have been achieved thanks to improvements in measuring technologies (e.g., LIDAR, terrestrial laser scanning (TLS), and ground-penetrating radar (GPR)). Despite this, objective and quantitative frameworks for the evaluation of errors in snow measurements have been lacking. Here, we present a theoretical framework for quantitative evaluations of the uncertainty in average snow depth derived from point measurements over a profile section or an area. The error is defined as the expected value of the squared difference between the real mean of the profile/field and the sample mean from a limited number of measurements. The model is tested for one- and two-dimensional survey designs that range from a single measurement to an increasing number of regularly spaced measurements. Using high-resolution (~ 1 m) LIDAR snow depths at two locations in Colorado, we show that the sample errors follow the theoretical behavior. Furthermore, we show how the determination of the spatial location of the measurements can be reduced to an optimization problem for the case of the predefined number of measurements, or to the designation of an acceptable uncertainty level to determine the total number of regularly spaced measurements required to achieve such an error. On this basis, a series of figures are presented as an aid for snow survey design under the conditions described, and under the assumption of prior knowledge of the spatial covariance/correlation properties. With this methodology, better objective survey designs can be accomplished that are tailored to the specific applications for which the measurements are going to be used. The theoretical framework can be extended to other spatially distributed snow variables (e.g., SWE - snow water equivalent) whose statistical properties are comparable to those of snow depth.
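The error measure defined above (expected squared difference between the true transect mean and the sample mean) can be evaluated numerically as a quadratic form, given an assumed covariance model. The exponential covariance, correlation length, and transect length below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def mean_sampling_error(n_samples, length=1000.0, corr_len=50.0, sigma=1.0, m=1200):
    """Expected squared difference between the true mean of a transect and
    the mean of n regularly spaced point samples, under an assumed
    exponential covariance model, via a fine-grid quadratic form."""
    s = np.linspace(0.0, length, m)
    C = sigma**2 * np.exp(-np.abs(s[:, None] - s[None, :]) / corr_len)
    w_true = np.full(m, 1.0 / m)                   # weights of the transect mean
    idx = np.linspace(0, m - 1, n_samples).round().astype(int)
    w_s = np.zeros(m)
    w_s[idx] = 1.0 / n_samples                     # weights of the sample mean
    d = w_s - w_true
    return float(d @ C @ d)                        # E[(sample mean - true mean)^2]

for n in (1, 2, 5, 10, 20):
    print(n, mean_sampling_error(n))
```

As in the paper's survey-design use case, one can invert this: increase `n_samples` until the returned error drops below an acceptable uncertainty level.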
High dimensional linear regression models under long memory dependence and measurement error
NASA Astrophysics Data System (ADS)
Kaul, Abhishek
This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates the problems of interest; a brief literature review is also provided. The second chapter investigates the properties of the Lasso under long-range dependent model errors. The Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution and then show asymptotic sign consistency in this setup. These results are established in the high dimensional setting (p > n), where p can increase exponentially with n. Finally, in the case where p is fixed, we show the n^(1/2-d)-consistency of the Lasso, along with the oracle property of the adaptive Lasso, where d is the memory parameter of the stationary error sequence. The performance of the Lasso in this setup is also analyzed in a simulation study. The third chapter proposes and investigates the properties of a penalized quantile-based estimator for measurement error models. Standard formulations of prediction problems in high dimensional regression models assume the availability of fully observed covariates and sub-Gaussian, homogeneous model errors. This makes these methods inapplicable to measurement error models, where covariates are unobservable and observations may be non-sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where the unobservable covariates are nonrandom. The proposed estimators forgo the need for the above-mentioned model assumptions. We study these estimators in both the fixed dimension and high dimensional sparse setups; in the latter setup, the
Using Computation Curriculum-Based Measurement Probes for Error Pattern Analysis
ERIC Educational Resources Information Center
Dennis, Minyi Shih; Calhoon, Mary Beth; Olson, Christopher L.; Williams, Cara
2014-01-01
This article describes how "curriculum-based measurement--computation" (CBM-C) mathematics probes can be used in combination with "error pattern analysis" (EPA) to pinpoint difficulties in basic computation skills for students who struggle with learning mathematics. Both assessment procedures provide ongoing assessment data…
ERIC Educational Resources Information Center
Tan Sisman, Gulcin; Aksu, Meral
2016-01-01
The purpose of the present study was to portray students' misconceptions and errors while solving conceptually and procedurally oriented tasks involving length, area, and volume measurement. The data were collected from 445 sixth grade students attending public primary schools in Ankara, Türkiye via a test composed of 16 constructed-response…
Sensitivity of Force Specifications to the Errors in Measuring the Interface Force
NASA Technical Reports Server (NTRS)
Worth, Daniel
2000-01-01
Force-Limited Random Vibration Testing has been applied in the last several years at the NASA Goddard Space Flight Center (GSFC) and other NASA centers for various programs at the instrument and spacecraft level. Different techniques have been developed over the last few decades to estimate the dynamic forces that the test article under consideration will encounter in the flight environment. Some of these techniques are described in the handbook, NASA-HDBK-7004, and the monograph, NASA-RP-1403. This paper will show the effects of some measurement and calibration errors in force gauges. In some cases, the notches in the acceleration spectrum when a random vibration test is performed with measurement errors are the same as the notches produced during a test that has no measurement errors. The paper will also present the results of tests that were used to validate this effect. Knowing the effect of measurement errors can allow tests to continue after force gauge failures or allow dummy gauges to be used in places that are inaccessible to a force gauge.
Multiple imputation to account for measurement error in marginal structural models
Edwards, Jessie K.; Cole, Stephen R.; Westreich, Daniel; Crane, Heidi; Eron, Joseph J.; Mathews, W. Christopher; Moore, Richard; Boswell, Stephen L.; Lesko, Catherine R.; Mugavero, Michael J.
2015-01-01
Background: Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and non-differential measurement error in a marginal structural model. Methods: We illustrate the method by estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. Results: In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality [hazard ratio (HR): 1.2; 95% CI: 0.6, 2.3]. The HR for current smoking and therapy (0.4; 95% CI: 0.2, 0.7) was similar to the HR for no smoking and therapy (0.4; 95% CI: 0.2, 0.6). Conclusions: Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies. PMID:26214338
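The imputation mechanics behind this approach can be sketched with synthetic data. The study fits a weighted Cox marginal structural model; the toy below swaps in a crude risk-difference contrast and an assumed 80% classification accuracy, purely to show how an internal validation subgroup supports multiple imputation of a misclassified exposure.

```python
import numpy as np

rng = np.random.default_rng(2)

n, n_val = 50_000, 10_000
x = rng.binomial(1, 0.3, n)                      # true smoking status
z = np.where(rng.random(n) < 0.8, x, 1 - x)      # misclassified self-report (assumed 80% accurate)
y = rng.binomial(1, 0.05 + 0.05 * x)             # outcome; smoking doubles risk here

val = np.zeros(n, bool)
val[rng.choice(n, n_val, replace=False)] = True  # internal validation subgroup

# imputation model: P(true status | reported status, outcome), from validation
p_imp = {(zz, yy): x[val & (z == zz) & (y == yy)].mean()
         for zz in (0, 1) for yy in (0, 1)}

def risk_diff(xx):
    return y[xx == 1].mean() - y[xx == 0].mean()

naive = risk_diff(z)                             # ignores misclassification

M = 25                                           # multiple imputation draws
probs = np.array([p_imp[(zz, yy)] for zz, yy in zip(z, y)])
draws = []
for _ in range(M):
    x_imp = np.where(val, x, rng.binomial(1, probs))  # keep known true values
    draws.append(risk_diff(x_imp))
corrected = float(np.mean(draws))
print(naive, corrected)
```

The naive contrast is attenuated toward the null, while the imputation-based estimate recovers the assumed true risk difference; in a real analysis the imputation model would also condition on covariates and Rubin's rules would combine the per-imputation variances.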
Measurement Error of Scores on the Mathematics Anxiety Rating Scale across Studies.
ERIC Educational Resources Information Center
Capraro, Mary Margaret; Capraro, Robert M.; Henson, Robin K.
2001-01-01
Submitted the Mathematics Anxiety Rating Scale (MARS) (F. Richardson and R. Suinn, 1972) to a reliability generalization analysis to characterize the variability of measurement error in MARS scores across administrations and identify characteristics predictive of score reliability variations. Results for 67 analyses generally support the internal…
The Impact of Measurement Error on the Accuracy of Individual and Aggregate SGP
ERIC Educational Resources Information Center
McCaffrey, Daniel F.; Castellano, Katherine E.; Lockwood, J. R.
2015-01-01
Student growth percentiles (SGPs) express students' current observed scores as percentile ranks in the distribution of scores among students with the same prior-year scores. A common concern about SGPs at the student level, and mean or median SGPs (MGPs) at the aggregate level, is potential bias due to test measurement error (ME). Shang,…
Measurement Error in Nonparametric Item Response Curve Estimation. Research Report. ETS RR-11-28
ERIC Educational Resources Information Center
Guo, Hongwen; Sinharay, Sandip
2011-01-01
Nonparametric, or kernel, estimation of item response curve (IRC) is a concern theoretically and operationally. Accuracy of this estimation, often used in item analysis in testing programs, is biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. In this study, we investigate…
Covariate Measurement Error Adjustment for Multilevel Models with Application to Educational Data
ERIC Educational Resources Information Center
Battauz, Michela; Bellio, Ruggero; Gori, Enrico
2011-01-01
This article proposes a multilevel model for the assessment of school effectiveness where the intake achievement is a predictor and the response variable is the achievement in the subsequent periods. The achievement is a latent variable that can be estimated on the basis of an item response theory model and hence subject to measurement error.…
Correlation Attenuation Due to Measurement Error: A New Approach Using the Bootstrap Procedure
ERIC Educational Resources Information Center
Padilla, Miguel A.; Veprinsky, Anna
2012-01-01
Issues with correlation attenuation due to measurement error are well documented. More than a century ago, Spearman proposed a correction for attenuation. However, this correction has seen very little use since it can potentially inflate the true correlation beyond one. In addition, very little confidence interval (CI) research has been done for…
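Spearman's correction and a percentile-bootstrap interval for it can be sketched as follows. The reliabilities are treated as known here, and all data are synthetic; the capping-at-one issue the abstract mentions is exactly what the CI helps diagnose.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 2000
t = rng.normal(size=n)                                    # true score on test X
ty = 0.6 * t + np.sqrt(1 - 0.36) * rng.normal(size=n)     # true score on Y; true r = 0.6
x = t + rng.normal(0, 0.6, n)                             # observed, error-prone scores
y = ty + rng.normal(0, 0.8, n)
rel_x, rel_y = 1 / 1.36, 1 / 1.64                         # assumed known reliabilities

def disattenuated(x, y):
    # Spearman's correction for attenuation: r_true = r_xy / sqrt(rel_x * rel_y)
    return np.corrcoef(x, y)[0, 1] / np.sqrt(rel_x * rel_y)

observed = np.corrcoef(x, y)[0, 1]
corrected = disattenuated(x, y)

# percentile bootstrap CI for the corrected correlation
boot = []
for _ in range(1000):
    i = rng.integers(0, n, n)
    boot.append(disattenuated(x[i], y[i]))
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
print(observed, corrected, (ci_lo, ci_hi))
```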
ERIC Educational Resources Information Center
Pan, Tianshu; Yin, Yue
2012-01-01
In the discussion of mean square difference (MSD) and standard error of measurement (SEM), Barchard (2012) concluded that the MSD between 2 sets of test scores is greater than 2(SEM)² and SEM underestimates the score difference between 2 tests when the 2 tests are not parallel. This conclusion has limitations for 2 reasons. First,…
Regularization methods used in error analysis of solar particle spectra measured on SOHO/EPHIN
NASA Astrophysics Data System (ADS)
Kharytonov, A.; Böhm, E.; Wimmer-Schweingruber, R. F.
2009-02-01
Context: The telescope EPHIN (Electron, Proton, Helium INstrument) on the SOHO (SOlar and Heliospheric Observatory) spacecraft measures the energy deposit of solar particles passing through the detector system. The original energy spectrum of solar particles is obtained by regularization methods from EPHIN measurements. It is important not only to obtain the solution of this inverse problem but also to estimate errors or uncertainties of the solution. Aims: The focus of this paper is to evaluate the influence of errors or noise in the instrument response function (IRF) and in the measurements when calculating energy spectra in space-based observations by regularization methods. Methods: The basis of solar particle spectra calculation is the Fredholm integral equation with the instrument response function as the kernel that is obtained by the Monte Carlo technique in matrix form. The original integral equation reduces to a singular system of linear algebraic equations. The nonnegative solution is obtained by optimization with constraints. For the starting value we use the solution of the algebraic problem that is calculated by regularization methods such as the singular value decomposition (SVD) or the Tikhonov methods. We estimate the local errors from special algebraic and statistical equations that are considered as direct or inverse problems. Inverse problems for the evaluation of errors are solved by regularization methods. Results: This inverse approach with error analysis is applied to data from the solar particle event observed by SOHO/EPHIN on day 1996/191. We find that the various methods have different strengths and weaknesses in the treatment of statistical and systematic errors.
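The Tikhonov step described here can be sketched on a toy discretized Fredholm equation (the kernel below is a synthetic Gaussian smoother, not the EPHIN instrument response function obtained by Monte Carlo):

```python
import numpy as np

rng = np.random.default_rng(4)

# discretized Fredholm problem y = R @ f with an ill-conditioned smoothing kernel
m = 80
s = np.linspace(0, 1, m)
R = np.exp(-((s[:, None] - s[None, :]) ** 2) / (2 * 0.03**2))
R /= R.sum(axis=1, keepdims=True)
f_true = np.exp(-((s - 0.4) ** 2) / 0.005) + 0.5 * np.exp(-((s - 0.75) ** 2) / 0.002)
y = R @ f_true + rng.normal(0, 0.005, m)           # noisy measurements

def tikhonov(R, y, alpha):
    """Solve min ||R f - y||^2 + alpha ||f||^2 via the SVD of R."""
    U, sv, Vt = np.linalg.svd(R, full_matrices=False)
    filt = sv / (sv**2 + alpha)                    # filtered inverse singular values
    return Vt.T @ (filt * (U.T @ y))

f_naive = np.linalg.solve(R, y)                    # unregularized: noise blows up
f_reg = tikhonov(R, y, alpha=1e-3)
err_naive = np.abs(f_naive - f_true).max()
err_reg = np.abs(f_reg - f_true).max()
print(err_naive, err_reg)
```

Setting `alpha = 0` recovers the unstable direct inverse; a nonnegativity constraint and the error propagation analysis of the paper would be layered on top of a solution like `f_reg`.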
Error analysis for the ground-based microwave ozone measurements during STOIC
NASA Astrophysics Data System (ADS)
Connor, Brian J.; Parrish, Alan; Tsou, Jung-Jung; McCormick, M. Patrick
1995-05-01
We present a formal error analysis and characterization of the microwave measurements made during the Stratospheric Ozone Intercomparison Campaign (STOIC). The most important error sources are found to be determination of the tropospheric opacity, the pressure-broadening coefficient of the observed line, and systematic variations in instrument response as a function of frequency ("baseline"). Net precision is 4-6% between 55 and 0.2 mbar, while accuracy is 6-10%. Resolution is 8-10 km below 3 mbar and increases to 17 km at 0.2 mbar. We show the "blind" microwave measurements from STOIC and make limited comparisons to other measurements. We use the averaging kernels of the microwave measurement to eliminate resolution and a priori effects from a comparison to SAGE II. The STOIC results and comparisons are broadly consistent with the formal analysis.
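The averaging-kernel comparison mentioned here follows the standard smoothing formula x_smooth = x_a + A(x_high - x_a), which maps a high-resolution profile onto the resolution and a priori of the low-resolution retrieval. The kernel matrix and profiles below are toy values, not the STOIC microwave retrieval.

```python
import numpy as np

def convolve_with_kernels(x_high, x_apriori, A):
    """Smooth a high-resolution profile (e.g., SAGE II ozone) with the
    averaging kernels A of a low-resolution retrieval so the two can be
    compared without resolution or a priori artifacts."""
    return x_apriori + A @ (x_high - x_apriori)

# toy example: a boxcar averaging kernel on a 20-level grid
n = 20
A = np.zeros((n, n))
for i in range(n):
    j0, j1 = max(0, i - 2), min(n, i + 3)
    A[i, j0:j1] = 1.0 / (j1 - j0)               # each row averages ~5 levels

x_a = np.full(n, 5.0)                           # a priori profile
x_high = 5.0 + np.sin(np.linspace(0, 6, n))     # high-resolution "truth"
x_smooth = convolve_with_kernels(x_high, x_a, A)
print(x_smooth.round(2))
```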
Mass measurement errors caused by "local" frequency perturbations in FTICR mass spectrometry.
Masselon, Christophe; Tolmachev, Aleksey V; Anderson, Gordon A; Harkewicz, Richard; Smith, Richard D
2002-01-01
One of the key qualities of mass spectrometric measurements for biomolecules is the mass measurement accuracy (MMA) obtained. FTICR presently provides the highest MMA over a broad m/z range. However, due to space charge effects, the achievable MMA crucially depends on the number of ions trapped in the ICR cell for a measurement. Thus, beyond some point, as the effective sensitivity and dynamic range of a measurement increase, MMA tends to decrease. While analyzing deviations from the commonly used calibration law in FTICR we have found systematic errors which are not accounted for by a "global" space charge correction approach. The analysis of these errors and their dependence on charge population and post-excite radius have led us to conclude that each ion cloud experiences a different interaction with other ion clouds. We propose a novel calibration function which is shown to provide an improvement in MMA for all the spectra studied.
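A common starting point for FTICR calibration is the two-term law m/z = A/f + B/f², where B absorbs the "global" space-charge shift; the paper's proposed cloud-interaction correction goes beyond this. A least-squares fit of the basic law on synthetic calibrant data (the constants A and B below are hypothetical) might look like:

```python
import numpy as np

# synthetic calibrants: known m/z values and the cyclotron frequencies they
# would produce under assumed calibration constants A and B
mz_true = np.array([500.0, 750.0, 1000.0, 1500.0])
A_true, B_true = 1.0e8, -4.0e8
# invert m = A/f + B/f^2 for f: m f^2 - A f - B = 0
f = (A_true + np.sqrt(A_true**2 + 4 * mz_true * B_true)) / (2 * mz_true)

# least-squares fit of the two-term calibration law m/z = A/f + B/f^2
X = np.column_stack([1 / f, 1 / f**2])
coef, *_ = np.linalg.lstsq(X, mz_true, rcond=None)
ppm_err = (X @ coef - mz_true) / mz_true * 1e6    # residuals in parts per million
print(coef, ppm_err)
```

With real spectra the residual ppm errors of such a fit are exactly where the "local" systematic errors analyzed in the paper show up.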
Improved error separation technique for on-machine optical lens measurement
NASA Astrophysics Data System (ADS)
Fu, Xingyu; Bing, Guo; Zhao, Qingliang; Rao, Zhimin; Cheng, Kai; Mulenga, Kabwe
2016-04-01
This paper describes an improved error separation technique (EST) for on-machine surface profile measurement which can be applied to optical lenses on precision and ultra-precision machine tools. With only one precise probe and a linear stage, improved EST not only reduces measurement costs, but also shortens the sampling interval, which implies that this method can be used to measure the profile of small-bore lenses. The improved EST with stitching method can be applied to measure the profile of high-height lenses as well. Since the improvement is simple, most of the traditional EST can be modified by this method. The theoretical analysis and experimental results in this paper show that the improved EST eliminates the slide error successfully and generates an accurate lens profile.
Error analysis for the ground-based microwave ozone measurements during STOIC
NASA Technical Reports Server (NTRS)
Connor, Brian J.; Parrish, Alan; Tsou, Jung-Jung; McCormick, M. Patrick
1995-01-01
We present a formal error analysis and characterization of the microwave measurements made during the Stratospheric Ozone Intercomparison Campaign (STOIC). The most important error sources are found to be determination of the tropospheric opacity, the pressure-broadening coefficient of the observed line, and systematic variations in instrument response as a function of frequency ('baseline'). Net precision is 4-6% between 55 and 0.2 mbar, while accuracy is 6-10%. Resolution is 8-10 km below 3 mbar and increases to 17 km at 0.2 mbar. We show the 'blind' microwave measurements from STOIC and make limited comparisons to other measurements. We use the averaging kernels of the microwave measurement to eliminate resolution and a priori effects from a comparison to SAGE II. The STOIC results and comparisons are broadly consistent with the formal analysis.
ERIC Educational Resources Information Center
Harshman, Jordan; Yezierski, Ellen
2016-01-01
Determining the error of measurement is a necessity for researchers engaged in bench chemistry, chemistry education research (CER), and a multitude of other fields. Discussions regarding what constructs measurement error entails and how to best measure them have occurred, but the critiques about traditional measures have yielded few alternatives.…
Bradshaw, Corey J A; Sims, David W; Hays, Graeme C
2007-03-01
Recent advances in telemetry technology have created a wealth of tracking data available for many animal species moving over spatial scales from tens of meters to tens of thousands of kilometers. Increasingly, such data sets are being used for quantitative movement analyses aimed at extracting fundamental biological signals such as optimal searching behavior and scale-dependent foraging decisions. We show here that the location error inherent in various tracking technologies reduces the ability to detect patterns of behavior within movements. Our analyses endeavored to set out a series of initial ground rules for ecologists to help ensure that sampling noise is not misinterpreted as a real biological signal. We simulated animal movement tracks using specialized random walks known as Lévy flights at three spatial scales of investigation: 100-km, 10-km, and 1-km maximum daily step lengths. The locations generated in the simulations were then blurred using known error distributions associated with commonly applied tracking methods: the Global Positioning System (GPS), Argos polar-orbiting satellites, and light-level geolocation. Deviations from the idealized Lévy flight pattern were assessed for each track after incrementing levels of location error were applied at each spatial scale, with additional assessments of the effect of error on scale-dependent movement patterns measured using fractal mean dimension and first-passage time (FPT) analyses. The accuracy of parameter estimation (Lévy mu, fractal mean D, and variance in FPT) declined precipitously at threshold errors relative to each spatial scale. At 100-km maximum daily step lengths, error standard deviations of > or = 10 km seriously eroded the biological patterns evident in the simulated tracks, with analogous thresholds at the 10-km and 1-km scales (error SD > or = 1.3 km and 0.07 km, respectively). Temporal subsampling of the simulated tracks maintained some elements of the biological signals depending on
NASA Astrophysics Data System (ADS)
Qin, Xiage; He, Zhiping; Xu, Rui; Wu, Yu; Shu, Rong
2015-10-01
As a new type of light-dispersion device, the Acousto-Optic Tunable Filter (AOTF), based on the acousto-optic interaction principle, achieves diffractive spectral selection and has developed rapidly and been widely used in spectral analysis and remote sensing detection since its introduction. Precise measurement of an AOTF's optical performance parameters is a precondition for spectral radiometric calibration and data inversion in quantitative spectrometry based on the AOTF. In this paper, an AOTF performance analysis system covering the 450~3200 nm spectral range is introduced, including the fundamental principle of the system and the test methods for the key optical parameters of the AOTF. The error sources and the influence of the magnitude of each error on the whole test system are analyzed and verified. Numerical simulation of the noise in the detecting circuit and the instability of the light source was carried out, and based on the simulation results, methods for improving the measuring accuracy of the system are proposed, such as improving the light source parameters and correcting or changing the test method by using dual-light-path detection. Experimental results indicate that the relative error can be reduced by 20% and the stability of the test signal is better than 98%. Finally, this error analysis model and its potential applicability to other optoelectronic measuring systems are discussed.
Skin movement errors in measurement of sagittal lumbar and hip angles in young and elderly subjects.
Kuo, Yi-Liang; Tully, Elizabeth A; Galea, Mary P
2008-02-01
Errors in measurement of sagittal lumbar and hip angles due to skin movement on the pelvis and/or lateral thigh were measured in young (n = 21, age = 18.6 +/- 2.1 years) and older (n = 23, age = 70.9 +/- 6.4 years) age groups. Skin reference markers were attached over specific landmarks of healthy young and elderly subjects, who were videotaped in three static positions of hip flexion using the 2D PEAK Motus video analysis system. Sagittal lumbar and hip angles were calculated from skin reference markers and manually palpated landmarks. The elderly subjects demonstrated greater errors in lumbar angle due to skin movement on the pelvis only in the maximal hip flexion position. The traditional model (ASIS-PSIS-GT-LFE) underestimated sagittal hip angle and the revised model (ASIS-PSIS-2/3Th-1/4Th) provided more accurate measurement of sagittal hip angle throughout the full available range of hip flexion. Skin movement on the pelvis had a small counterbalancing effect on the larger errors from lateral thigh markers (GT-LFE), thereby decreasing hip angle error.
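A sagittal segment angle of the kind compared here can be computed from 2D marker coordinates with a simple vector sketch. The coordinates below are illustrative, and the study's PEAK Motus angle conventions may differ.

```python
import numpy as np

def segment_angle_deg(p1, p2, q1, q2):
    """Angle (degrees) between segment p1->p2 (e.g., PSIS to ASIS) and
    segment q1->q2 (e.g., proximal to distal thigh marker) in the
    sagittal plane."""
    u = np.asarray(p2, float) - np.asarray(p1, float)
    v = np.asarray(q2, float) - np.asarray(q1, float)
    cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# illustrative sagittal-plane coordinates (cm): pelvis markers, thigh markers
ang = segment_angle_deg((0, 102), (10, 100), (8, 95), (12, 50))
print(round(ang, 1))
```

Skin-movement error enters exactly here: if a marker slides relative to its bony landmark, `u` or `v` rotates and the computed angle changes even though the joint has not moved.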
Effects of Spectral Error in Efficiency Measurements of GaInAs-Based Concentrator Solar Cells
Osterwald, C. R.; Wanlass, M. W.; Moriarty, T.; Steiner, M. A.; Emery, K. A.
2014-03-01
This technical report documents a particular error in efficiency measurements of triple-absorber concentrator solar cells caused by incorrect spectral irradiance -- specifically, one that occurs when the irradiance from unfiltered, pulsed xenon solar simulators into the GaInAs bottom subcell is too high. For cells designed so that the light-generated photocurrents in the three subcells are nearly equal, this condition can cause a large increase in the measured fill factor, which, in turn, causes a significant artificial increase in the efficiency. The error is readily apparent when the data under concentration are compared to measurements with correctly balanced photocurrents, and manifests itself as discontinuities in plots of fill factor and efficiency versus concentration ratio. In this work, we simulate the magnitudes and effects of this error with a device-level model of two concentrator cell designs, and demonstrate how a new Spectrolab, Inc., Model 460 Tunable-High Intensity Pulsed Solar Simulator (T-HIPSS) can mitigate the error.
Flanders, W Dana; Kirkland, Kimberly H; Shelton, Brian G
2014-10-01
Outbreaks of Legionnaires' disease require environmental testing of water samples from potentially implicated building water systems to identify the source of exposure. A previous study reported a large impact on Legionella sample results due to shipping and delays in sample processing. Specifically, that study, without accounting for measurement error, reported that more than half of the shipped samples tested had Legionella levels that changed up or down by one or more logs, and the authors attributed this result to shipping time. Accordingly, we conducted a study to determine the effects of sample holding/shipping time on Legionella sample results while taking into account measurement error, which has previously not been addressed. We analyzed 159 samples, each split into 16 aliquots, of which one-half (8) were processed promptly after collection. The remaining half (8) were processed the following day to assess the impact of holding/shipping time. A total of 2544 samples were analyzed, including replicates. After accounting for inherent measurement error, we found that the effect of holding time on observed Legionella counts was small and should have no practical impact on the interpretation of results. Holding samples increased the root mean squared error by only about 3-8%. Notably, for only one of 159 samples did the average of the 8 replicate counts change by 1 log. Thus, our findings do not support the hypothesis of frequent, significant (≥1 log10 unit) Legionella colony count changes due to holding.
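The replicate-aliquot design here separates a holding-time effect from inherent measurement error, because the variance of a difference of replicate means is predictable from the within-sample error alone. The log-count distribution, measurement error SD, and small holding effect below are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# 159 samples, 8 prompt + 8 held aliquots each, log10 colony counts
n_samples, n_reps = 159, 8
true_log = rng.normal(3.0, 0.8, n_samples)
meas_sd = 0.25                                     # assumed within-sample measurement error
prompt = true_log[:, None] + rng.normal(0, meas_sd, (n_samples, n_reps))
held = true_log[:, None] - 0.03 + rng.normal(0, meas_sd, (n_samples, n_reps))
# (a small assumed holding effect of -0.03 log units)

diff = held.mean(axis=1) - prompt.mean(axis=1)     # per-sample mean change
# SD of a difference of two 8-replicate means driven by measurement error alone
expected_sd = meas_sd * np.sqrt(2 / n_reps)
print(diff.mean(), diff.std(), expected_sd)
print(np.mean(np.abs(diff) >= 1.0))                # fraction changing by >= 1 log
```

Under these assumptions a ≥1-log change between prompt and held means is an 8-sigma event, which is why replicate-based error accounting makes frequent 1-log "shipping effects" implausible.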
Measurement error in two-stage analyses, with application to air pollution epidemiology.
Szpiro, Adam A; Paciorek, Christopher J
2013-12-01
Public health researchers often estimate health effects of exposures (e.g., pollution, diet, lifestyle) that cannot be directly measured for study subjects. A common strategy in environmental epidemiology is to use a first-stage (exposure) model to estimate the exposure based on covariates and/or spatio-temporal proximity and to use predictions from the exposure model as the covariate of interest in the second-stage (health) model. This induces a complex form of measurement error. We propose an analytical framework and methodology that is robust to misspecification of the first-stage model and provides valid inference for the second-stage model parameter of interest. We decompose the measurement error into components analogous to classical and Berkson error and characterize properties of the estimator in the second-stage model if the first-stage model predictions are plugged in without correction. Specifically, we derive conditions for compatibility between the first- and second-stage models that guarantee consistency (and have direct and important real-world design implications), and we derive an asymptotic estimate of finite-sample bias when the compatibility conditions are satisfied. We propose a methodology that (1) corrects for finite-sample bias and (2) correctly estimates standard errors. We demonstrate the utility of our methodology in simulations and an example from air pollution epidemiology.
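The classical-versus-Berkson distinction at the heart of this framework can be demonstrated with a linear toy model: plugging in exposure-model predictions leaves the prediction error orthogonal to the plugged-in covariate (Berkson-like, little attenuation when the models are compatible), whereas adding independent noise to the exposure (classical error) attenuates the slope. All models and coefficients below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

n, beta = 5000, 0.3
w = rng.normal(size=(n, 3))                                # exposure-model covariates
x = w @ np.array([1.0, -0.5, 0.2]) + rng.normal(0, 1, n)   # true exposure
y = beta * x + rng.normal(0, 1, n)                         # health outcome

# first stage: predict exposure from covariates; second stage: plug the
# predictions into the health model without correction
gamma, *_ = np.linalg.lstsq(w, x, rcond=None)
x_hat = w @ gamma

def slope(a, b):
    return float(np.cov(a, b)[0, 1] / np.var(a, ddof=1))

s_true, s_plug = slope(x, y), slope(x_hat, y)
# the prediction error x - x_hat is orthogonal to x_hat (Berkson-like),
# so the plug-in slope is not attenuated in this compatible-model case
z = x + rng.normal(0, 1, n)                                # classical error, for contrast
s_naive = slope(z, y)
print(s_true, s_plug, s_naive)
```

The finite-sample bias and standard-error corrections proposed in the paper address what this toy omits: spatial dependence, model misspecification, and the uncertainty carried over from the first stage.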
Measurement and simulation of clock errors from resource-constrained embedded systems
NASA Astrophysics Data System (ADS)
Collett, M. A.; Matthews, C. E.; Esward, T. J.; Whibberley, P. B.
2010-07-01
Resource-constrained embedded systems such as wireless sensor networks are becoming increasingly sought-after in a range of critical sensing applications. Hardware for such systems is typically developed as a general tool, intended for research and flexibility. These systems often have unexpected limitations and sources of error when being implemented for specific applications. We investigate via measurement and simulation the output of the onboard clock of a Crossbow MICAz testbed, comprising a quartz oscillator accessed via a combination of hardware and software. We show that the clock output available to the user suffers a number of instabilities and errors. Using a simple software simulation of the system based on a series of nested loops, we identify the source of each component of the error, finding that there is a 7.5 × 10⁻⁶ probability that a given oscillation from the governing crystal will be miscounted, resulting in frequency jitter over a 60 µHz range.
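The miscount mechanism can be sketched with a toy tick-counting simulation using the quoted 7.5 × 10⁻⁶ probability. The 32,768 Hz crystal frequency and independent per-tick drops are assumptions for illustration, not the MICAz internals the paper models.

```python
import numpy as np

rng = np.random.default_rng(7)

f_nominal = 32_768            # Hz, a typical watch-crystal oscillator (assumed)
p_miss = 7.5e-6               # probability a given oscillation is miscounted
seconds = 60

# count oscillations second by second; each tick is independently dropped
# with probability p_miss, producing apparent frequency jitter
counts = rng.binomial(f_nominal, 1 - p_miss, size=seconds)
apparent_f = counts.astype(float)
jitter = float(apparent_f.max() - apparent_f.min())        # counts/s spread
frac_error = float((f_nominal - apparent_f.mean()) / f_nominal)
print(jitter, frac_error)
```

Even a per-tick miscount probability this small produces a measurable one-sided frequency bias of order p_miss, which is the kind of systematic clock error that matters for time-stamping in sensor networks.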
NASA Technical Reports Server (NTRS)
Akkari, S. H.; Frost, W.
1982-01-01
The effect of rolling motion of a wing on the magnitude of error induced by wing vibration when measuring atmospheric turbulence with a wind probe mounted on the wing tip was investigated. The wing considered had characteristics similar to those of a B-57 Canberra aircraft, and von Karman's cross-spectrum function was used to estimate the cross-correlation of atmospheric turbulence. Although the error calculated was found to be less than that calculated when only elastic bending and vertical motion of the wing are considered, it is still relatively large in the frequency range close to the natural frequencies of the wing. Therefore, it is concluded that accelerometers mounted on the wing tip are needed to correct for this error, or the atmospheric velocity data must be appropriately filtered.
NASA Astrophysics Data System (ADS)
Garcia-Fernandez, Jorge
2016-03-01
The need for accurate documentation for the preservation of cultural heritage has prompted the use of the terrestrial laser scanner (TLS) in this discipline. Its study in the heritage context has focused on opaque surfaces with Lambertian reflectance, while translucent and anisotropic materials remain a major challenge. The use of TLS on such materials is subject to significant measurement distortion due to their optical properties under laser stimulation. The distortion makes range-based measurement unsuitable for digital modelling in a wide range of cases. The purpose of this paper is to illustrate and discuss these deficiencies and the resulting errors in the documentation of marmorean surfaces using TLS based on time-of-flight and phase-shift. Also proposed in this paper is the reduction of error in depth measurement by adjustment of the incident laser beam. The analysis is conducted by controlled experiments.
Some comments on misspecification of priors in Bayesian modelling of measurement error problems.
Richardson, S; Leblond, L
In this paper we discuss some aspects of misspecification of prior distributions in the context of Bayesian modelling of measurement error problems. A Bayesian approach to the treatment of common measurement error situations encountered in epidemiology has recently been proposed. Its implementation involves, first, the structural specification, through conditional independence relationships, of three submodels (a measurement model, an exposure model and a disease model) and, second, the choice of functional forms for the distributions involved in the submodels. We present some results indicating how the estimation of the regression parameters of interest, which is carried out using Gibbs sampling, can be influenced by a misspecification of the parametric shape of the prior distribution of exposure. PMID:9004392
Error reduction in gamma-spectrometric measurements of nuclear materials enrichment
NASA Astrophysics Data System (ADS)
Zaplatkina, D.; Semenov, A.; Tarasova, E.; Zakusilov, V.; Kuznetsov, M.
2016-06-01
The paper provides an analysis of the uncertainty in determining the enrichment of uranium samples using non-destructive methods, to ensure the functioning of the nuclear materials accounting and control system. The measurements were performed by a scintillation detector based on a sodium iodide crystal and by a semiconductor germanium detector. Samples containing uranium oxide of different masses were used for the measurements. Statistical analysis of the results showed that the maximum enrichment error in a scintillation detector measurement can reach 82%. The bias correction, calculated from the data obtained by the semiconductor detector, reduces the error in the determination of uranium enrichment by 47.2% on average. Thus, the use of a bias correction calculated by statistical methods allows scintillation detectors to be used for nuclear materials accounting and control.
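A minimal sketch of the kind of bias correction described, using hypothetical paired enrichment readings (the values below are invented, not the paper's data):

```python
import numpy as np

# Hypothetical paired readings (% enrichment) of the same samples.
nai = np.array([4.1, 5.3, 3.4, 4.9, 6.0])    # NaI scintillation (biased high)
hpge = np.array([3.6, 4.6, 3.0, 4.2, 5.2])   # HPGe semiconductor reference

bias = float((nai - hpge).mean())  # additive bias estimated from the pairs
nai_corrected = nai - bias

err_before = float(np.abs(nai - hpge).mean())
err_after = float(np.abs(nai_corrected - hpge).mean())
print(err_after < err_before)      # correction shrinks the average discrepancy
```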
Topping, David J.; Wright, Scott A.
2016-05-04
these sites. In addition, detailed, step-by-step procedures are presented for the general river application of the method.Quantification of errors in sediment-transport measurements made using this acoustical method is essential if the measurements are to be used effectively, for example, to evaluate uncertainty in long-term sediment loads and budgets. Several types of error analyses are presented to evaluate (1) the stability of acoustical calibrations over time, (2) the effect of neglecting backscatter from silt and clay, (3) the bias arising from changes in sand grain size, (4) the time-varying error in the method, and (5) the influence of nonrandom processes on error. Results indicate that (1) acoustical calibrations can be stable for long durations (multiple years), (2) neglecting backscatter from silt and clay can result in unacceptably high bias, (3) two frequencies are likely required to obtain sand-concentration measurements that are unbiased by changes in grain size, depending on site-specific conditions and acoustic frequency, (4) relative errors in silt-and-clay- and sand-concentration measurements decrease substantially as concentration increases, and (5) nonrandom errors may arise from slow changes in the spatial structure of suspended sediment that affect the relations between concentration in the acoustically ensonified part of the cross section and concentration in the entire river cross section. Taken together, the error analyses indicate that the two-frequency method produces unbiased measurements of suspended-silt-and-clay and sand concentration, with errors that are similar to, or larger than, those associated with conventional sampling methods.
Evaluating measurement error in readings of blood pressure for adolescents and young adults.
Bauldry, Shawn; Bollen, Kenneth A; Adair, Linda S
2015-04-01
Readings of blood pressure are known to be subject to measurement error, but the optimal method for combining multiple readings is unknown. This study assesses different sources of measurement error in blood pressure readings and assesses methods for combining multiple readings using data from a sample of adolescents/young adults who were part of a longitudinal epidemiological study based in Cebu, Philippines. Three sets of blood pressure readings were collected at 2-year intervals for 2127 adolescents and young adults as part of the Cebu National Longitudinal Health and Nutrition Study. Multi-trait, multi-method (MTMM) structural equation models in different groups were used to decompose measurement error in the blood pressure readings into systematic and random components and to examine patterns in the measurement across males and females and over time. The results reveal differences in the measurement properties of blood pressure readings by sex and over time that suggest the combination of multiple readings should be handled separately for these groups at different time points. The results indicate that an average (mean) of the blood pressure readings has high validity relative to a more complicated factor-score-based linear combination of the readings. PMID:25548966
Influence of sky radiance measurement errors on inversion-retrieved aerosol properties
Torres, B.; Toledano, C.; Cachorro, V. E.; Bennouna, Y. S.; Fuertes, D.; Gonzalez, R.; Frutos, A. M. de; Berjon, A. J.; Dubovik, O.; Goloub, P.; Podvin, T.; Blarel, L.
2013-05-10
Remote sensing of atmospheric aerosol is a well-established technique that is currently used for routine monitoring of this atmospheric component, from both ground-based and satellite platforms. The AERONET program, initiated in the 1990s, is the most extensive network, and the data provided are currently used by a wide community of users for aerosol characterization, satellite and model validation and synergetic use with other instrumentation (lidar, in-situ, etc.). Aerosol properties are derived within the network from measurements made by ground-based Sun-sky scanning radiometers. Sky radiances are acquired in two geometries: almucantar and principal plane. Discrepancies in the products obtained following the two geometries have been observed, and the main aim of this work is to determine whether they could be explained by measurement errors. Three systematic errors have been analyzed in order to quantify their effects on the inversion-derived aerosol properties: calibration, pointing accuracy and finite field of view. Simulations have shown that typical uncertainties in the analyzed quantities (5% in calibration, 0.2° in pointing and 1.2° field of view) lead to errors in the retrieved parameters that vary depending on the aerosol type and geometry. While calibration and pointing errors have a relevant impact on the products, the finite field of view does not produce notable differences.
Invited Review Article: Error and uncertainty in Raman thermal conductivity measurements.
Beechem, Thomas; Yates, Luke; Graham, Samuel
2015-04-01
Error and uncertainty in Raman thermal conductivity measurements are investigated via finite element based numerical simulation of two geometries often employed—Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter—termed the Raman stress factor—is derived to identify when stress effects will induce large levels of error. Taken together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.
SANG-a kernel density estimator incorporating information about the measurement error
NASA Astrophysics Data System (ADS)
Hayes, Robert
Analyzing nominally large data sets having a measurement error unique to each entry is evaluated with a novel technique. This work begins with a review of modern analytical methodologies such as histogramming data, ANOVA, regression (weighted and unweighted) along with various error propagation and estimation techniques. It is shown that by assuming the errors obey a functional distribution (such as normal or Poisson), a superposition of the assumed forms provides the most comprehensive and informative graphical depiction of the data set's statistical information. The resultant approach is evaluated only for normally distributed errors, so that the method is effectively a Superposition Analysis of Normalized Gaussians (SANG). SANG is shown to be easily calculated and highly informative in a single graph from what would otherwise require multiple analyses and figures to accomplish the same result. The work is demonstrated using historical radiochemistry measurements from a transuranic waste geological repository's environmental monitoring program. This work was paid for under NRC-HQ-84-14-G-0059.
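A minimal sketch of the SANG idea as described: each datum contributes a Gaussian whose width is that entry's own reported uncertainty, and the superposition is averaged so it remains a proper density. The function name and example values are mine, not the author's code:

```python
import numpy as np

def sang_density(x, values, sigmas):
    """Superposed normalized Gaussians: each measurement contributes a Gaussian
    centred at its value, with that entry's own uncertainty as the width."""
    x = np.asarray(x)[:, None]
    g = np.exp(-0.5 * ((x - values) / sigmas) ** 2) / (sigmas * np.sqrt(2 * np.pi))
    return g.mean(axis=1)       # averaging keeps the total area equal to 1

# Three measurements, each with its own 1-sigma error.
vals = np.array([1.0, 1.2, 3.0])
sigs = np.array([0.1, 0.3, 0.5])
grid = np.linspace(-1.0, 5.0, 2001)
dens = sang_density(grid, vals, sigs)
print(round(float((dens * (grid[1] - grid[0])).sum()), 2))  # ≈ 1.0: a proper density
```

Unlike a fixed-bandwidth kernel density estimate, precise measurements show up as sharp peaks and poor ones as broad humps in the same graph.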
Error Correction Method for Wind Speed Measured with Doppler Wind LIDAR at Low Altitude
NASA Astrophysics Data System (ADS)
Liu, Bingyi; Feng, Changzhong; Liu, Zhishen
2014-11-01
For the purpose of obtaining global vertical wind profiles, the Atmospheric Dynamics Mission Aeolus of European Space Agency (ESA), carrying the first spaceborne Doppler lidar ALADIN (Atmospheric LAser Doppler INstrument), is going to be launched in 2015. DLR (German Aerospace Center) developed the A2D (ALADIN Airborne Demonstrator) for the prelaunch validation. A ground-based wind lidar for wind profile and wind field scanning measurement developed by Ocean University of China is going to be used for the ground-based validation after the launch of Aeolus. In order to provide validation data with higher accuracy, an error correction method is investigated to improve the accuracy of low altitude wind data measured with Doppler lidar based on iodine absorption filter. The error due to nonlinear wind sensitivity is corrected, and the method for merging atmospheric return signal is improved. The correction method is validated by synchronous wind measurements with lidar and radiosonde. The results show that the accuracy of wind data measured with Doppler lidar at low altitude can be improved by the proposed error correction method.
An analysis of temperature-induced errors for an ultrasound distance measuring system. M. S. Thesis
NASA Technical Reports Server (NTRS)
Wenger, David Paul
1991-01-01
The presentation of research is provided in the following five chapters. Chapter 2 presents the necessary background information and definitions for general work with ultrasound and acoustics. It also discusses the basis for errors in the slant range measurements. Chapter 3 presents a method of problem solution and an analysis of the sensitivity of the equations to slant range measurement errors. It also presents various methods by which the error in the slant range measurements can be reduced to improve overall measurement accuracy. Chapter 4 provides a description of a type of experiment used to test the analytical solution and provides a discussion of its results. Chapter 5 discusses the setup of a prototype collision avoidance system, discusses its accuracy, and demonstrates various methods of improving the accuracy along with the improvements' ramifications. Finally, Chapter 6 provides a summary of the work and a discussion of conclusions drawn from it. Additionally, suggestions for further research are made to improve upon what has been presented here.
Error analysis and measurement uncertainty for a fiber grating strain-temperature sensor.
Tang, Jaw-Luen; Wang, Jian-Neng
2010-01-01
A fiber grating sensor capable of distinguishing between temperature and strain, using a reference and a dual-wavelength fiber Bragg grating, is presented. Error analysis and measurement uncertainty for this sensor are studied theoretically and experimentally. The measured root mean squared errors for temperature T and strain ε were estimated to be 0.13 °C and 6 με, respectively. The maximum errors for temperature and strain were calculated as 0.00155 T + 2.90 × 10⁻⁶ ε and 3.59 × 10⁻⁵ ε + 0.01887 T, respectively. Using the estimation of expanded uncertainty at 95% confidence level with a coverage factor of k = 2.205, temperature and strain measurement uncertainties were evaluated as 2.60 °C and 32.05 με, respectively. For the first time, to our knowledge, we have demonstrated the feasibility of estimating the measurement uncertainty for simultaneous strain-temperature sensing with such a fiber grating sensor.
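The expanded-uncertainty arithmetic follows the usual GUM recipe U = k · u_c, with components combined in quadrature. The component budget below is hypothetical, chosen only so the result lands near the paper's reported 2.60 °C:

```python
import math

# Hypothetical uncertainty budget (standard uncertainties, in °C); the values
# are invented for illustration, not taken from the paper.
components = [1.0, 0.3, 0.55]   # e.g. calibration, resolution, repeatability

u_c = math.sqrt(sum(u ** 2 for u in components))  # combined standard uncertainty
k = 2.205                                         # coverage factor, ~95% level
U = k * u_c                                       # expanded uncertainty
print(round(U, 2))              # ≈ 2.6 °C
```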
Correction for dynamic bias error in transmission measurements of void fraction
NASA Astrophysics Data System (ADS)
Andersson, P.; Sundén, E. Andersson; Svärd, S. Jacobsson; Sjöstrand, H.
2012-12-01
Dynamic bias errors occur in transmission measurements, such as X-ray, gamma, or neutron radiography or tomography. This is observed when the properties of the object are not stationary in time and its average properties are assessed. The nonlinear measurement response to changes in transmission within the time scale of the measurement implies a bias, which can be difficult to correct for. A typical example is the tomographic or radiographic mapping of void content in dynamic two-phase flow systems. In this work, the dynamic bias error is described and a method to make a first-order correction is derived. A prerequisite for this method is variance estimates of the system dynamics, which can be obtained using high-speed, time-resolved data acquisition. However, in the absence of such acquisition, a priori knowledge might be used to substitute the time-resolved data. Using synthetic data, a void fraction measurement case study has been simulated to demonstrate the performance of the suggested method. The transmission length of the radiation in the object under study and the type of fluctuation of the void fraction have been varied. Significant decreases in the dynamic bias error were achieved at the expense of marginal decreases in precision.
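The core of the dynamic bias can be reproduced in a few lines: averaging transmission exp(-a) over fluctuations and then inverting underestimates the mean attenuation, and a variance term supplies a first-order correction. The Gaussian fluctuation model below is an illustrative stand-in, not the paper's flow model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Instantaneous optical thickness mu*x fluctuates (e.g. a dynamic void fraction).
a = rng.normal(loc=2.0, scale=0.5, size=100_000)

T_mean = np.exp(-a).mean()          # what a time-averaging detector records
a_naive = -np.log(T_mean)           # biased low: mean of exp != exp of mean
a_corr = a_naive + np.var(a) / 2.0  # first-order correction from the variance

print(round(float(a.mean()), 2), round(float(a_naive), 2), round(float(a_corr), 2))
```

The correction term var(a)/2 comes from a second-order Taylor expansion of exp(-a) about its mean, which is why an estimate of the fluctuation variance is a prerequisite.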
Grasso, Benjamin C; Rothschild, Jeffrey M; Jordan, Constance W; Jayaram, Geetha
2005-07-01
Research in the last decade has identified medication errors as a more frequent cause of unintended harm than was previously thought. Inpatient medication errors and error-prone medication usage are detected internally by medication error reporting and externally through hospital licensing and accreditation surveys. A hospital's rate of medication errors is one of several measures of patient safety available to staff. However, prospective patients and other interested parties must rely upon licensing and accreditation scores, along with varying access to outcome data, as their sole measures of patient safety. We have previously reported that much higher rates of medication errors were found when an independent audit was used compared with rates determined by the usual process of self-report. In this study, we summarize these earlier findings and then compare the error detection sensitivity of licensing and accreditation surveys with that of an independent audit. When experienced surveyors fail to detect a highly error prone medication usage system, it raises questions about the validity of survey scores as a measure of safety (i.e., lack of medication errors). Replication of our findings in other hospital settings is needed. We also recommend measures for improving patient safety by reducing error rates and increasing error detection. PMID:16041238
Errors in using two dimensional methods to measure motion about an offset revolute
Hollerbach, K.; Hollister, A.
1996-03-01
2D measurement of human joint motion involves analysis of 3D displacements in an observer-selected measurement plane. Accurate marker placement and alignment of the joint motion plane with the observer plane are difficult. Alignment of the two planes is essential for accurate recording and understanding of the joint mechanism and the movement about it. In nature, joint axes can exist at any orientation and location relative to a global reference frame. An arbitrary axis is any axis that is not coincident with a reference coordinate axis. We calculate the errors resulting from measuring joint motion about an arbitrary axis using 2D methods.
Sources of resonance-related errors in capacitance versus voltage measurement systems
NASA Astrophysics Data System (ADS)
Polishchuk, Igor; Brown, George; Huff, Howard
2000-10-01
A frequency dependence of the capacitance of metal-oxide-semiconductor devices is often observed in wafer-level probe station measurements for frequencies exceeding 100 kHz. It is well established, however, that the true capacitance value in the SiO2 devices biased into accumulation should remain frequency-independent well into the gigahertz range. Consequently, the apparent frequency dependence of the capacitance versus voltage characteristic may be the result of a resonance present in the measurement setup. We present a quantitative analysis, which can be used to identify the sources of error, characterize a measurement system, and improve the precision of the collected data.
A Qualitative Analysis of Rater Behavior on an L2 Speaking Assessment
ERIC Educational Resources Information Center
Kim, Hyun Jung
2015-01-01
Human raters are normally involved in L2 performance assessment; as a result, rater behavior has been widely investigated to reduce rater effects on test scores and to provide validity arguments. Yet raters' cognition and use of rubrics in their actual rating have rarely been explored qualitatively in L2 speaking assessments. In this study three…
Rater Expertise in a Second Language Speaking Assessment: The Influence of Training and Experience
ERIC Educational Resources Information Center
Davis, Lawrence Edward
2012-01-01
Speaking performance tests typically employ raters to produce scores; accordingly, variability in raters' scoring decisions has important consequences for test reliability and validity. One such source of variability is the rater's level of expertise in scoring. Therefore, it is important to understand how raters' performance is…
Using E-Z Reader to examine the consequences of fixation-location measurement error.
Reichle, Erik D; Drieghe, Denis
2015-01-01
There is an ongoing debate about whether fixation durations during reading are only influenced by the processing difficulty of the words being fixated (i.e., the serial-attention hypothesis) or whether they are also influenced by the processing difficulty of the previous and/or upcoming words (i.e., the attention-gradient hypothesis). This article reports the results of 3 simulations that examine how systematic and random errors in the measurement of fixation locations can generate 2 phenomena that support the attention-gradient hypothesis: parafoveal-on-foveal effects and large spillover effects. These simulations demonstrate how measurement error can produce these effects within the context of a computational model of eye-movement control during reading (E-Z Reader; Reichle, 2011) that instantiates strictly serial allocation of attention, thus demonstrating that these effects do not necessarily provide strong evidence against the serial-attention hypothesis.
Potentiometric Measurement of Transition Ranges and Titration Errors for Acid/Base Indicators
NASA Astrophysics Data System (ADS)
Flowers, Paul A.
1997-07-01
Sophomore analytical chemistry courses typically devote a substantial amount of lecture time to acid/base equilibrium theory, and usually include at least one laboratory project employing potentiometric titrations. In an effort to provide students a laboratory experience that more directly supports their classroom discussions on this important topic, an experiment involving potentiometric measurement of transition ranges and titration errors for common acid/base indicators has been developed. The pH and visually-assessed color of a millimolar strong acid/base system are monitored as a function of added titrant volume, and the resultant data plotted to permit determination of the indicator's transition range and associated titration error. Student response is typically quite positive, and the measured quantities correlate reasonably well to literature values.
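The measured quantities can be anticipated with a back-of-the-envelope strong acid/base calculation; the indicator endpoint pH below (methyl red, near 6.2) and the millimolar concentrations are illustrative choices, not taken from the experiment writeup:

```python
import math

def titration_ph(v_b, c_a=0.001, v_a=50.0, c_b=0.001):
    """pH while titrating v_a mL of strong acid (c_a M) with strong base (c_b M)."""
    kw = 1e-14
    c = (c_a * v_a - c_b * v_b) / (v_a + v_b)   # net strong-acid concentration
    h = (c + math.sqrt(c * c + 4 * kw)) / 2     # [H+] from the charge balance
    return -math.log10(h)

# Methyl red's colour change finishes near pH 6.2; find the titrant volume at
# which that pH is reached and compare with the 50.00 mL equivalence point.
v = 49.0
while titration_ph(v) < 6.2:
    v += 0.001
titration_error_pct = 100.0 * (v - 50.0) / 50.0
print(round(v, 2), round(titration_error_pct, 2))   # endpoint arrives slightly early
```

For a millimolar system the pH jump near equivalence is shallow, so an indicator whose transition range misses pH 7 produces a measurable titration error, which is exactly what the students quantify.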
Measurement error analysis of the 3D four-wheel aligner
NASA Astrophysics Data System (ADS)
Zhao, Qiancheng; Yang, Tianlong; Huang, Dongzhao; Ding, Xun
2013-10-01
The positioning parameters of the four wheels have significant effects on the maneuverability, safety and energy efficiency of automobiles. To address this issue, the error factors of the 3D four-wheel aligner, which arise in extracting image feature points, calibrating the internal and external parameters of cameras, calculating positional parameters and measuring target pose, are analyzed respectively, based on an elaboration of the structure and measurement principle of the 3D four-wheel aligner, as well as the toe-in and camber of the four wheels, kingpin inclination and caster, and other major positional parameters. After that, some technical solutions are proposed for reducing the above error factors, and on this basis a new type of aligner has been developed and marketed; it is highly regarded among customers because its technical indicators meet requirements well.
A novel method for measuring transit tilt error in laser trackers
NASA Astrophysics Data System (ADS)
Zhang, Zili; Zhou, Weihu; Zhu, Han; Lin, Xinlong
2015-02-01
A novel method was proposed to measure the tilt error between the transit axis and the standing axis of a laser tracker. A gradienter was first used to make the standing axis of the laser tracker perpendicular to the horizontal plane. The laser beam of the tracker was then projected onto a vertical plane set at a certain distance from the tracker, with equal horizontal angles and diverse vertical angles, in two-face mode. The trail of the laser beam was recorded, while a simulation was run to estimate the beam trail under the same circumstances. The tilt error was thus obtained by comparing the actual result against the simulated one. Experimental results showed that the accuracy of the tilt measuring method could meet users' demands.
Crainiceanu, Ciprian M; Caffo, Brian S; Di, Chong-Zhi; Punjabi, Naresh M
2009-06-01
We introduce methods for signal and associated variability estimation based on hierarchical nonparametric smoothing with application to the Sleep Heart Health Study (SHHS). SHHS is the largest electroencephalographic (EEG) collection of sleep-related data, which contains, at each visit, two quasi-continuous EEG signals for each subject. The signal features extracted from EEG data are then used in second level analyses to investigate the relation between health, behavioral, or biometric outcomes and sleep. Using subject specific signals estimated with known variability in a second level regression becomes a nonstandard measurement error problem. We propose and implement methods that take into account cross-sectional and longitudinal measurement error. The research presented here forms the basis for EEG signal processing for the SHHS.
Magnetic field error measurement of the CEBAF (NIST) wiggler using the pulsed wire method
Wallace, Stephen; Colson, William; Neil, George; Harwood, Leigh
1993-07-01
The National Institute of Standards and Technology (NIST) wiggler has been loaded to the Continuous Electron Beam Accelerator Facility (CEBAF). The pulsed wire method [R.W. Warren, Nucl. Instr. and Meth. A272 (1988) 267] has been used to measure the field errors of the entrance wiggler half, and the net path deflection was calculated to be Δx ≈ 5.2 m.
Measurement error affects risk estimates for recruitment to the Hudson River stock of striped bass.
Dunning, Dennis J; Ross, Quentin E; Munch, Stephan B; Ginzburg, Lev R
2002-06-01
We examined the consequences of ignoring the distinction between measurement error and natural variability in an assessment of risk to the Hudson River stock of striped bass posed by entrainment at the Bowline Point, Indian Point, and Roseton power plants. Risk was defined as the probability that recruitment of age-1+ striped bass would decline by 80% or more, relative to the equilibrium value, at least once during the time periods examined (1, 5, 10, and 15 years). Measurement error, estimated using two abundance indices from independent beach seine surveys conducted on the Hudson River, accounted for 50% of the variability in one index and 56% of the variability in the other. If a measurement error of 50% was ignored and all of the variability in abundance was attributed to natural causes, the risk that recruitment of age-1+ striped bass would decline by 80% or more after 15 years was 0.308 at the current level of entrainment mortality (11%). However, the risk decreased almost tenfold (0.032) if a measurement error of 50% was considered. The change in risk attributable to decreasing the entrainment mortality rate from 11% to 0% was very small (0.009) and similar in magnitude to the change in risk associated with an action proposed in Amendment #5 to the Interstate Fishery Management Plan for Atlantic striped bass (0.006): an increase in the instantaneous fishing mortality rate from 0.33 to 0.4. The proposed increase in fishing mortality was not considered an adverse environmental impact, which suggests that potentially costly efforts to reduce entrainment mortality on the Hudson River stock of striped bass are not warranted.
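The tenfold change in risk has a simple mechanism: attributing less of the observed variance to natural variability shrinks the chance of an extreme run. A stripped-down sketch, using iid lognormal deviations and invented variance numbers rather than the authors' stock-recruitment model:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma2_obs = 0.5 ** 2        # variance of the log abundance index (illustrative)

def decline_risk(process_var, years=15, trials=100_000):
    """P(recruitment falls >= 80% below equilibrium at least once in `years`)."""
    logs = rng.normal(0.0, np.sqrt(process_var), size=(trials, years))
    return float((logs.min(axis=1) < np.log(0.2)).mean())

risk_all_natural = decline_risk(sigma2_obs)        # all variability treated as real
risk_corrected = decline_risk(0.5 * sigma2_obs)    # half attributed to measurement
print(risk_all_natural > risk_corrected)           # ignoring the error inflates risk
```

Because the 80% decline threshold sits in the tail of the distribution, halving the process variance reduces the exceedance probability far more than proportionally, mirroring the near-tenfold drop reported above.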
Xiaoqing, Cheng; Lixin, Yi; Lingling, Liu; Guoqiang, Tang; Zhidong, Wang
2015-11-01
RaDeCC has proved to be a precise and standard way to measure ²²⁴Ra and ²²³Ra in water samples and has successfully made radium a tracer of several environmental processes. In this paper, the relative errors of ²²⁴Ra and ²²³Ra measurement in water samples via a Radium Delayed Coincidence Count system are analyzed by performing coincidence correction calculations and error propagation. The calculated relative errors range from 2.6% to 10.6% for ²²⁴Ra and from 9.6% to 14.2% for ²²³Ra. For different radium activities, the effects of decay days and counting time on the final radium relative errors are evaluated, and the results show that these relative errors can be decreased by adjusting the two measurement factors. Finally, to minimize the propagated errors in radium activity, a set of optimized RaDeCC measurement parameters is proposed. PMID:26233651
Error analysis for retrieval of Venus' IR surface emissivity from VIRTIS/VEX measurements
NASA Astrophysics Data System (ADS)
Kappel, David; Haus, Rainer; Arnold, Gabriele
2015-08-01
Venus' surface emissivity data in the infrared can serve to explore the planet's geology. The only global data with high spectral, spatial, and temporal resolution and coverage at present is supplied by nightside emission measurements acquired by the Visible and InfraRed Thermal Imaging Spectrometer VIRTIS-M-IR (1.0 - 5.1 μm) aboard ESA's Venus Express. A radiative transfer simulation and a retrieval algorithm can be used to determine surface emissivity in the nightside spectral transparency windows located at 1.02, 1.10, and 1.18 μm. To obtain satisfactory fits to measured spectra, the retrieval pipeline also determines auxiliary parameters describing cloud properties from a certain spectral range. But spectral information content is limited, and emissivity is difficult to retrieve due to strong interferences from other parameters. Based on a selection of representative synthetic VIRTIS-M-IR spectra in the range 1.0 - 2.3 μm, this paper investigates emissivity retrieval errors that can be caused by interferences of atmospheric and surface parameters, by measurement noise, and by a priori data, and which retrieval pipeline leads to minimal errors. Retrieval of emissivity from a single spectrum is shown to fail due to extremely large errors, although the fits to the reference spectra are very good. Neglecting geologic activity, it is suggested to apply a multi-spectrum retrieval technique to retrieve emissivity relative to an initial value as a parameter that is common to several measured spectra that cover the same surface bin. Retrieved emissivity maps of targets with limited extension (a few thousand km) are then additively renormalized to remove spatially large scale deviations from the true emissivity map that are due to spatially slowly varying interfering parameters. Corresponding multi-spectrum retrieval errors are estimated by a statistical scaling of the single-spectrum retrieval errors and are listed for 25 measurement repetitions. For the best of the
Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances.
Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip
2015-08-06
In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. A standard deviation of the measured distance is the accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5-50 m) because it is generally known that this accuracy cannot be increased by simply repeating the measurement because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments' results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%. PMID:26258777
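The per-instrument correction functions are not reproduced in the abstract, so the following sketch assumes a commonly used EDM calibration model: an additive zero error plus a cyclic error with the period of the instrument's fine-measurement unit, fitted by linear least squares against reference residuals. All numbers (period, amplitudes, noise level) are invented for illustration.

```python
import numpy as np

# Hypothetical EDM calibration sketch: assumed correction model is an
# additive zero error plus a cyclic error with period U (the assumed
# fine-measurement unit, 10 m). Residuals are in millimetres.
rng = np.random.default_rng(0)
U = 10.0                                   # assumed cyclic-error period [m]
d_true = np.linspace(5.0, 50.0, 46)        # reference distances [m]
resid_sys = 0.8 + 1.2 * np.sin(2 * np.pi * d_true / U + 0.7)  # systematic [mm]
resid = resid_sys + rng.normal(0.0, 0.3, d_true.size)         # + random noise

# Design matrix: constant term plus sine and cosine of the cyclic term,
# so the fit stays linear in the coefficients.
A = np.column_stack([np.ones_like(d_true),
                     np.sin(2 * np.pi * d_true / U),
                     np.cos(2 * np.pi * d_true / U)])
coef, *_ = np.linalg.lstsq(A, resid, rcond=None)
corrected = resid - A @ coef               # apply the correction function

print(f"std before: {resid.std():.2f} mm, after: {corrected.std():.2f} mm")
```

When the systematic component dominates, subtracting the fitted model cuts the residual standard deviation by well over half, consistent with the ≥50% reduction the study reports for calibrated instruments.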
Sensitivity of Force Specifications to the Errors in Measuring the Interface Force
NASA Technical Reports Server (NTRS)
Worth, Daniel
1999-01-01
Force-Limited Random Vibration Testing has been applied in the last several years at NASA/GSFC for various programs at the instrument and system level. Different techniques have been developed over the last few decades to estimate the dynamic forces that the test article under consideration will encounter in the operational environment. Some of these techniques are described in the handbook, NASA-HDBK-7004, and the monograph, NASA-RP-1403. A key element in the ability to perform force-limited testing is multi-component force gauges. This paper will show how some measurement and calibration errors in force gauges are compensated for when the force specification is calculated. The resulting notches in the acceleration spectrum, when a random vibration test is performed, are the same as the notches produced during an uncompensated test that has no measurement errors. The paper will also present the results of tests that were used to validate this compensation. Knowing that the force specification can compensate for some measurement errors allows tests to continue after force gauge failures or allows dummy gauges to be used in places that are inaccessible.
Evaluation on the probing error of a micro-coordinate measuring machine
NASA Astrophysics Data System (ADS)
Chao, Z. X.; Tan, S. L.; Xu, G.
2008-09-01
Micro-coordinate measuring machines (micro-CMMs) with small probes (φ300 μm or smaller), low probing force and high accuracy working stage have been developed in recent years for three-dimensional (3D) measurement of micro structures. In general, the performance of the micro-CMM depends on the accuracy of its working stage and the probing system. The accuracy of the working stage of a micro CMM can be assessed by laser interferometry to the order of a few tens of nanometers. However, the accuracy of its probing system is difficult to assess due to the small probe size and low probing force. The probing error of a micro-CMM (model F25 by Carl Zeiss) was investigated at our laboratory. The probes used in the system are based on silicon membrane and piezo-resistive elements. The stylus size of the probes ranges from φ120 μm to φ300 μm. The effect of various sources, including the stylus size, on the probing error of the system was evaluated by means of certified precision spheres with reference to ISO 10360-2:2001. Based on the results obtained, possible ways to reduce the probing error are discussed. This is illustrated by the uncertainty analysis of the diameter measurements of a ring gauge using the system.
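The ISO 10360-2 style evaluation referred to above can be sketched as a least-squares (Gaussian) sphere fit to points probed on a certified sphere, with the probing error taken as the range of the radial residuals. The sphere radius, point count, and noise level below are illustrative assumptions, not values from the study.

```python
import numpy as np

# Sketch of an ISO 10360-2 style probing-error evaluation: probe a
# certified sphere, fit a least-squares sphere, and report
# P = Rmax - Rmin of the radial residuals. Numbers are illustrative.
rng = np.random.default_rng(1)
R_true = 12.5                                  # certified sphere radius [mm]
n = 25
theta = rng.uniform(0.0, np.pi / 2, n)         # upper-hemisphere polar angles
phi = rng.uniform(0.0, 2 * np.pi, n)
r = R_true + rng.normal(0.0, 0.0002, n)        # assumed 0.2 um radial noise
pts = np.column_stack([r * np.sin(theta) * np.cos(phi),
                       r * np.sin(theta) * np.sin(phi),
                       r * np.cos(theta)])

# Algebraic least-squares sphere fit: |p|^2 = 2 p.c + (R^2 - |c|^2),
# which is linear in the centre c and the combined constant term.
A = np.column_stack([2 * pts, np.ones(n)])
b = (pts ** 2).sum(axis=1)
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
center = sol[:3]
R_fit = np.sqrt(sol[3] + (center ** 2).sum())

radial = np.linalg.norm(pts - center, axis=1)
P = radial.max() - radial.min()                # probing error, ISO 10360-2 style
print(f"R_fit = {R_fit:.4f} mm, probing error P = {P * 1000:.2f} um")
```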
NASA Technical Reports Server (NTRS)
Fulton, C. L.; Harris, R. L., Jr.
1980-01-01
Factors that can affect oculometer measurements of pupil diameter are: the horizontal (azimuth) and vertical (elevation) viewing angle of the pilot; refraction of the eye and cornea; changes in the distance from eye to camera; the illumination intensity of light on the eye; and the counting sensitivity of the scan lines used to measure diameter and the output voltage. To estimate the accuracy of the measurements, an artificial eye was designed and a series of runs performed with the oculometer system. When refraction effects are included, results show that pupil diameter is a parabolic function of the azimuth angle, similar to the cosine function predicted by theory; this error can be accounted for by using a correction equation, reducing the error from 6% to 1.5% of the actual diameter. Elevation angle and illumination effects were found to be negligible. The effects of counting sensitivity and output voltage can be calculated directly from system documentation. The overall accuracy of the unmodified system is about 6%. After correcting for the azimuth angle errors, the overall accuracy is approximately 2%.
Analysis of Measurement Error and Estimator Shape in Three-Point Hydraulic Gradient Estimators
NASA Astrophysics Data System (ADS)
McKenna, S. A.; Wahi, A. K.
2003-12-01
Three spatially separated measurements of head provide a means of estimating the magnitude and orientation of the hydraulic gradient. Previous work with three-point estimators has focused on the effect of the size (area) of the three-point estimator and measurement error on the final estimates of the gradient magnitude and orientation in laboratory and field studies (Mizell, 1980; Silliman and Frost, 1995; Silliman and Mantz, 2000; Ruskauff and Rumbaugh, 1996). However, a systematic analysis of the combined effects of measurement error, estimator shape and estimator orientation relative to the gradient orientation has not previously been conducted. Monte Carlo simulation with an underlying assumption of a homogeneous transmissivity field is used to examine the effects of uncorrelated measurement error on a series of eleven different three-point estimators having the same size but different shapes as a function of the orientation of the true gradient. Results show that the variance in the estimate of both the magnitude and the orientation increase linearly with the increase in measurement error in agreement with the results of stochastic theory for estimators that are small relative to the correlation length of transmissivity (Mizell, 1980). Three-point estimator shapes with base to height ratios between 0.5 and 5.0 provide accurate estimates of magnitude and orientation across all orientations of the true gradient. As an example, these results are applied to data collected from a monitoring network of 25 wells at the WIPP site during two different time periods. The simulation results are used to reduce the set of all possible combinations of three wells to those combinations with acceptable measurement errors relative to the amount of head drop across the estimator and base to height ratios between 0.5 and 5.0. These limitations reduce the set of all possible well combinations by 98 percent and show that size alone as defined by triangle area is not a valid
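A three-point estimator of the kind analyzed above fits the plane h = a + b*x + c*y through three head measurements; the gradient magnitude is then |(b, c)| and the down-gradient orientation follows from atan2. The well coordinates and heads below are invented for illustration, not WIPP data.

```python
import numpy as np

def three_point_gradient(xy, h):
    """Fit h = a + b*x + c*y through three (x, y, head) measurements and
    return the hydraulic gradient magnitude and down-gradient azimuth."""
    A = np.column_stack([np.ones(3), xy])      # rows: [1, x, y]
    a, b, c = np.linalg.solve(A, h)
    mag = np.hypot(b, c)
    azimuth = np.degrees(np.arctan2(-c, -b))   # direction of decreasing head
    return mag, azimuth

# Illustrative well layout [m] and a head field that is an exact plane.
xy = np.array([[0.0, 0.0], [100.0, 0.0], [50.0, 80.0]])
true_grad = np.array([-0.002, -0.001])         # dh/dx, dh/dy
h = 10.0 + xy @ true_grad
mag, az = three_point_gradient(xy, h)
print(f"magnitude = {mag:.4f}, azimuth = {az:.1f} deg")
```

With noise-free heads on an exact plane, the estimator recovers the imposed gradient; adding random head errors (as in the Monte Carlo study above) perturbs both magnitude and orientation.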
NASA Astrophysics Data System (ADS)
Lee, Minho; Cho, Nahm-Gyoo
2013-09-01
A new probing and compensation method is proposed to improve the three-dimensional (3D) measuring accuracy for 3D shapes, including irregular surfaces. A new tactile coordinate measuring machine (CMM) probe with a five-degree-of-freedom (5-DOF) force/moment sensor using carbon fiber plates was developed. The proposed method efficiently removes the anisotropic sensitivity error and decreases the stylus deformation and actual-contact-point estimation errors, which are the major error components of shape measurement with touch probes. The relationships between the measuring force and the accuracy of the actual contact point estimation and the stylus deformation correction are examined for practical use of the proposed method. An appropriate measuring-force condition is presented for precision measurement.
NASA Astrophysics Data System (ADS)
Xiang, Rong
2014-09-01
This study analyzes the measurement errors in the three-dimensional coordinates obtained by binocular stereo vision for tomatoes under three stereo matching methods (centroid-based matching, area-based matching, and combination matching), with the aim of improving the localization accuracy of the binocular stereo vision systems of tomato harvesting robots. Centroid-based matching was realized by matching the feature points given by the centroids of tomato regions. Area-based matching was realized using the gray-level similarity between the two neighborhoods of the two pixels to be matched in the stereo images. Combination matching was realized by using the rough disparity acquired through centroid-based matching as the center of the dynamic disparity range used in area-based matching. After stereo matching, the three-dimensional coordinates of the tomatoes were acquired using the triangulation range-finding principle. Test results based on 225 stereo images of 3 tomatoes captured at distances from 300 to 1000 mm showed that the measurement errors of the x coordinates were small and can meet the needs of harvesting robots. However, the measurement biases of the y coordinates and the depth values were large, and the measurement variation of the depth values was also large. Therefore, the measurement biases of the y coordinates and depth values, and the measurement variation of the depth values, should be corrected in future research.
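The triangulation range-finding step mentioned above is, for a rectified stereo pair, the familiar disparity-to-depth relation Z = f*B/d. A minimal sketch with made-up camera parameters (focal length in pixels, baseline in mm, principal point):

```python
# Triangulation for a rectified stereo pair: depth follows from the
# disparity via Z = f * B / d. Focal length, baseline, principal point,
# and pixel coordinates below are invented example values.
def triangulate(xl, xr, y, f=800.0, baseline=60.0, cx=320.0, cy=240.0):
    """Return (X, Y, Z) in the same units as `baseline` (here mm)."""
    d = xl - xr                      # disparity in pixels (xl > xr)
    Z = f * baseline / d
    X = (xl - cx) * Z / f
    Y = (y - cy) * Z / f
    return X, Y, Z

X, Y, Z = triangulate(xl=400.0, xr=304.0, y=300.0)
print(f"X = {X:.1f} mm, Y = {Y:.1f} mm, Z = {Z:.1f} mm")
```

Because Z varies as 1/d, a fixed matching error of a pixel or two produces depth errors that grow quadratically with range, which is consistent with the large depth-value variation the study observed at longer distances.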
NASA Astrophysics Data System (ADS)
Lu, Aiming; Atkinson, Ian C.; Vaughn, J. Thomas; Thulborn, Keith R.
2011-12-01
The rapid biexponential transverse relaxation of the sodium MR signal from brain tissue requires efficient k-space sampling for quantitative imaging in a time that is acceptable for human subjects. The flexible twisted projection imaging (flexTPI) sequence has been shown to be suitable for quantitative sodium imaging with an ultra-short echo time to minimize signal loss. The fidelity of the k-space center location is affected by the readout gradient timing errors on the three physical axes, which is known to cause image distortion for projection-based acquisitions. This study investigated the impact of these timing errors on the voxel-wise accuracy of the tissue sodium concentration (TSC) bioscale measured with the flexTPI sequence. Our simulations show greater than 20% spatially varying quantification errors when the gradient timing errors are larger than 10 μs on all three axes. The quantification is more tolerant of gradient timing errors on the Z-axis. An existing method was used to measure the gradient timing errors with <1 μs error. The gradient timing error measurement is shown to be RF coil dependent, and timing error differences of up to ˜16 μs have been observed between different RF coils used on the same scanner. The measured timing errors can be corrected prospectively or retrospectively to obtain accurate TSC values.
Precision Measurements of the Cluster Red Sequence using an Error Corrected Gaussian Mixture Model
Hao, Jiangang; Koester, Benjamin P.; Mckay, Timothy A.; Rykoff, Eli S.; Rozo, Eduardo; Evrard, August; Annis, James; Becker, Matthew; Busha, Michael; Gerdes, David; Johnston, David E.; /Northwestern U. /Brookhaven
2009-07-01
The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurement of the slope and scatter of the red sequence are affected both by selection of red sequence galaxies and measurement errors. In this paper, we describe a new error corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy populations in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment. These precise measurements serve as an important observational check on simulations and mock galaxy catalogs. The observed trends in the slope and scatter of the red sequence ridgeline with redshift are clues to possible intrinsic evolution of the cluster red-sequence itself. Most importantly, the methods presented in this work lay the groundwork for further improvements in optically-based cluster cosmology.
Freedman, Laurence S; Midthune, Douglas; Dodd, Kevin W; Carroll, Raymond J; Kipnis, Victor
2015-11-30
Most statistical methods that adjust analyses for measurement error assume that the target exposure T is a fixed quantity for each individual. However, in many applications, the value of T for an individual varies with time. We develop a model that accounts for such variation, describing the model within the framework of a meta-analysis of validation studies of dietary self-report instruments, where the reference instruments are biomarkers. We demonstrate that in this application, the estimates of the attenuation factor and correlation with true intake, key parameters quantifying the accuracy of the self-report instrument, are sometimes substantially modified under the time-varying exposure model compared with estimates obtained under a traditional fixed-exposure model. We conclude that accounting for the time element in measurement error problems is potentially important.
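The attenuation factor discussed above can be illustrated with a classical-error simulation: if the self-report Q measures intake at one occasion while the target T is long-term average intake, within-person variation acts like additional error and attenuates the factor further. The variances below are arbitrary illustrative choices, not estimates from the meta-analysis.

```python
import numpy as np

# Classical measurement error: Q = T + e gives attenuation factor
# lambda = var(T) / (var(T) + var(e)). If T drifts over time and Q
# targets intake at one occasion, the drift variance also attenuates
# estimation of long-term average intake. All values are simulated.
rng = np.random.default_rng(2)
n = 200_000
T_bar = rng.normal(100.0, 15.0, n)           # long-term average intake
drift = rng.normal(0.0, 10.0, n)             # within-person variation at t
e = rng.normal(0.0, 20.0, n)                 # instrument error
Q = (T_bar + drift) + e                      # self-report at occasion t

lam_fixed = 15.0**2 / (15.0**2 + 20.0**2)            # ignores the drift
lam = np.cov(T_bar, Q)[0, 1] / np.var(Q, ddof=1)     # empirical slope
print(f"fixed-exposure lambda = {lam_fixed:.3f}, time-varying = {lam:.3f}")
```

The empirical attenuation factor (about 0.31 here) is smaller than the fixed-exposure value (0.36), mirroring the paper's point that ignoring the time element can overstate instrument accuracy.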
NASA Technical Reports Server (NTRS)
Flamant, Cyrille N.; Schwemmer, Geary K.; Korb, C. Laurence; Evans, Keith D.; Palm, Stephen P.
1999-01-01
Remote airborne measurements of the vertical and horizontal structure of the atmospheric pressure field in the lower troposphere are made with an oxygen differential absorption lidar (DIAL). A detailed analysis of this measurement technique is provided which includes corrections for imprecise knowledge of the detector background level, the oxygen absorption line parameters, and variations in the laser output energy. In addition, we analyze other possible sources of systematic errors, including spectral effects related to aerosol and molecular scattering, interference by rotational Raman scattering, and interference by isotopic oxygen lines.
An examination of errors in characteristic curve measurements of radiographic screen/film systems.
Wagner, L K; Barnes, G T; Bencomo, J A; Haus, A G
1983-01-01
The precision and accuracy achieved in the measurement of characteristic curves for radiographic screen/film systems is quantitatively investigated for three techniques: inverse square, kVp bootstrap, and step-wedge bootstrap. Precision of all techniques is generally better than +/- 1.5% while the agreement among all intensity-scale techniques is better than 2% over the useful exposure latitude. However, the accuracy of the sensitometry will depend on several factors, including linearity and energy dependence of the calibration instrument, that may introduce larger errors. Comparisons of time-scale and intensity-scale methods are made and a means of measuring reciprocity law failure is demonstrated. PMID:6877185
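Of the three techniques compared, the inverse-square method is the simplest to sketch: relative exposure is controlled by the source-to-film distance through E proportional to 1/d^2, so each distance yields an exactly known log-exposure step. The distances below are illustrative, not the paper's protocol.

```python
import math

# Inverse-square sensitometry sketch: relative exposure on the film is
# varied by changing source-to-film distance, E ~ 1/d^2, giving exactly
# known log-exposure steps without a step wedge. Distances are examples.
d0 = 100.0                                   # reference distance [cm]
distances = [100.0, 112.0, 126.0, 141.0, 158.0, 178.0, 200.0]
log_rel_exposure = [2.0 * math.log10(d0 / d) for d in distances]
for d, le in zip(distances, log_rel_exposure):
    print(f"d = {d:6.1f} cm  ->  log10(E/E0) = {le:+.3f}")
```

Doubling the distance lowers the exposure by exactly 0.602 in log10 units; plotting measured optical density against these known steps yields the characteristic curve.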
A method of robot parameters rapid error compensation for online flexible measurement system
NASA Astrophysics Data System (ADS)
Liu, Changjie; Zhang, Zhongkai; Chen, Yiwei
2011-05-01
When an industrial robot runs continuously in the field, its parameters change because of self-heating and changes in the external environment. The repeatability of its positioning degrades, which strongly affects the accuracy of a robot-based flexible coordinate measuring system that uses the industrial robot as the means of delivery; compensation is therefore required. This article presents a rapid robot-parameter calibration technique based on a constant spatial distance. By measuring the same spatial distance multiple times during the measurement period and applying the robot motion model, the changed robot parameters are solved quickly and inversely, the measurement error caused by the parameter variation is resolved, and the final measuring results are compensated. Experiments proved that this solution improves the system's accuracy from 0.5 mm to 0.18 mm.
On measurements and their quality: Paper 3: Post hoc pooling and errors of discreteness.
Beckstead, Jason W
2014-03-01
This is the third in a short series of papers on measurement theory and practice with particular relevance to research in nursing, midwifery, and healthcare. In this paper I demonstrate how the decisions we make regarding the post hoc treatment of our measurements affect the quality of our data and influence the validity of the inferences we draw from them. I address two variations of this practice: pooling data over response options found on self-report measures, and transforming measurements of continuous variables, such as age, into ranges or ordered categories. The problems inherent in this practice are examined using concepts from information theory. Pooling more precise measurements into less precise ones creates errors of discreteness that introduce unpredictable (positive or negative) bias in our results.
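The information-theoretic point can be made concrete with Shannon entropy: pooling a 5-point response scale into two categories discards measurable information. The response distribution below is hypothetical.

```python
import math

# Shannon entropy of a response distribution before and after post hoc
# pooling. Collapsing a 5-point scale into agree/disagree discards
# information (entropy). The distribution is hypothetical.
def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

p5 = [0.10, 0.20, 0.15, 0.35, 0.20]          # 5 response options
p2 = [sum(p5[:2]), sum(p5[2:])]              # pooled: bottom 2 vs top 3
print(f"H(5-point) = {entropy(p5):.3f} bits, H(pooled) = {entropy(p2):.3f} bits")
```

Here pooling drops the entropy from about 2.20 bits to about 0.88 bits; the discarded 1.3 bits per response is information that no later analysis can recover.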
ERIC Educational Resources Information Center
Schochet, Peter Z.; Chiang, Hanley S.
2010-01-01
This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…
ERIC Educational Resources Information Center
Worts, Diana; Sacker, Amanda; McDonough, Peggy
2010-01-01
This paper addresses a key methodological challenge in the modeling of individual poverty dynamics--the influence of measurement error. Taking the US and Britain as case studies and building on recent research that uses latent Markov models to reduce bias, we examine how measurement error can affect a range of important poverty estimates. Our data…
ERIC Educational Resources Information Center
Woodruff, David; Traynor, Anne; Cui, Zhongmin; Fang, Yu
2013-01-01
Professional standards for educational testing recommend that both the overall standard error of measurement and the conditional standard error of measurement (CSEM) be computed on the score scale used to report scores to examinees. Several methods have been developed to compute scale score CSEMs. This paper compares three methods, based on…
The simplified version of Boyle's Law leads to errors in the measurement of thoracic gas volume.
Coates, A L; Desmond, K J; Demizio, D L
1995-09-01
When using Boyle's Law for thoracic gas volume (Vtg) measurement, it is generally assumed that the alveolar pressure (Palv) does not differ from barometric pressure (Pbar) at the start of rarefaction and compression and that the product of the change in volume and pressure (delta P x delta V) is negligibly small. In a gentle panting maneuver in which the difference between Palv and Pbar is small, errors introduced by these assumptions are likely to be small; however, this is not the case when Vtg is measured using a single vigorous inspiratory effort. Discrepancies in the Vtg between the "complex" version of Boyle's Law, which does not ignore delta P x delta V and accounts for large swings in Palv, and the "simplified" version, during both a panting maneuver and a single inspiratory effort were calculated for normal control subjects and patients with cystic fibrosis or asthma. Defining the Vtg from the complete version as "correct," the errors introduced by the simplified version ranged from -3 to +3% for the panting maneuver whereas they ranged from 2 to 9% for the inspiratory maneuver. Using the simplified equation, the Vtg for the inspiratory maneuver was 0.135 +/- 0.237 L greater (p < 0.02) than for the panting maneuver. This discrepancy disappeared when the complete equation was used. While the errors introduced by the use of the simplified version of Boyle's Law are small, they are systematic and unnecessary. PMID:7663807
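The difference between the two forms of Boyle's Law can be reproduced with a small calculation. The complete form keeps the dP*dV term and the actual alveolar pressure at the start of the effort; the simplified form assumes Palv = Pbar and drops dP*dV. The pressures and volume change below are illustrative of a vigorous inspiratory effort, not the paper's patient data.

```python
# Boyle's-law thoracic gas volume (Vtg) sketch. Pressures in cmH2O,
# volumes in litres; the numbers are illustrative of a single vigorous
# inspiratory effort, where Palv departs substantially from Pbar.
P_bar = 970.0        # barometric minus water-vapour pressure [cmH2O]
P_alv0 = 940.0       # alveolar pressure at the start of the effort [cmH2O]
dP = -25.0           # change in alveolar pressure during the effort
dV = 0.085           # measured change in thoracic gas volume [L]

# Complete form, from P_alv0 * V = (P_alv0 + dP) * (V + dV):
v_complete = -dV * (P_alv0 + dP) / dP
# Simplified form: assumes P_alv0 = P_bar and neglects the dP*dV term.
v_simplified = -dV * P_bar / dP
err_pct = 100.0 * (v_simplified - v_complete) / v_complete
print(f"complete: {v_complete:.3f} L, simplified: {v_simplified:.3f} L "
      f"({err_pct:+.1f}%)")
```

With these illustrative values the simplified form overestimates Vtg by about 6%, inside the 2-9% range the paper reports for the inspiratory maneuver.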
The effect of clock, media, and station location errors on Doppler measurement accuracy
NASA Technical Reports Server (NTRS)
Miller, J. K.
1993-01-01
Doppler tracking by the Deep Space Network (DSN) is the primary radio metric data type used by navigation to determine the orbit of a spacecraft. The accuracy normally attributed to orbits determined exclusively with Doppler data is about 0.5 microradians in geocentric angle. Recently, the Doppler measurement system has evolved to a high degree of precision primarily because of tracking at X-band frequencies (7.2 to 8.5 GHz). However, the orbit determination system has not been able to fully utilize this improved measurement accuracy because of calibration errors associated with transmission media, the location of tracking stations on the Earth's surface, the orientation of the Earth as an observing platform, and timekeeping. With the introduction of Global Positioning System (GPS) data, it may be possible to remove a significant error associated with the troposphere. In this article, the effect of various calibration errors associated with transmission media, Earth platform parameters, and clocks are examined. With the introduction of GPS calibrations, it is predicted that a Doppler tracking accuracy of 0.05 microradians is achievable.
A study of GPS measurement errors due to noise and multipath interference for CGADS
NASA Technical Reports Server (NTRS)
Axelrad, Penina; MacDoran, Peter F.; Comp, Christopher J.
1996-01-01
This report describes a study performed by the Colorado Center for Astrodynamics Research (CCAR) on GPS measurement errors in the Codeless GPS Attitude Determination System (CGADS) due to noise and multipath interference. Preliminary simulation models of the CGADS receiver and orbital multipath are described. The standard FFT algorithm for processing the codeless data is described, and two alternative algorithms - an auto-regressive/least squares (AR-LS) method and a combined adaptive notch filter/least squares (ANF-ALS) method - are also presented. Effects of system noise, quantization, baseband frequency selection, and Doppler rates on the accuracy of phase estimates with each of the processing methods are shown. Typical electrical phase errors for the AR-LS method are 0.2 degrees, compared to 0.3 and 0.5 degrees for the FFT and ANF-ALS algorithms, respectively. Doppler rate was found to have the largest effect on the performance.
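The core task the three algorithms share, estimating the electrical phase of a noisy baseband tone, can be sketched for the FFT case. Sample rate, tone frequency, and noise level are illustrative, not CGADS parameters.

```python
import numpy as np

# FFT-based phase estimation of a noisy baseband tone: with the tone
# exactly on an FFT bin, the bin's complex value carries the phase.
# All signal parameters are illustrative.
rng = np.random.default_rng(3)
fs, f0, n = 1000.0, 50.0, 1000           # sample rate [Hz], tone [Hz], samples
t = np.arange(n) / fs
true_phase = 0.7                          # radians
x = np.cos(2 * np.pi * f0 * t + true_phase) + rng.normal(0.0, 0.1, n)

X = np.fft.rfft(x)
k = int(round(f0 * n / fs))               # FFT bin of the tone (on-bin here)
phase_est = np.angle(X[k])
print(f"phase error = {np.degrees(phase_est - true_phase):.3f} deg")
```

With the tone off-bin, spectral leakage biases this estimate, which is part of why the report's AR-LS and adaptive-notch alternatives can outperform the plain FFT.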
Strain gage measurement errors in the transient heating of structural components
NASA Technical Reports Server (NTRS)
Richards, W. Lance
1993-01-01
Significant strain-gage errors may exist in measurements acquired in transient thermal environments if conventional correction methods are applied. Conventional correction theory was modified and a new experimental method was developed to correct indicated strain data for errors created in radiant heating environments ranging from 0.6 C/sec (1 F/sec) to over 56 C/sec (100 F/sec). In some cases the new and conventional methods differed by as much as 30 percent. Experimental and analytical results were compared to demonstrate the new technique. For heating conditions greater than 6 C/sec (10 F/sec), the indicated strain data corrected with the developed technique compared much better to analysis than the same data corrected with the conventional technique.
Influence of measurement errors on temperature-based death time determination.
Hubig, Michael; Muggenthaler, Holger; Mall, Gita
2011-07-01
Temperature-based methods represent essential tools in forensic death time determination. Empirical double exponential models have gained wide acceptance because they are highly flexible and simple to handle. The most established model commonly used in forensic practice was developed by Henssge. It contains three independent variables: the body mass, the environmental temperature, and the initial body core temperature. The present study investigates the influence of variations in the input data (environmental temperature, initial body core temperature, core temperature, time) on the standard deviation of the model-based estimates of the time since death. Two different approaches were used for calculating the standard deviation: the law of error propagation and the Monte Carlo method. Errors in environmental temperature measurements as well as deviations of the initial rectal temperature were identified as major sources of inaccuracies in model based death time estimation.
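The two approaches compared in the study can be demonstrated on a deliberately simplified single-exponential cooling model (not Henssge's double exponential): t = ln((T0 - Ta)/(T - Ta))/k, with errors propagated from the measured rectal temperature T, the ambient temperature Ta, and the assumed initial temperature T0. All values are illustrative.

```python
import math
import numpy as np

# Standard deviation of a death-time estimate by (a) the law of error
# propagation and (b) Monte Carlo, on a simplified single-exponential
# cooling model. The cooling constant and all inputs are illustrative.
k = 0.08                        # cooling constant [1/h], assumed known
T0, Ta, T = 37.2, 18.0, 28.0    # initial, ambient, measured temp [degC]
s_T0, s_Ta, s_T = 0.5, 1.0, 0.1 # assumed standard deviations [degC]

def death_time(T0, Ta, T):
    return math.log((T0 - Ta) / (T - Ta)) / k

# (a) Law of error propagation: analytic partial derivatives.
dt_dT0 = 1.0 / (k * (T0 - Ta))
dt_dT = -1.0 / (k * (T - Ta))
dt_dTa = (1.0 / (T - Ta) - 1.0 / (T0 - Ta)) / k
sd_prop = math.sqrt((dt_dT0 * s_T0)**2 + (dt_dTa * s_Ta)**2 + (dt_dT * s_T)**2)

# (b) Monte Carlo: resample the inputs and take the sample SD.
rng = np.random.default_rng(4)
samples = [death_time(rng.normal(T0, s_T0), rng.normal(Ta, s_Ta),
                      rng.normal(T, s_T)) for _ in range(20_000)]
print(f"t = {death_time(T0, Ta, T):.2f} h, SD propagation = {sd_prop:.2f} h, "
      f"SD Monte Carlo = {np.std(samples):.2f} h")
```

For these mild nonlinearities the two methods agree closely; the environmental-temperature term dominates the variance, echoing the study's finding that ambient-temperature errors are a major source of inaccuracy.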
Error analysis of Raman differential absorption lidar ozone measurements in ice clouds.
Reichardt, J
2000-11-20
A formalism for the error treatment of lidar ozone measurements with the Raman differential absorption lidar technique is presented. In the presence of clouds, wavelength-dependent multiple scattering and cloud-particle extinction are the main sources of systematic errors in ozone measurements and necessitate a correction of the measured ozone profiles. Model calculations are performed to describe the influence of cirrus and polar stratospheric clouds on the ozone. It is found that it is sufficient to account for cloud-particle scattering and Rayleigh scattering in and above the cloud; boundary-layer aerosols and the atmospheric column below the cloud can be neglected for the ozone correction. Furthermore, if the extinction coefficient of the cloud is ≲0.1 km(-1), the effect in the cloud is proportional to the effective particle extinction and to a particle correction function determined in the limit of negligible molecular scattering. The particle correction function depends on the scattering behavior of the cloud particles, the cloud geometric structure, and the lidar system parameters. Because of the differential extinction of light that has undergone one or more small-angle scattering processes within the cloud, the cloud effect on ozone extends to altitudes above the cloud. The various influencing parameters imply that the particle-related ozone correction has to be calculated for each individual measurement. Examples of ozone measurements in cirrus clouds are discussed.
NASA Astrophysics Data System (ADS)
Cecinati, Francesca; Moreno Ródenas, Antonio Manuel; Rico-Ramirez, Miguel Angel; ten Veldhuis, Marie-claire; Han, Dawei
2016-04-01
In many research studies rain gauges are used as a reference point measurement for rainfall, because they can reach very good accuracy, especially compared to radar or microwave links, and their use is very widespread. In some applications rain gauge uncertainty is assumed to be small enough to be neglected. This can be done when rain gauges are accurate and their data is correctly managed. Unfortunately, in many operational networks the importance of accurate rainfall data and of data quality control can be underestimated; budget and best practice knowledge can be limiting factors in a correct rain gauge network management. In these cases, the accuracy of rain gauges can drastically drop and the uncertainty associated with the measurements cannot be neglected. This work proposes an approach based on three different kriging methods to integrate rain gauge measurement errors in the overall rainfall uncertainty estimation. In particular, rainfall products of different complexity are derived through 1) block kriging on a single rain gauge 2) ordinary kriging on a network of different rain gauges 3) kriging with external drift to integrate all the available rain gauges with radar rainfall information. The study area is the Eindhoven catchment, contributing to the river Dommel, in the southern part of the Netherlands. The area, 590 km2, is covered by high quality rain gauge measurements by the Royal Netherlands Meteorological Institute (KNMI), which has one rain gauge inside the study area and six around it, and by lower quality rain gauge measurements by the Dommel Water Board and by the Eindhoven Municipality (six rain gauges in total). The integration of the rain gauge measurement error is accomplished in all the cases increasing the nugget of the semivariogram proportionally to the estimated error. Using different semivariogram models for the different networks allows for the separate characterisation of higher and lower quality rain gauges. For the kriging with
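The error-integration idea described above, inflating the semivariogram nugget in proportion to the estimated gauge error variance, can be sketched with a small ordinary-kriging example. The gauge layout, sill, range, and error variances are invented for the demonstration; increasing the nugget raises the kriging variance at the estimation point.

```python
import numpy as np

def gamma(h, nugget, sill=4.0, a=10.0):
    """Exponential semivariogram; by definition gamma(0) = 0."""
    return np.where(h > 0, nugget + sill * (1.0 - np.exp(-h / a)), 0.0)

def ok_variance(gauges, x0, nugget):
    """Ordinary-kriging variance at x0 for the given gauge locations."""
    n = len(gauges)
    H = np.linalg.norm(gauges[:, None, :] - gauges[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(H, nugget)
    A[n, n] = 0.0                      # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(gauges - x0, axis=1), nugget)
    w = np.linalg.solve(A, b)          # weights + Lagrange multiplier
    return float(w @ b)                # kriging variance at x0

# Invented gauge network [km] and estimation point.
gauges = np.array([[0.0, 0.0], [8.0, 1.0], [3.0, 7.0], [9.0, 9.0]])
x0 = np.array([5.0, 5.0])
v_good = ok_variance(gauges, x0, nugget=0.2)        # high-quality gauges
v_poor = ok_variance(gauges, x0, nugget=0.2 + 1.0)  # + gauge error variance
print(f"kriging variance: {v_good:.3f} (good gauges) vs {v_poor:.3f} (noisy)")
```

Using separate semivariogram models per network, as the abstract describes, amounts to assigning each gauge class its own nugget inflation.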
Xue, Hongqi; Miao, Hongyu; Wu, Hulin
2010-01-01
This article considers estimation of constant and time-varying coefficients in nonlinear ordinary differential equation (ODE) models where analytic closed-form solutions are not available. The numerical solution-based nonlinear least squares (NLS) estimator is investigated in this study. A numerical algorithm such as the Runge–Kutta method is used to approximate the ODE solution. The asymptotic properties are established for the proposed estimators considering both numerical error and measurement error. The B-spline is used to approximate the time-varying coefficients, and the corresponding asymptotic theories in this case are investigated under the framework of the sieve approach. Our results show that if the maximum step size of the p-order numerical algorithm goes to zero at a rate faster than n^(−1/(p∧4)), the numerical error is negligible compared to the measurement error. This result provides theoretical guidance in selecting the step size for numerical evaluations of ODEs. Moreover, we have shown that the numerical solution-based NLS estimator and the sieve NLS estimator are strongly consistent. The sieve estimator of constant parameters is asymptotically normal with the same asymptotic covariance as that of the case where the true ODE solution is exactly known, while the estimator of the time-varying parameter has the optimal convergence rate under some regularity conditions. The theoretical results are also developed for the case when the step size of the ODE numerical solver does not go to zero fast enough or the numerical error is comparable to the measurement error. We illustrate our approach with both simulation studies and clinical data on HIV viral dynamics. PMID:21132064
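A toy version of the numerical-solution-based NLS estimator: one constant parameter, a fixed-step RK4 solver, and simulated noisy data. The model, step size, and noise level are illustrative choices, not the paper's:

```python
import numpy as np
from scipy.optimize import least_squares

def rk4(f, y0, ts, theta):
    # Fixed-step 4th-order Runge-Kutta solution on the grid ts
    ys = [y0]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        h, y = t1 - t0, ys[-1]
        k1 = f(t0, y, theta)
        k2 = f(t0 + h / 2, y + h / 2 * k1, theta)
        k3 = f(t0 + h / 2, y + h / 2 * k2, theta)
        k4 = f(t1, y + h * k3, theta)
        ys.append(y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(ys)

f = lambda t, y, th: -th * y                  # toy ODE: dy/dt = -theta * y
ts = np.linspace(0.0, 4.0, 81)                # step 0.05, small for p = 4
rng = np.random.default_rng(0)
data = rk4(f, 1.0, ts, 0.7) + 0.01 * rng.standard_normal(ts.size)
fit = least_squares(lambda th: rk4(f, 1.0, ts, th[0]) - data, x0=[1.0])
```

With the step size small relative to the measurement noise, the NLS fit recovers the true θ = 0.7 closely, matching the paper's regime where numerical error is negligible.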
Evaluating Procedures for Reducing Measurement Error in Math Curriculum-Based Measurement Probes
ERIC Educational Resources Information Center
Methe, Scott A.; Briesch, Amy M.; Hulac, David
2015-01-01
At present, it is unclear whether math curriculum-based measurement (M-CBM) procedures provide a dependable measure of student progress in math computation because support for its technical properties is based largely upon a body of correlational research. Recent investigations into the dependability of M-CBM scores have found that evaluating…
Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas
2010-07-20
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
Random error analysis of marine xCO2 measurements in a coastal upwelling region
NASA Astrophysics Data System (ADS)
Reimer, Janet J.; Cueva, Alejandro; Gaxiola-Castro, Gilberto; Lara-Lara, Ruben; Vargas, Rodrigo
2016-04-01
Quantifying and identifying measurement error is an ongoing challenge for carbon cycle science to constrain measurable uncertainty related to the sources and sinks of CO2. One source of uncertainty in measurements is derived from random errors (ε); thus, it is important to quantify their magnitude and their relationship to environmental variability in order to constrain local-to-global carbon budgets. We applied a paired-observation method to determine ε associated with marine xCO2 in a coastal upwelling zone of an eastern boundary current. Continuous data (3-h resolution) from a mooring platform during upwelling and non-upwelling seasons were analyzed off northern Baja California in the California Current. To test the rigor of the algorithm to calculate ε we propose a method for determining daily mean time series values that may be affected by ε. To do this we used either two or three variables in the function, but no significant differences for ε mean values were found due to the large variability in ε (-0.088 ± 27 ppm for two variables and -0.057 ± 28 ppm for three variables). Mean ε values were centered on zero, with low values of ε more frequent than greater values, and follow a double exponential distribution. Random error variability increased with higher magnitudes of xCO2, and in general, ε variability increased in relation to upwelling conditions (up to ∼9% of measurements). Increased ε during upwelling suggests the importance of meso-scale processes on ε variability and could have a large influence on seasonal to annual CO2 estimates. This approach could be extended and modified to other marine carbonate system variables as part of data quality assurance/quality control and to quantify uncertainty (due to ε) from a wide variety of continuous oceanographic monitoring platforms.
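One common way to realize a paired-observation estimate of ε on an evenly sampled series is daily differencing under similar conditions. The sketch below follows that spirit; the lag, tolerance, and similarity variables are assumptions, not the paper's exact two- or three-variable function:

```python
import numpy as np

def paired_random_error(x, lag=8, drivers=None, tol=None):
    # Observations one day apart (lag=8 for 3-h data) under similar
    # environmental conditions act as repeated measurements, so each
    # eps = (x[t] - x[t+lag]) / sqrt(2).
    x = np.asarray(x, float)
    d = x[:-lag] - x[lag:]
    if drivers is not None and tol is not None:
        # keep only pairs whose environmental drivers (n x k array) agree
        similar = np.all(np.abs(drivers[:-lag] - drivers[lag:]) < tol, axis=1)
        d = d[similar]
    return d / np.sqrt(2.0)

xco2 = 400.0 + np.sin(np.arange(64) / 8.0)    # placeholder 3-h series, ppm
eps = paired_random_error(xco2)
```

The distribution of `eps` (mean, spread, shape) can then be examined per season, as in the abstract's upwelling vs. non-upwelling comparison.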
NASA Astrophysics Data System (ADS)
Egertson, Jarrett D.; Eng, Jimmy K.; Bereman, Michael S.; Hsieh, Edward J.; Merrihew, Gennifer E.; MacCoss, Michael J.
2012-12-01
We report an algorithm designed for the calibration of low resolution peptide mass spectra. Our algorithm is implemented in a program called FineTune, which corrects systematic mass measurement error in 1 min, with no input required besides the mass spectra themselves. The mass measurement accuracy for a set of spectra collected on an LTQ-Velos improved 20-fold from -0.1776 ± 0.0010 m/z to 0.0078 ± 0.0006 m/z after calibration (avg ± 95 % confidence interval). The precision in mass measurement was improved due to the correction of non-linear variation in mass measurement accuracy across the m/z range.
Measured and predicted root-mean-square errors in square and triangular antenna mesh facets
NASA Technical Reports Server (NTRS)
Fichter, W. B.
1989-01-01
Deflection shapes of square and equilateral triangular facets of two tricot-knit, gold plated molybdenum wire mesh antenna materials were measured and compared, on the basis of root mean square (rms) differences, with deflection shapes predicted by linear membrane theory, for several cases of biaxial mesh tension. The two mesh materials contained approximately 10 and 16 holes per linear inch, measured diagonally with respect to the course and wale directions. The deflection measurement system employed a non-contact eddy current proximity probe and an electromagnetic distance sensing probe in conjunction with a precision optical level. Despite experimental uncertainties, rms differences between measured and predicted deflection shapes suggest the following conclusions: that replacing flat antenna facets with facets conforming to parabolically curved structural members yields smaller rms surface error; that potential accuracy gains are greater for equilateral triangular facets than for square facets; and that linear membrane theory can be a useful tool in the design of tricot knit wire mesh antennas.
Measurement of the Non-common Vertex Error of a Double Corner Cube
NASA Technical Reports Server (NTRS)
Azizi, Alireza; Marcin, Martin; Moore, Douglas; Moser, Steve; Negron, John; Paek, Eung-Gi; Ryan, Daniel; Abramovici, Alex; Best, Paul; Crossfield, Ian; Nemati, Bijan; Neville, Tim; Platt, B.; Wayne, Leonard
2006-01-01
The Space Interferometry Mission (SIM) requires the control of the optical path of each interferometer with picometer accuracy. Laser metrology gauges are used to measure the path lengths to the fiducial corner cubes at the siderostats. Due to the geometry of SIM, a single corner cube does not have sufficient acceptance angle to work with all the gauges. Therefore SIM employs a double corner cube. Current fabrication methods are in fact not capable of producing such a double corner cube with vertices having sufficient commonality. The plan for SIM is to measure the non-commonality of the vertices and correct for the error in orbit. SIM requires that the non-common vertex error (NCVE) of the double corner cube be less than 6 μm. The required accuracy for the knowledge of the NCVE is less than 1 μm. This paper explains a method of measuring the non-common vertices of a brassboard double corner cube with sub-micron accuracy. The results of such a measurement are presented.
Noise and measurement errors in a practical two-state quantum bit commitment protocol
NASA Astrophysics Data System (ADS)
Loura, Ricardo; Almeida, Álvaro J.; André, Paulo S.; Pinto, Armando N.; Mateus, Paulo; Paunković, Nikola
2014-05-01
We present a two-state practical quantum bit commitment protocol, the security of which is based on the current technological limitations, namely the nonexistence of either stable long-term quantum memories or nondemolition measurements. For an optical realization of the protocol, we model the errors, which occur due to the noise and equipment (source, fibers, and detectors) imperfections, accumulated during emission, transmission, and measurement of photons. The optical part is modeled as a combination of a depolarizing channel (white noise), unitary evolution (e.g., systematic rotation of the polarization axis of photons), and two other basis-dependent channels, namely the phase- and bit-flip channels. We analyze quantitatively the effects of noise using two common information-theoretic measures of probability distribution distinguishability: the fidelity and the relative entropy. In particular, we discuss the optimal cheating strategy and show that it is always advantageous for a cheating agent to add some amount of white noise—the particular effect not being present in standard quantum security protocols. We also analyze the protocol's security when the use of (im)perfect nondemolition measurements and noisy or bounded quantum memories is allowed. Finally, we discuss errors occurring due to a finite detector efficiency, dark counts, and imperfect single-photon sources, and we show that the effects are the same as those of standard quantum cryptography.
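The two distinguishability measures used in the analysis have simple classical forms for discrete probability distributions; a sketch (the example distributions are arbitrary placeholders):

```python
import numpy as np

def fidelity(p, q):
    # Classical fidelity (Bhattacharyya coefficient): equals 1 iff p == q
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(np.sqrt(p * q)))

def relative_entropy(p, q):
    # Kullback-Leibler divergence D(p||q) in bits: equals 0 iff p == q
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

honest = np.array([0.9, 0.1])    # hypothetical measurement statistics
noisy = np.array([0.8, 0.2])     # the same statistics after added white noise
```

Adding white noise pulls the two distributions toward each other, which is why — as the abstract notes — some noise can benefit a cheating agent.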
Probable errors in width distributions of sea ice leads measured along a transect
NASA Technical Reports Server (NTRS)
Key, J.; Peckham, S.
1991-01-01
The degree of error expected in the measurement of widths of sea ice leads along a single transect is examined in a probabilistic sense under assumed orientation and width distributions, where both isotropic and anisotropic lead orientations are examined. Methods are developed for estimating the distribution of 'actual' widths (measured perpendicular to the local lead orientation) knowing the 'apparent' width distribution (measured along the transect), and vice versa. The distribution of errors, defined as the difference between the actual and apparent lead width, can be estimated from the two width distributions, and all moments of this distribution can be determined. The problem is illustrated with Landsat imagery and the procedure is applied to a submarine sonar transect. Results are determined for a range of geometries, and indicate the importance of orientation information if data sampled along a transect are to be used for the description of lead geometries. While the application here is to sea ice leads, the methodology can be applied to measurements of any linear feature.
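The apparent-vs-actual relation is easy to see by Monte Carlo: a lead crossed at angle θ to the transect shows an apparent width equal to the actual width divided by sin θ, so transect sampling always inflates widths. The width and orientation distributions below are arbitrary choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
true_w = rng.exponential(100.0, n)            # 'actual' lead widths, m
theta = rng.uniform(0.0, np.pi, n)            # isotropic lead orientations
crossed = np.abs(np.sin(theta)) > 0.1         # drop near-parallel crossings
apparent_w = true_w[crossed] / np.abs(np.sin(theta[crossed]))
error = apparent_w - true_w[crossed]          # apparent minus actual: >= 0
```

The error distribution depends strongly on the orientation distribution, which is why the abstract stresses orientation information for transect-sampled data.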
Holsclaw, Tracy; Hallgren, Kevin A; Steyvers, Mark; Smyth, Padhraic; Atkins, David C
2015-12-01
Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased Type I and Type II error rates, yet these statistical issues are not widely known within SUD treatment research, or more generally, within psychotherapy coding research. Using minimally technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the ways in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in online supplemental materials.
Hu, Pengcheng; Mao, Shuai; Tan, Jiu-Bin
2015-11-01
A measurement system with three degrees of freedom (3 DOF) that compensates for errors caused by incident beam drift is proposed. The system's measurement model (i.e. its mathematical foundation) is analyzed, and a measurement module (i.e. the designed orientation measurement unit) is developed and adopted to measure simultaneously straightness errors and the incident beam direction; thus, the errors due to incident beam drift can be compensated. The experimental results show that the proposed system has a deviation of 1 μm in the range of 200 mm for distance measurements, and a deviation of 1.3 μm in the range of 2 mm for straightness error measurements.
Qibo, Feng; Bin, Zhang; Cunxing, Cui; Cuifang, Kuang; Yusheng, Zhai; Fenglin, You
2013-11-01
A simple method for simultaneously measuring the 6DOF geometric motion errors of the linear guide was proposed. The mechanisms for measuring straightness and angular errors and for enhancing their resolution are described in detail. A common-path method for measuring the laser beam drift was proposed and it was used to compensate the errors produced by the laser beam drift in the 6DOF geometric error measurements. A compact 6DOF system was built. Calibration experiments with certain standard measurement meters showed that our system has a standard deviation of 0.5 µm in a range of ± 100 µm for the straightness measurements, and standard deviations of 0.5", 0.5", and 1.0" in the range of ± 100" for pitch, yaw, and roll measurements, respectively.
Casas, Francisco J; Ortiz, David; Villa, Enrique; Cano, Juan L; Cagigas, Jaime; Pérez, Ana R; Aja, Beatriz; Terán, J Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo
2015-08-05
This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.
First measurements of error fields on W7-X using flux surface mapping
NASA Astrophysics Data System (ADS)
Lazerson, Samuel A.; Otte, Matthias; Bozhenkov, Sergey; Biedermann, Christoph; Pedersen, Thomas Sunn; the W7-X Team
2016-10-01
Error fields have been detected and quantified using the flux surface mapping diagnostic system on Wendelstein 7-X (W7-X). A low-field ι̅ = 1/2 magnetic configuration (ι̅ = ι/2π), sensitive to error fields, was developed in order to detect their presence using the flux surface mapping diagnostic. In this configuration, a vacuum flux surface with rotational transform of n/m = 1/2 is created at the mid-radius of the vacuum flux surfaces. If no error fields are present, a vanishingly small n/m = 5/10 island chain should be present. Modeling indicates that if an n = 1 perturbing field is applied by the trim coils, a large n/m = 1/2 island chain will be opened. This island chain is used to create a perturbation large enough to be imaged by the diagnostic. Phase and amplitude scans of the applied field allow the measurement of a small ∼0.04 m intrinsic island chain with a 130° phase relative to the first module of the W7-X experiment. These error fields are determined to be small and easily correctable by the trim coil system. Notice: This manuscript has been authored by Princeton University under Contract Number DE-AC02-09CH11466 with the U.S. Department of Energy. The publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
Arakawa, H.; Kawano, Y.; Itami, K.
2012-10-15
A new method for the comparative verification of electron density measurements obtained with a tangential interferometer and a polarimeter during a discharge is proposed. The possible errors associated with the interferometer and polarimeter are classified by the time required for their identification. Based on the characteristics of the errors, the fringe shift error of the interferometer and the low-frequency noise of the polarimeter were identified and corrected for the JT-60U tangential interferometer/polarimeter system.
Gilbert, E.S.; Fix, J.J.
1996-08-01
This report addresses laboratory measurement error in estimates of external doses obtained from personnel dosimeters, and investigates the effects of these errors on linear dose-response analyses of data from epidemiologic studies of nuclear workers. These errors have the distinguishing feature that they are independent across time and across workers. Although the calculations made for this report were based on Hanford data, the overall conclusions are likely to be relevant for other epidemiologic studies of workers exposed to external radiation.
Hindasageri, V; Vedula, R P; Prabhu, S V
2013-02-01
Temperature measurement by thermocouples is prone to errors due to conduction and radiation losses and therefore has to be corrected for precise measurement. The temperature-dependent emissivity of the thermocouple wires is measured by the use of a thermal infrared camera. The measured emissivities are found to be 20%-40% lower than the theoretical values predicted from the theory of electromagnetism. A transient technique is employed for finding the heat transfer coefficients for the lead wire and the bead of the thermocouple. This method does not require the data of thermal properties and velocity of the burnt gases. The heat transfer coefficients obtained from the present method have an average deviation of 20% from the available heat transfer correlations in literature for non-reacting convective flow over cylinders and spheres. The parametric study of thermocouple error using the numerical code confirmed the existence of a minimum wire length beyond which the conduction loss is a constant minimum. Temperature of premixed methane-air flames stabilised on a 16 mm diameter tube burner is measured by three B-type thermocouples of wire diameters: 0.15 mm, 0.30 mm, and 0.60 mm. The measurements are made at three distances from the burner tip (thermocouple tip to burner tip/burner diameter = 2, 4, and 6) at an equivalence ratio of 1 for the tube Reynolds number varying from 1000 to 2200. These measured flame temperatures are corrected by the present numerical procedure, the multi-element method, and the extrapolation method. The flame temperatures estimated by the two-element method and extrapolation method deviate from numerical results within 2.5% and 4%, respectively.
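For intuition, the textbook steady-state bead energy balance behind such radiation corrections sets convective gain equal to radiative loss. This is not the paper's multi-element or numerical method, and the heat transfer coefficient, emissivity, and temperatures below are arbitrary example values:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiation_corrected_temperature(t_bead, h_conv, emissivity, t_surr=300.0):
    # Steady state: h*(T_gas - T_bead) = eps*sigma*(T_bead^4 - T_surr^4),
    # so the gas is hotter than the bead by the radiative-loss term.
    return t_bead + emissivity * SIGMA * (t_bead**4 - t_surr**4) / h_conv

# A bead reading 1800 K with h ~ 500 W m^-2 K^-1 and emissivity ~ 0.2
t_gas = radiation_corrected_temperature(1800.0, 500.0, 0.2)
```

The correction grows with emissivity and shrinks with the convective coefficient, which is why thinner wires (higher h, smaller bead) read closer to the true flame temperature.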
Steele, Vaughn R; Anderson, Nathaniel E; Claus, Eric D; Bernat, Edward M; Rao, Vikram; Assaf, Michal; Pearlson, Godfrey D; Calhoun, Vince D; Kiehl, Kent A
2016-05-15
Error-related brain activity has become an increasingly important focus of cognitive neuroscience research utilizing both event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI). Given the significant time and resources required to collect these data, it is important for researchers to plan their experiments such that stable estimates of error-related processes can be achieved efficiently. Reliability of error-related brain measures will vary as a function of the number of error trials and the number of participants included in the averages. Unfortunately, systematic investigations of the number of events and participants required to achieve stability in error-related processing are sparse, and none have addressed variability in sample size. Our goal here is to provide data compiled from a large sample of healthy participants (n=180) performing a Go/NoGo task, resampled iteratively to demonstrate the relative stability of measures of error-related brain activity given a range of sample sizes and event numbers included in the averages. We examine ERP measures of error-related negativity (ERN/Ne) and error positivity (Pe), as well as event-related fMRI measures locked to False Alarms. We find that achieving stable estimates of ERP measures required four to six error trials and approximately 30 participants; fMRI measures required six to eight trials and approximately 40 participants. Fewer trials and participants were required for measures where additional data reduction techniques (i.e., principal component analysis and independent component analysis) were implemented. Ranges of reliability statistics for various sample sizes and numbers of trials are provided. We intend this to be a useful resource for those planning or evaluating ERP or fMRI investigations with tasks designed to measure error-processing.
Jamaiyah, H; Geeta, A; Safiza, M N; Khor, G L; Wong, N F; Kee, C C; Rahmah, R; Ahmad, A Z; Suzana, S; Chen, W S; Rajaah, M; Adam, B
2010-06-01
The National Health and Morbidity Survey III 2006 wanted to perform anthropometric measurements (length and weight) for children in their survey. However, there is limited literature on the reliability, technical error of measurement (TEM), and validity of these two measurements. This study assessed the above properties of length (LT) and weight (WT) measurements in 130 children aged below two years, from the Hospital Universiti Kebangsaan Malaysia (HUKM) paediatric outpatient clinics, during the period of December 2005 to January 2006. Two trained nurses measured WT using a Tanita digital infant scale model 1583, Japan (0.01 kg) and a Seca beam scale, Germany (0.01 kg), and LT using a Seca measuring mat, Germany (0.1 cm) and a Sensormedics stadiometer model 2130 (0.1 cm). Findings showed high inter- and intra-examiner reliability using 'change in the mean' and 'intraclass correlation' (ICC) for WT and LT. However, LT was found to be less reliable using the 'Bland and Altman plot'. This was also true using relative TEMs, where the TEM value of LT was slightly more than the acceptable limit. The test instruments were highly valid for WT using 'change in the mean' and 'ICC' but were less valid for LT measurement. In spite of this, we concluded that WT and LT measurements in children below two years old using the test instruments were reliable and valid for a community survey such as NHMS III within the limits of their error. We recommend that LT measurements be given special attention to improve their reliability and validity. PMID:21488474
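The TEM statistics referred to here have standard closed forms for paired repeats: TEM = sqrt(Σd²/2n) and relative TEM expresses it as a percentage of the grand mean. A sketch (the example values are hypothetical, and acceptability thresholds for relative TEM vary by protocol):

```python
import numpy as np

def tem(x1, x2):
    # Technical error of measurement for paired repeats: sqrt(sum d^2 / 2n)
    d = np.asarray(x1, float) - np.asarray(x2, float)
    return float(np.sqrt(np.sum(d**2) / (2 * d.size)))

def relative_tem(x1, x2):
    # TEM as a percentage of the grand mean of both measurement series
    grand_mean = (np.mean(x1) + np.mean(x2)) / 2.0
    return 100.0 * tem(x1, x2) / grand_mean

examiner_a = [60.1, 58.4, 62.0, 59.5]   # hypothetical length repeats, cm
examiner_b = [60.4, 58.2, 61.6, 59.9]
```

Inter-examiner TEM uses pairs from two measurers, as above; intra-examiner TEM uses the same formula on one measurer's repeated trials.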
The Inversion of NMR Log Data Sets with Different Measurement Errors
NASA Astrophysics Data System (ADS)
Dunn, Keh-Jim; LaTorraca, Gerald A.
1999-09-01
We present a composite-data processing method which simultaneously processes two or more data sets with different measurement errors. We examine the role of the noise level of the data in the singular value decomposition inversion process, the criteria for a proper cutoff, and its effect on the uncertainty of the solution. Examples of processed logs using the composite-data processing method are presented and discussed. The possible usefulness of the apparent T1/T2 ratio extracted from the logs is illustrated.
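The role of the noise-dependent cutoff can be sketched with a truncated-SVD solver; the simple relative-cutoff rule shown is an illustrative simplification of the criteria the paper examines:

```python
import numpy as np

def truncated_svd_solve(K, y, noise_level):
    # Drop singular components whose relative size falls below the noise
    # level; noisier data force a higher cutoff and a smoother solution.
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    keep = s > noise_level * s[0]
    x = Vt[keep].T @ ((U[:, keep].T @ y) / s[keep])
    return x, int(keep.sum())

K = np.array([[2.0, 0.0], [0.0, 1.0]])   # toy, well-conditioned kernel
x_hat, kept = truncated_svd_solve(K, np.array([2.0, 3.0]), noise_level=1e-6)
```

Processing two data sets with different noise levels through one composite system amounts to choosing a cutoff appropriate to each set's noise, which is the trade-off the abstract describes.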
Factor of 2 error in balloon-borne atmospheric conduction current measurements
NASA Technical Reports Server (NTRS)
Few, A. A.; Weinheimer, A. J.
1986-01-01
An exact expression is derived for the atmospheric current to a prolate spheroidal antenna. The effective collection area is considered, obtaining a solution for a spherical antenna and the vertical wire antenna. A factor of two error in the studies of the effective area of an antenna of arbitrary geometry by Kasemir and Ruhnke (1958) and Ogawa (1973) is discussed. The effects of the instrument-atmospheric interaction as they apply to the atmospheric conduction current measurement are considered. Electric field enhancement factors for spheroids and approximate solutions for spheroids and other elongated objects are given.
Research on photoelectric test and measurement for form and position error
NASA Astrophysics Data System (ADS)
Xie, Jinsong
2002-09-01
The structure and principles of a photoelectric test and measurement system for form and position error are described. A special optical system exploiting laser beam characteristics was designed to ensure uniformity of the scanning speed. To meet the requirements of the system, a precision mechanical system, a servo-control system, and a computing and data processing system were designed. As a result, high-speed, high-efficiency, and high-precision non-contact automated testing is realized, promoting the development of advanced manufacturing technology.
Errors in short circuit measurements due to spectral mismatch between sunlight and solar simulators
NASA Technical Reports Server (NTRS)
Curtis, H. B.
1976-01-01
Errors in short circuit current measurement were calculated for a variety of spectral mismatch conditions. The differences in spectral irradiance between terrestrial sunlight and three types of solar simulator were studied, as well as the differences in spectral response between three types of reference solar cells and various test cells. The simulators considered were a short arc xenon lamp AMO sunlight simulator, an ordinary quartz halogen lamp, and an ELH-type quartz halogen lamp. Three types of solar cells studied were a silicon cell, a cadmium sulfide cell and a gallium arsenide cell.
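The standard spectral mismatch factor behind such error calculations multiplies four overlap integrals of source irradiance and cell spectral response; the measured-to-true short-circuit current ratio is M, so the error vanishes when M = 1. The spectra below are coarse placeholders, not the paper's measured curves:

```python
import numpy as np

def mismatch_factor(wl, e_sun, e_sim, r_ref, r_test):
    # M = [int(E_sim*R_test) * int(E_sun*R_ref)] /
    #     [int(E_sim*R_ref) * int(E_sun*R_test)]; trapezoidal integration
    integral = lambda e, r: float(
        np.sum(0.5 * ((e * r)[1:] + (e * r)[:-1]) * np.diff(wl)))
    return (integral(e_sim, r_test) * integral(e_sun, r_ref)) / \
           (integral(e_sim, r_ref) * integral(e_sun, r_test))

wl = np.linspace(400.0, 1100.0, 8)     # nm, coarse placeholder grid
sun = np.ones(8)                       # flat 'sunlight' placeholder
xenon = np.linspace(1.2, 0.8, 8)       # placeholder simulator spectrum
si = np.linspace(0.3, 1.0, 8)          # placeholder silicon response
M = mismatch_factor(wl, sun, xenon, r_ref=si, r_test=si)
```

Two identities make the error sources in the abstract concrete: M = 1 whenever the reference and test responses match (same cell type), or whenever the simulator spectrum matches sunlight.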
Lyles, Robert H.; Van Domelen, Dane; Mitchell, Emily M.; Schisterman, Enrique F.
2015-01-01
Pooling biological specimens prior to performing expensive laboratory assays has been shown to be a cost effective approach for estimating parameters of interest. In addition to requiring specialized statistical techniques, however, the pooling of samples can introduce assay errors due to processing, possibly in addition to measurement error that may be present when the assay is applied to individual samples. Failure to account for these sources of error can result in biased parameter estimates and ultimately faulty inference. Prior research addressing biomarker mean and variance estimation advocates hybrid designs consisting of individual as well as pooled samples to account for measurement and processing (or pooling) error. We consider adapting this approach to the problem of estimating a covariate-adjusted odds ratio (OR) relating a binary outcome to a continuous exposure or biomarker level assessed in pools. In particular, we explore the applicability of a discriminant function-based analysis that assumes normal residual, processing, and measurement errors. A potential advantage of this method is that maximum likelihood estimation of the desired adjusted log OR is straightforward and computationally convenient. Moreover, in the absence of measurement and processing error, the method yields an efficient unbiased estimator for the parameter of interest assuming normal residual errors. We illustrate the approach using real data from an ancillary study of the Collaborative Perinatal Project, and we use simulations to demonstrate the ability of the proposed estimators to alleviate bias due to measurement and processing error. PMID:26593934
PULSAR TIMING ERRORS FROM ASYNCHRONOUS MULTI-FREQUENCY SAMPLING OF DISPERSION MEASURE VARIATIONS
Lam, M. T.; Cordes, J. M.; Chatterjee, S.; Dolch, T.
2015-03-10
Free electrons in the interstellar medium cause frequency-dependent delays in pulse arrival times due to both scattering and dispersion. Multi-frequency measurements are used to estimate and remove dispersion delays. In this paper, we focus on the effect of any non-simultaneity of multi-frequency observations on dispersive delay estimation and removal. Interstellar density variations combined with changes in the line of sight from pulsar and observer motions cause dispersion measure (DM) variations with an approximately power-law power spectrum, augmented in some cases by linear trends. We simulate time series, estimate the magnitude and statistical properties of timing errors that result from non-simultaneous observations, and derive prescriptions for data acquisition that are needed in order to achieve a specified timing precision. For nearby, highly stable pulsars, measurements need to be simultaneous to within about one day in order for the timing error from asynchronous DM correction to be less than about 10 ns. We discuss how timing precision improves when increasing the number of dual-frequency observations used in DM estimation for a given epoch. For a Kolmogorov wavenumber spectrum, we find about a factor of two improvement in timing precision when increasing from two to three observations, but diminishing returns thereafter.
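The kind of simulation the abstract describes, a DM time series with a power-law power spectrum, can be sketched by FFT-filtering white noise. The spectral index, length, and lag below are illustrative choices, not the paper's values; the lag-difference statistic is only a crude proxy for the timing error from asynchronous sampling.

```python
import numpy as np

rng = np.random.default_rng(1)

def power_law_series(n, gamma, rng):
    """Zero-mean time series whose power spectrum falls off as f**(-gamma)."""
    freqs = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-gamma / 2.0)       # amplitude ~ sqrt(power)
    phases = rng.uniform(0, 2 * np.pi, size=freqs.size)
    series = np.fft.irfft(amp * np.exp(1j * phases), n=n)
    return series - series.mean()

# Hypothetical DM(t) realization with a steep (red) power spectrum
dm = power_law_series(4096, gamma=8 / 3, rng=rng)

# Proxy for the error from sampling DM at epochs separated by `lag` days:
lag = 1
err = dm[lag:] - dm[:-lag]
print(err.std())  # grows with lag for a red spectrum
```

For a red spectrum the series is smooth, so the difference between DM values at two epochs (and hence the induced timing error) grows as the epochs are separated further, which is the qualitative effect the paper quantifies.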
Error in Dasibi flight measurements of atmospheric ozone due to instrument wall-loss
NASA Technical Reports Server (NTRS)
Ainsworth, J. E.; Hagemeyer, J. R.; Reed, E. I.
1981-01-01
Theory suggests that in laminar flow the percent loss of a trace constituent to the walls of a measuring instrument varies as P^(-2/3), where P is the total gas pressure. Preliminary laboratory ozone wall-loss measurements confirm this P^(-2/3) dependence. Accurate assessment of wall-loss is thus of particular importance for those balloon-borne instruments utilizing laminar flow at ambient pressure, since the ambient pressure decreases by a factor of 350 during ascent to 40 km. Measurements and extrapolations made for a Dasibi ozone monitor modified for balloon flight indicate that the wall-loss error at 40 km was between 6 and 30 percent and that the wall-loss error in the derived total ozone column-content for the region from the surface to 40 km altitude was between 2 and 10 percent. At 1000 mb, turbulence caused an order of magnitude increase in the Dasibi wall-loss.
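The stated P^(-2/3) scaling makes the ascent problem easy to quantify: a factor-of-350 pressure drop inflates the relative wall loss by 350^(2/3), roughly fifty-fold. The surface loss value below is purely illustrative.

```python
# Wall-loss scaling sketch: percent loss ~ P**(-2/3).  With ambient pressure
# falling by a factor of ~350 from the surface to 40 km, the relative loss
# grows by 350**(2/3), i.e. about 50x.  The surface loss is a made-up number.
surface_loss_pct = 0.2              # hypothetical wall loss at the surface, %
pressure_ratio = 350.0              # surface pressure / pressure at 40 km
loss_at_40km = surface_loss_pct * pressure_ratio ** (2.0 / 3.0)
print(round(loss_at_40km, 1))
```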
Accounting for baseline differences and measurement error in the analysis of change over time.
Braun, Julia; Held, Leonhard; Ledergerber, Bruno
2014-01-15
If change over time is compared in several groups, it is important to take into account baseline values so that the comparison is carried out under the same preconditions. As the observed baseline measurements are distorted by measurement error, it may not be sufficient to include them as covariate. By fitting a longitudinal mixed-effects model to all data including the baseline observations and subsequently calculating the expected change conditional on the underlying baseline value, a solution to this problem has been provided recently so that groups with the same baseline characteristics can be compared. In this article, we present an extended approach where a broader set of models can be used. Specifically, it is possible to include any desired set of interactions between the time variable and the other covariates, and also, time-dependent covariates can be included. Additionally, we extend the method to adjust for baseline measurement error of other time-varying covariates. We apply the methodology to data from the Swiss HIV Cohort Study to address the question if a joint infection with HIV-1 and hepatitis C virus leads to a slower increase of CD4 lymphocyte counts over time after the start of antiretroviral therapy. PMID:23900718
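The elementary mechanism that conditioning on the underlying baseline exploits is normal shrinkage: if the true baseline X ~ N(mu, tau^2) is observed as W = X + e with e ~ N(0, sigma^2), then E[X | W] = mu + lambda (W - mu), where lambda = tau^2 / (tau^2 + sigma^2). The simulation below illustrates only this building block with invented variance components; it is not the authors' mixed-effects procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative shrinkage correction for a noisily observed baseline:
# true X ~ N(mu, tau2), observed W = X + e with e ~ N(0, sig2).
mu, tau2, sig2 = 50.0, 100.0, 25.0     # hypothetical values
x = rng.normal(mu, np.sqrt(tau2), 100000)
w = x + rng.normal(0, np.sqrt(sig2), x.size)

lam = tau2 / (tau2 + sig2)             # reliability ratio
x_hat = mu + lam * (w - mu)            # E[X | W], shrunk toward the mean

# The shrunken value tracks the truth better than the raw observation:
print(np.mean((x_hat - x) ** 2), np.mean((w - x) ** 2))
```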
Measuring Effect Sizes: The Effect of Measurement Error. Working Paper 19
ERIC Educational Resources Information Center
Boyd, Donald; Grossman, Pamela; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James
2008-01-01
Value-added models in education research allow researchers to explore how a wide variety of policies and measured school inputs affect the academic performance of students. Researchers typically quantify the impacts of such interventions in terms of "effect sizes", i.e., the estimated effect of a one standard deviation change in the variable…
Overview of Measuring Effect Sizes: The Effect of Measurement Error. Brief 2
ERIC Educational Resources Information Center
Boyd, Don; Grossman, Pam; Lankford, Hamp; Loeb, Susanna; Wyckoff, Jim
2008-01-01
The use of value-added models in education research has expanded rapidly. These models allow researchers to explore how a wide variety of policies and measured school inputs affect the academic performance of students. An important question is whether such effects are sufficiently large to achieve various policy goals. Judging whether a change in…
On error sources during airborne measurements of the ambient electric field
NASA Technical Reports Server (NTRS)
Evteev, B. F.
1991-01-01
The principal sources of errors during airborne measurements of the ambient electric field and charge are addressed. Results of their analysis are presented for critical review. It is demonstrated that the volume electric charge has to be accounted for during such measurements, that charge being generated at the airframe and wing surface by droplets of clouds and precipitation colliding with the aircraft. The local effect of that space charge depends on the flight regime (air speed, altitude, particle size, and cloud elevation). Such a dependence is displayed in the relation between, on the one hand, the collector conductivity of the aircraft discharging circuit and, on the other, the sum of all the residual conductivities contributing to aircraft discharge. Arguments are given in favor of variability in the aircraft electric capacitance. Techniques are suggested for measuring form factors to describe the aircraft charge.
Nam, Seok Hyun; Son, Sung Min; Kwon, Jung Won; Lee, Na Kyung
2013-01-01
[Purpose] Assessment of posture is an important goal of physical therapy interventions for preventing the progression of forward head posture (FHP). The purpose of this study was to determine the inter- and intra-rater reliabilities of the assessment of FHP. [Subjects and Methods] We recruited 45 participants (20 male subjects, 25 female subjects) from a university student population. Two physical therapists assessed FHP using images of head extension. FHP is characterized by the measurement of angles and distances between anatomical landmarks. Forward shoulder angle of 54° or less was defined as FHP. Intra- and inter-rater reliabilities were estimated using Kendall’s tau-b correlation coefficients. [Results] Intra-class correlation of intra-rater measurements indicated an excellent level of reliability (0.91), and intra-class correlation of inter-rater measurements showed a good level of reliability in the assessment of FHP (0.75). [Conclusion] Assessment of FHP is an important component of evaluation and affects the design of the treatment regimen. The assessment of FHP was reliably measured by two physical therapists. It could therefore become a useful method for assessing FHP in the clinical setting. Future studies will be needed to provide more detailed quantitative data for accurate assessment of posture. PMID:24259842
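Intra-class correlations like the 0.91 and 0.75 reported above can be computed from a simple one-way random-effects ANOVA decomposition. The sketch below uses ICC(1,1) with made-up ratings, not the study's data or its exact ICC model.

```python
import numpy as np

# One-way random-effects ICC(1,1) for k = 2 ratings per subject.
# Rows are subjects, columns are the two ratings; data are illustrative.
ratings = np.array([
    [54, 56], [48, 47], [60, 62], [39, 40], [51, 50],
    [45, 44], [58, 59], [42, 41], [50, 52], [47, 46],
], dtype=float)
n, k = ratings.shape
grand = ratings.mean()
row_means = ratings.mean(axis=1)

# Between-subject and within-subject mean squares from the ANOVA table
ms_between = k * ((row_means - grand) ** 2).sum() / (n - 1)
ms_within = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))

icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print(round(icc, 3))
```

High agreement between the two columns relative to the between-subject spread drives the ICC toward 1; identical columns would give exactly 1.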
Tidhar, Dorit; Armer, Jane M; Deutscher, Daniel; Shyu, Chi-Ren; Azuri, Josef; Madsen, Richard
2015-01-01
Understanding whether a true change has occurred during the process of care is of utmost importance in lymphedema management secondary to cancer treatments. Decisions about when to order a garment, start an exercise program, and begin or end therapy are based primarily on measurements of limb volume, based on circumferences taken by physiotherapists using a flexible tape. This study aimed to assess intra-rater and inter-rater reliability of measurements taken by physiotherapists of legs and arms with and without lymphedema and to evaluate whether there is a difference in reliability when measuring a healthy versus a lymphedematous limb. The intra-rater reliability of arm and leg measurements by trained physiotherapists is very high (scaled standard error of measurements (SEMs) for an arm and a leg volume were 0.82% and 0.64%, respectively) and a cut-point of 1% scaled SEM may be recommended as a threshold for acceptable reliability. Physiotherapists can rely on the same error when assessing lymphedematous or healthy limbs. For those who work in teams and share patients, practice is needed in synchronizing the measurements and regularly monitoring their inter-rater reliability. PMID:26437431
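A scaled SEM like the 0.82% and 0.64% above can be obtained from test-retest data as SD(differences) / sqrt(2), expressed as a percentage of the mean. The limb volumes below are invented illustration data, not the study's measurements.

```python
import numpy as np

# Scaled SEM sketch: SEM = SD of test-retest differences / sqrt(2),
# reported as a percentage of the mean volume.  Data are illustrative.
trial1 = np.array([2510.0, 1980.0, 3105.0, 2750.0, 2260.0])  # limb volume, ml
trial2 = np.array([2528.0, 1995.0, 3090.0, 2762.0, 2249.0])

diff = trial1 - trial2
sem = diff.std(ddof=1) / np.sqrt(2)
scaled_sem = 100.0 * sem / np.concatenate([trial1, trial2]).mean()
print(round(scaled_sem, 2))   # below the 1% cut-point => acceptable
```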
2013-01-01
Background Two-dimensional strain measurements obtained by speckle tracking echocardiography (STE) have been reported in both humans and dogs. Incorporation of this technique into canine clinical practice requires the availability of measurements from clinically normal dogs, ideally of the same breed, taken under normal clinical conditions. The aims of this prospective study were to assess if it is possible to obtain STE data during a routine echocardiographic examination in Irish Wolfhound dogs and that these data will provide reference values and an estimation of measurement error. Methods Fifty-four healthy mature Irish Wolfhounds were used. These were scanned under normal clinical conditions to obtain in one session both standard echocardiographic parameters and STE data. Measurement error was determined separately in 5 healthy mature Irish Wolfhounds. Results Eight dogs were rejected by the software algorithm for reasons of image quality, resulting in a total of 46 dogs (85.2%) being included in the statistical analysis. In 46 dogs it was possible to obtain STE data from three scanning planes, as well as to measure the rotation of the left ventricle at two levels and thus calculate the torsion of the heart. The mean peak radial strain at the cardiac apex (RS-apex) was 45.1 ± 10.4% (n = 44), and the mean peak radial strain at the base (RS-base) was 36.9 ± 14.7% (n = 46). The mean peak circumferential strain at the apex (CS-apex) was -24.8 ± 6.2% (n = 44), and the mean peak circumferential strain at the heart base (CS-base) was -15.9 ± 3.2% (n = 44). The mean peak longitudinal strain (LS) was -16.2 ± 3.0% (n = 46). The calculated mean peak torsion of the heart was 11.6 ± 5.1 degrees (n = 45). The measurement error was 24.8%, 26.4%, 11.5%, 6.7%, 9.0% and 10 degrees, for RS-apex, RS-base, CS-apex, CS-base, LS and torsion, respectively. Conclusions It is concluded that this technique can be included in a normal
Background: Exposure measurement error is a concern in long-term PM2.5 health studies using ambient concentrations as exposures. We assessed error magnitude by estimating calibration coefficients as the association between personal PM2.5 exposures from validation studies and typ...
Bard, D.; Chang, C.; Kahn, S. M.; Gilmore, K.; Marshall, S.; Kratochvil, J. M.; Huffenberger, K. M.; May, M.; AlSayyad, Y.; Connolly, A.; Gibson, R. R.; Jones, L.; Krughoff, S.; Ahmad, Z.; Bankert, J.; Grace, E.; Hannel, M.; Lorenz, S.; Haiman, Z.; Jernigan, J. G.; and others
2013-09-01
We study the effect of galaxy shape measurement errors on predicted cosmological constraints from the statistics of shear peak counts with the Large Synoptic Survey Telescope (LSST). We use the LSST Image Simulator in combination with cosmological N-body simulations to model realistic shear maps for different cosmological models. We include both galaxy shape noise and, for the first time, measurement errors on galaxy shapes. We find that the measurement errors considered have relatively little impact on the constraining power of shear peak counts for LSST.
NASA Astrophysics Data System (ADS)
Sawicki, J.; Kowalczyk, M.
2016-06-01
The aim of this study was to determine the values of the collimation and horizontal-axis errors of the ZF 5006h laser scanner owned by the Department of Geodesy and Cartography, Warsaw University of Technology, and then to determine the effect of those errors on the results of measurements. An experiment was performed, involving measurement of a test field established in the Main Hall of the Main Building of the Warsaw University of Technology, during which the values of the instrumental errors of interest were determined. A universal computer program was then developed that automates the proposed algorithm and is capable of applying corrections to measured target coordinates, or even to entire point clouds from individual stations.
NASA Astrophysics Data System (ADS)
Wilczynska, Michael R.; Webb, John K.; King, Julian A.; Murphy, Michael T.; Bainbridge, Matthew B.; Flambaum, Victor V.
2015-12-01
We present an analysis of 23 absorption systems along the lines of sight towards 18 quasars in the redshift range of 0.4 ≤ z_abs ≤ 2.3 observed on the Very Large Telescope (VLT) using the Ultraviolet and Visual Echelle Spectrograph (UVES). Considering both statistical and systematic error contributions we find a robust estimate of the weighted mean deviation of the fine-structure constant from its current, laboratory value of Δα/α = (0.22 ± 0.23) × 10^-5, consistent with the dipole variation reported in Webb et al. and King et al. This paper also examines modelling methodologies and systematic effects. In particular, we focus on the consequences of fitting quasar absorption systems with too few absorbing components and of selectively fitting only the stronger components in an absorption complex. We show that using insufficient continuum regions around an absorption complex causes a significant increase in the scatter of a sample of Δα/α measurements, thus unnecessarily reducing the overall precision. We further show that fitting absorption systems with too few velocity components also results in a significant increase in the scatter of Δα/α measurements, and in addition causes Δα/α error estimates to be systematically underestimated. These results thus identify some of the potential pitfalls in analysis techniques and provide a guide for future analyses.
The effect of certain rater roles on confidence in physician's assistant ratings.
Dowaliby, F J
1977-11-01
Previous research on the psychology of confidence suggests that the more confident a rater is in his judgment the more accurate is his rating. The purpose of the present study was to investigate possible differences among raters in their confidence in competency ratings which they had provided. Results indicated significant differences due to the rater's interpersonal role with the ratee and the particular aspect of competence rated. Greater simple structure of competence ratings when adjusted for rater confidence is also shown. Rater confidence is discussed as an index for rater selection and as a moderator variable for competence ratings.
NASA Astrophysics Data System (ADS)
Mammarella, I.; Rannik, U.; Ojala, A.; Heiskanen, J. J.; Vesala, T.
2014-12-01
We present eddy covariance fluxes of carbon dioxide, sensible and latent heat measured during an open-water period. The measurements are carried out at Lake Kuivajärvi (Hyytiälä, southern Finland), a small boreal lake (0.63 km2), surrounded by forest. The measurement platform, including the EC system and other auxiliary measurements, is located approximately 1.8 km and 0.8 km from the Northern and Southern shorelines, respectively. Standard quality control criteria were applied and the 30 min fluxes were flagged accordingly (Foken and Wichura, 1996). The steady-state test removed more CO2 flux records than H and LE records, the fraction of non-stationary records being 35%, 7% and 5% respectively. Similarly, the total relative random error (ΔF) for CO2 flux was about twice that estimated for energy fluxes. Median value of random error (δF) for CO2 was 0.14 μmol m^-2 s^-1, corresponding to 26% of the observed flux. If only 30 min periods with the best quality (flag=0) are included, median values of δF and ΔF were 0.11 μmol m^-2 s^-1 and 20% respectively, showing that, on average, more conservative flux quality criteria lead to lower flux random uncertainty. The estimated values of ΔF are close to the ones reported in other ecosystems (e.g. Finkelstein and Sims, 2001). However, different diurnal courses of random errors were found for energy and CO2 fluxes over the lake and the surrounding forest. The implications of extending standard quality criteria (including also those based on friction velocity, atmospheric stability, etc., which are routinely used for land-based flux towers) to EC flux measurements over freshwater ecosystems are further analysed and discussed. References: Aubinet et al., 2012, Springer; Foken and Wichura, 1996, Agric. For. Meteorol., 78, 83-105; Finkelstein and Sims, 2001, J. Geophys. Res.-Atmos., 106, 3503-3509
McNamee, R
2005-10-01
This paper addresses optimal design and efficiency of two-phase (2P) case-control studies in which the first phase uses an error-prone exposure measure, Z, while the second phase measures true, dichotomous exposure, X, in a subset of subjects. Optimal design of a separate second phase, to be added to a preexisting study, is also investigated. Differential misclassification is assumed throughout. Results are also applicable to 2P cohort studies with error-prone and error-free measures of disease status but error-free exposure measures. While software based on the mean score method of Reilly and Pepe (1995, Biometrika 82, 299-314) can find optimal designs given pilot data, the lack of simple formulae makes it difficult to generalize about efficiency compared to one-phase (1P) studies based on X alone. Here, formulae for the optimal ratios of cases to controls and first- to second-phase sizes, and the optimal second-phase stratified sampling fractions, given a fixed budget, are given. The maximum efficiency of 2P designs compared to a 1P design is deduced and is shown to be bounded from above by a function of the sensitivities and specificities of Z. The efficiency of 'balanced' separate second-phase designs (Breslow and Cain, 1988, Biometrika 75, 11-20), in which equal numbers of subjects are chosen from each first-phase stratum, compared to optimal design is deduced, enabling situations where balanced designs are nearly optimal to be identified.
Measurement error in time-series analysis: a simulation study comparing modelled and monitored data
2013-01-01
Background Assessing health effects from background exposure to air pollution is often hampered by the sparseness of pollution monitoring networks. However, regional atmospheric chemistry-transport models (CTMs) can provide pollution data with national coverage at fine geographical and temporal resolution. We used statistical simulation to compare the impact on epidemiological time-series analysis of additive measurement error in sparse monitor data as opposed to geographically and temporally complete model data. Methods Statistical simulations were based on a theoretical area of 4 regions each consisting of twenty-five 5 km × 5 km grid-squares. In the context of a 3-year Poisson regression time-series analysis of the association between mortality and a single pollutant, we compared the error impact of using daily grid-specific model data as opposed to daily regional average monitor data. We investigated how this comparison was affected if we changed the number of grids per region containing a monitor. To inform simulations, estimates (e.g. of pollutant means) were obtained from observed monitor data for 2003–2006 for national network sites across the UK and corresponding model data that were generated by the EMEP-WRF CTM. Average within-site correlations between observed monitor and model data were 0.73 and 0.76 for rural and urban daily maximum 8-hour ozone respectively, and 0.67 and 0.61 for rural and urban log_e(daily 1-hour maximum NO2). Results When regional averages were based on 5 or 10 monitors per region, health effect estimates exhibited little bias. However, with only 1 monitor per region, the regression coefficient in our time-series analysis was attenuated by an estimated 6% for urban background ozone, 13% for rural ozone, 29% for urban background log_e(NO2) and 38% for rural log_e(NO2). For grid-specific model data the corresponding figures were 19%, 22%, 54% and 44% respectively, i.e. similar for rural log_e(NO2) but more marked for urban log_e(NO2
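The attenuation mechanism studied above is the classical measurement-error result: additive error shrinks a regression slope toward zero by the reliability ratio tau^2 / (tau^2 + sigma^2). The sketch below demonstrates this with ordinary linear regression and invented parameters; the paper itself uses Poisson time-series regression, so this is only the underlying principle, not their simulation design.

```python
import numpy as np

rng = np.random.default_rng(3)

# Classical additive error attenuates the slope by tau2 / (tau2 + sig2).
n, beta = 50000, 0.8
tau2, sig2 = 4.0, 1.0                          # illustrative variances
x = rng.normal(0, np.sqrt(tau2), n)            # true exposure
z = x + rng.normal(0, np.sqrt(sig2), n)        # error-prone measurement
y = beta * x + rng.normal(0, 1.0, n)           # outcome

slope_true = np.polyfit(x, y, 1)[0]            # ~ beta
slope_err = np.polyfit(z, y, 1)[0]             # ~ beta * tau2 / (tau2 + sig2)
print(slope_true, slope_err)
```

With these values the expected attenuated slope is 0.8 × 4/5 = 0.64, a 20% attenuation, the same order as the biases the abstract reports for sparse monitor data.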
Pustovitov, V. D.
2008-01-15
The possibility is discussed of determining the amplitude and phase of a static resonant error field in a tokamak by means of dynamic magnetic measurements. The method proposed assumes measuring the plasma response to a varying external helical magnetic field with a small (a few gauss) amplitude. The case is considered in which the plasma is probed by square pulses with a duration much longer than the time of the transition process. The plasma response is assumed to be linear, with a proportionality coefficient being dependent on the plasma state. The analysis is carried out in a standard cylindrical approximation. The model is based on Maxwell's equations and Ohm's law and is thus capable of accounting for the interaction of large-scale modes with the conducting wall of the vacuum chamber. The method can be applied to existing tokamaks.
Determination of instrumentation errors from measured data using maximum likelihood method
NASA Technical Reports Server (NTRS)
Keskar, D. A.; Klein, V.
1980-01-01
The maximum likelihood method is used for estimation of unknown initial conditions, constant bias and scale factor errors in measured flight data. The model for the system to be identified consists of the airplane six-degree-of-freedom kinematic equations, and the output equations specifying the measured variables. The estimation problem is formulated in a general way and then, for practical use, simplified by ignoring the effect of process noise. The algorithm developed is first applied to computer-generated data having different levels of process noise to demonstrate the robustness of the method. Then the real flight data are analyzed and the results compared with those obtained by the extended Kalman filter algorithm.
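With Gaussian measurement noise and process noise ignored, maximum likelihood estimation of a constant bias b and scale factor a in a measurement z = a·x + b + v reduces to least squares. The sketch below shows only that reduced problem on a made-up signal; it is not the paper's full six-degree-of-freedom formulation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Sketch: ML estimation of scale factor a and bias b in z = a*x + b + v
# (Gaussian v, no process noise) is ordinary least squares.
t = np.linspace(0, 10, 500)
x = np.sin(t)                                  # stand-in "true" flight variable
a_true, b_true = 1.05, 0.3                     # hypothetical instrument errors
z = a_true * x + b_true + rng.normal(0, 0.02, t.size)

A = np.column_stack([x, np.ones_like(x)])      # design matrix [x, 1]
(a_hat, b_hat), *_ = np.linalg.lstsq(A, z, rcond=None)
print(a_hat, b_hat)
```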
Three-dimensional shape optical measurement using constant gap control and error compensation
Park, Kyihwan; Kim, Sangyoo; Choi, Kyosoon
2008-03-15
The optical laser displacement sensor is widely used for noncontact measurement of the three-dimensional (3D) shape profile of an object surface. When the surface of an object has a slope variation, the sensor gain varies proportionally with that of the object surface. In order to solve the sensor gain variation problem, the constant gap control method is applied to adjust the gap to the nominal distance. Control error compensation is also proposed to cope with the situation even when the gap is not perfectly controlled to the nominal distance, using an additional sensor attached to the actuator. 3D shape measurement applying the proposed constant gap control method shows better performance than the constant sensor height method.
Sim, Jae Hoon; Lauxmann, Michael; Chatzimichalis, Michail; Röösli, Christof; Eiber, Albrecht; Huber, Alexander M
2010-12-01
Previous studies have suggested complex modes of physiological stapes motions based upon various measurements. The goal of this study was to analyze the detailed errors in measurement of the complex stapes motions using laser Doppler vibrometer (LDV) systems, which are highly sensitive to the stimulation intensity and the exact angulations of the stapes. Stapes motions were measured with acoustic stimuli as well as mechanical stimuli using a custom-made three-axis piezoelectric actuator, and errors in the motion components were analyzed. The ratio of error in each motion component was reduced by increasing the magnitude of the stimuli, but the improvement was limited when the motion component was small relative to other components. This problem was solved with an improved reflectivity on the measurement surface. Errors in estimating the position of the stapes also caused errors on the coordinates of the measurement points and the laser beam direction relative to the stapes footplate, thus producing errors in the 3-D motion components. This effect was small when the position error of the stapes footplate did not exceed 5 degrees.
Internal errors of ground-based terrestrial earthshine measurements in 5 colour bands.
NASA Astrophysics Data System (ADS)
Thejll, Peter; Gleisner, Hans; Flynn, Chris
2015-04-01
Measurements of earthshine intensity could be an important complement to satellite-based observations of terrestrial visual and near-IR radiative budgets because they are independent and relatively inexpensive to obtain and also offer different potentials for long-term bias stability. Using ground-based photometric instruments, the Moon is imaged several times a night through a range of photometric filters, and the ratio of the intensities of the dark (Earth-lit) and bright (Sun-lit) sides is calculated - this ratio is proportional to terrestrial albedo. Using forward modelling of the expected ratio, given assumptions about reflectance, single-scattering albedo, and light-scattering processes it is possible to deduce the terrestrial albedo. In this poster we present multicolour photometric results from observations on 10 nights, obtained at the NOAA observatory on Mauna Loa, Hawaii, in 2011. The Moon had different phases on these nights and we discuss in detail the behaviour of internal errors as a function of phase. The internal error is dependent on the photon-statistics of the images obtained and its magnitude is investigated by use of bootstrapping with replacement of observations. Results indicate that standard Johnson B and V band equivalent Lambert albedos can be obtained with precisions (1 standard deviation) in the 0.1 to 1% range for phases between 40 and 90 degrees. For longer wavelengths, corresponding to broader bands on either side of the 'Vegetation edge' at 750 nm, we see larger variability in the albedo determinations and discuss whether these are due to atmospheric conditions or represent fast, intrinsic terrestrial albedo variations. The accuracy of these results, however, appears to depend on method choices, in particular the choice of lunar reflectance model; this 'external error' will be investigated in future analyses.
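The internal error estimate described, bootstrapping with replacement applied to an intensity ratio, can be sketched directly. All intensities and sample sizes below are invented; the point is only the resampling pattern.

```python
import numpy as np

rng = np.random.default_rng(5)

# Bootstrap-with-replacement sketch for the internal error of a
# dark/bright intensity ratio; all numbers are illustrative.
dark = rng.normal(100.0, 5.0, 200)      # dark-side (Earth-lit) samples
bright = rng.normal(9000.0, 50.0, 200)  # bright-side (Sun-lit) samples

boot = np.empty(2000)
for i in range(boot.size):
    d = rng.choice(dark, dark.size, replace=True)
    b = rng.choice(bright, bright.size, replace=True)
    boot[i] = d.mean() / b.mean()

# Ratio estimate and its internal 1-sigma (bootstrap) error
print(boot.mean(), boot.std())
```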
Butt, Nathalie; Slade, Eleanor; Thompson, Jill; Malhi, Yadvinder; Riutta, Terhi
2013-06-01
A typical way to quantify aboveground carbon in forests is to measure tree diameters and use species-specific allometric equations to estimate biomass and carbon stocks. Using "citizen scientists" to collect data that are usually time-consuming and labor-intensive can play a valuable role in ecological research. However, data validation, such as establishing the sampling error in volunteer measurements, is a crucial, but little studied, part of utilizing citizen science data. The aims of this study were to (1) evaluate the quality of tree diameter and height measurements carried out by volunteers compared to expert scientists and (2) estimate how sensitive carbon stock estimates are to these measurement sampling errors. Using all diameter data measured with a diameter tape, the volunteer mean sampling error (difference between repeated measurements of the same stem) was 9.9 mm, and the expert sampling error was 1.8 mm. Excluding those sampling errors > 1 cm, the mean sampling errors were 2.3 mm (volunteers) and 1.4 mm (experts) (this excluded 14% [volunteer] and 3% [expert] of the data). The sampling error in diameter measurements had a small effect on the biomass estimates of the plots: a volunteer (expert) diameter sampling error of 2.3 mm (1.4 mm) translated into 1.7% (0.9%) change in the biomass estimates calculated from species-specific allometric equations based upon diameter. Height sampling error had a dependent relationship with tree height. Including height measurements in biomass calculations compounded the sampling error markedly; the impact of volunteer sampling error on biomass estimates was +/- 15%, and the expert range was +/- 9%. Using dendrometer bands, used to measure growth rates, we calculated that the volunteer (vs. expert) sampling error was 0.6 mm (vs. 0.3 mm), which is equivalent to a difference in carbon storage of +/- 0.011 kg C/yr (vs. +/- 0.002 kg C/yr) per stem. Using a citizen science model for monitoring carbon stocks not only has
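The sensitivity of biomass to diameter error follows from the power-law form of allometric equations: for B = a·D^b, a relative diameter error dD/D changes biomass by roughly b·dD/D. The coefficients below are hypothetical, not the study's species-specific values, so the resulting percentage differs from the 1.7% reported above.

```python
# Propagating a diameter sampling error through a generic allometric
# equation B = a * D**b.  Coefficients are hypothetical illustrations,
# not the species-specific equations used in the study.
a, b = 0.0509, 2.5           # hypothetical allometric coefficients
d_mm, err_mm = 250.0, 2.3    # stem diameter and volunteer sampling error (mm)

biomass = a * (d_mm / 10.0) ** b                 # diameter converted to cm
biomass_hi = a * ((d_mm + err_mm) / 10.0) ** b   # shifted by the error
pct_change = 100.0 * (biomass_hi - biomass) / biomass
print(round(pct_change, 1))   # approximately b * (err_mm / d_mm) * 100
```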
Errors in acoustic doppler profiler velocity measurements caused by flow disturbance
Mueller, D.S.; Abad, J.D.; Garcia, C.M.; Gartner, J.W.; Garcia, M.H.; Oberg, K.A.
2007-01-01
Acoustic Doppler current profilers (ADCPs) are commonly used to measure streamflow and water velocities in rivers and streams. This paper presents laboratory, field, and numerical model evidence of errors in ADCP measurements caused by flow disturbance. A state-of-the-art three-dimensional computational fluid dynamics model is validated against, and used to complement, field and laboratory observations of flow disturbance and its effect on measured velocities. Results show that near the instrument, flow velocities measured by the ADCP are neither the undisturbed stream velocity nor the velocity of the flow field around the ADCP. The velocities measured by the ADCP are biased low due to the downward flow near the upstream face of the ADCP and the upward recovering flow in the path of the downstream transducer, which violate the flow-homogeneity assumption used to transform beam velocities into Cartesian velocity components. The magnitude of the bias depends on the deployment configuration, the diameter of the instrument, and the approach velocity, and was observed to range from more than 25% at 5 cm from the transducers to less than 1% at about 50 cm from the transducers for the scenarios simulated. © 2007 ASCE.
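The flow-homogeneity assumption mentioned in this abstract can be illustrated with the textbook beam-to-Cartesian transform for a four-beam Janus ADCP. The beam pairing, sign convention, and 20° beam angle below are illustrative assumptions; real instruments apply manufacturer-specific transformation matrices. The key point is that the transform only recovers the true velocity if all beams sample the same water velocity, which the instrument-induced flow disturbance violates.

```python
import math

# Sketch: beam-to-Cartesian transform for a 4-beam Janus ADCP, valid only
# under the flow-homogeneity assumption (all beams see the same velocity
# in a given depth cell). Beam pairing, signs, and the 20-degree beam
# angle are illustrative; real instruments use manufacturer matrices.

def beams_to_cartesian(b1, b2, b3, b4, theta_deg=20.0):
    """Convert four along-beam (radial) velocities to (u, v, w)."""
    s = math.sin(math.radians(theta_deg))
    c = math.cos(math.radians(theta_deg))
    u = (b1 - b2) / (2 * s)            # horizontal component from one opposing pair
    v = (b3 - b4) / (2 * s)            # horizontal component from the other pair
    w = (b1 + b2 + b3 + b4) / (4 * c)  # vertical component from all four beams
    return u, v, w

# Homogeneous 1 m/s horizontal flow: beam 1 sees +sin(theta), beam 2 sees -sin(theta).
u, v, w = beams_to_cartesian(math.sin(math.radians(20)), -math.sin(math.radians(20)), 0.0, 0.0)
```

When the flow near the instrument is disturbed, opposing beams sample different velocities, so the same formulas return a biased `u`, `v`, `w`, which is the error mechanism the paper quantifies.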
Satpute, Kiran; Hall, Toby; Kumar, Senthil; Deodhar, Ankeeta
2016-10-01
Shoulder hand behind back (HBB) range of motion (ROM) is a useful measure of impairment and treatment outcome. The purpose of this repeated-measures study was to determine the inter- and intra-rater reliability of a new, simplified method of measuring HBB ROM. Two experienced raters measured HBB ROM with a bubble inclinometer in 25 people (aged 42-75 years, 14 female) with unilateral shoulder dysfunction and 25 age- and gender-matched asymptomatic subjects on two different occasions. Statistical analysis included calculation of intra-class correlation coefficients (ICCs), minimal detectable change (MDC), standard error of measurement (SEM), the Pearson correlation coefficient (r), the coefficient of determination (R²), and the lower bound score. Mean HBB ROM was 108.6° (SD = 16.3) on the pain-free side and 23.9° (SD = 10.5) on the symptomatic side. Both intra-rater and inter-rater reliability were high (ICC > 0.80). For asymptomatic people the SEM was at most 3° and the MDC was 8°, with a strong correlation between the dominant and nondominant sides (r > 0.72). The mean absolute values and lower bound scores were at most 10.2° and 26.0°, respectively. These results indicate that this new method of measuring HBB ROM is accurate, has good inter- and intra-rater reliability, and provides normal values for between-limb ROM variability. PMID:27618126
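The SEM and MDC statistics reported in this abstract are conventionally derived from the standard deviation and the ICC. The sketch below uses the common textbook formulas (SEM = SD·√(1−ICC); MDC₉₅ = 1.96·√2·SEM); the study's exact computation may differ, and the SD and ICC values plugged in are taken from the abstract for illustration.

```python
import math

# Sketch of the standard relations between the reliability statistics
# reported in the abstract (textbook formulas; the study's exact
# computation may differ).

def sem(sd: float, icc: float) -> float:
    """Standard error of measurement from the score SD and the ICC."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sem_value: float) -> float:
    """Minimal detectable change at the 95% confidence level."""
    return 1.96 * math.sqrt(2.0) * sem_value

# e.g. SD = 10.5 degrees with an assumed ICC of 0.95:
s = sem(10.5, 0.95)   # ~2.3 degrees
m = mdc95(s)          # ~6.5 degrees
```

With an ICC of 0.95, an SD of 10.5° yields an SEM near 2.3° and an MDC near 6.5°, the same order as the "SEM at most 3°, MDC 8°" reported above.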