ERIC Educational Resources Information Center
Kachchaf, Rachel; Solano-Flores, Guillermo
2012-01-01
We examined how rater language background affects the scoring of short-answer, open-ended test items in the assessment of English language learners (ELLs). Four native English-speaking and four native Spanish-speaking certified bilingual teachers scored 107 responses of fourth- and fifth-grade Spanish-speaking ELLs to mathematics items administered in…
How Good Are Our Raters? Rater Errors in Clinical Skills Assessment
ERIC Educational Resources Information Center
Iramaneerat, Cherdsak; Yudkowsky, Rachel
2006-01-01
A multi-faceted Rasch measurement (MFRM) model was used to analyze a clinical skills assessment of 173 fourth-year medical students in a Midwestern medical school to investigate four types of rater errors: leniency, inconsistency, halo, and restriction of range. Each student performed six clinical tasks with six standardized patients (SPs), who…
Beltran-Alacreu, Hector; López-de-Uralde-Villanueva, Ibai; Paris-Alemany, Alba; Angulo-Díaz-Parreño, Santiago; La Touche, Roy
2014-01-01
[Purpose] The aim of this study was to determine the inter-rater and intra-rater reliability of mandibular range of motion (ROM) measurements taken in a neutral craniocervical position. [Subjects and Methods] The sample consisted of 50 asymptomatic subjects. Two raters measured four mandibular ROMs (maximal mouth opening (MMO), lateral excursions, and protrusion) using the craniomandibular scale. Subjects alternated between raters, receiving two complete trials per day, two days apart. Intra- and inter-rater reliability was determined using intra-class correlation coefficients (ICCs). Bland-Altman analysis was used to assess reliability, bias, and variability. Finally, the standard error of measurement (SEM) and minimal detectable change (MDC) were analyzed to measure responsiveness. [Results] Reliability was good for MMO (inter-rater, ICC=0.95−0.96; intra-rater, ICC=0.95−0.96) and for protrusion (inter-rater, ICC=0.92−0.94; intra-rater, ICC=0.93−0.96). Reliability was moderate for lateral excursions. The MMO and protrusion SEMs ranged from 0.74 to 0.82 mm and from 0.29 to 0.49 mm, while the MDCs ranged from 1.73 to 1.91 mm and from 0.69 to 0.14 mm, respectively. The analysis showed no random or systematic error, suggesting that a learning effect did not affect reliability. [Conclusion] A standardized protocol for assessment of mandibular ROM in a neutral craniocervical position obtained good inter- and intra-rater reliability for MMO and protrusion and moderate inter- and intra-rater reliability for lateral excursions. PMID:25013296
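The SEM and MDC figures quoted in abstracts like the one above are related to the ICC by standard formulas: SEM = SD·√(1 − ICC) and MDC95 = 1.96·√2·SEM. A minimal sketch, using hypothetical SD and ICC values (not taken from the study):

```python
import math

def sem_from_icc(sd, icc):
    """Standard error of measurement from the sample SD and reliability (ICC)."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sem):
    """Minimal detectable change at 95% confidence: 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2.0) * sem

# Hypothetical values: SD of repeated mouth-opening measurements, ICC = 0.95
sem = sem_from_icc(sd=3.5, icc=0.95)
print(round(sem, 2), round(mdc95(sem), 2))
```

The MDC is the smallest change in a patient's score that exceeds measurement noise at the chosen confidence level; changes below it cannot be distinguished from test-retest error.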
Examining rating quality in writing assessment: rater agreement, error, and accuracy.
Wind, Stefanie A; Engelhard, George
2012-01-01
The use of performance assessments in which human raters evaluate student achievement has become increasingly prevalent in high-stakes assessment systems such as those associated with recent policy initiatives (e.g., Race to the Top). In this study, indices of rating quality are compared between two measurement perspectives. Within the context of a large-scale writing assessment, this study focuses on the alignment between indices of rater agreement, error, and accuracy based on traditional and Rasch measurement theory perspectives. Major empirical findings suggest that Rasch-based indices of model-data fit for ratings provide information about raters that is comparable to direct measures of accuracy. The use of easily obtained approximations of direct accuracy measures holds significant implications for monitoring rating quality in large-scale rater-mediated performance assessments.
Do Raters Demonstrate Halo Error When Scoring a Series of Responses?
ERIC Educational Resources Information Center
Ridge, Kirk
This study investigated whether raters in two different training groups would demonstrate halo error when each rater scored all five responses to five different mathematics performance-based items from each student. One group of 20 raters was trained by an experienced scoring director with item-specific scoring rubrics and the opportunity to…
ERIC Educational Resources Information Center
Sheehan, Dwayne P.; Lafave, Mark R.; Katz, Larry
2011-01-01
This study was designed to test the intra- and inter-rater reliability of the University of North Carolina's Balance Error Scoring System in 9- and 10-year-old children. Additionally, a modified version of the Balance Error Scoring System was tested to determine if it was more sensitive in this population ("raw scores"). Forty-six…
Longitudinal Rater Modeling with Splines
ERIC Educational Resources Information Center
Dobria, Lidia
2011-01-01
Performance assessments rely on the expert judgment of raters for the measurement of the quality of responses, and raters unavoidably introduce error in the scoring process. Defined as the tendency of a rater to assign higher or lower ratings, on average, than those assigned by other raters, even after accounting for differences in examinee…
Lang, W Steve; Wilkerson, Judy R; Rea, Dorothy C; Quinn, David; Batchelder, Heather L; Englehart, Dierdre S; Jennings, Kelly J
2014-01-01
The purpose of this study was to examine the extent to which raters' subjectivity impacts measures of teacher dispositions using the Dispositions Assessments Aligned with Teacher Standards (DAATS) battery. This is an important component of the collection of evidence for the validity and reliability of inferences made using the scale. It also provides needed support for the use of subjective affective measures in teacher training and other professional preparation programs, since these measures are often feared to be unreliable because of rater effects. It demonstrates the advantages of the multi-faceted Rasch measurement model (MFRM) as an alternative to the methods typically used in preparation programs, such as Cohen's kappa. DAATS instruments require subjective scoring using a six-point rating scale derived from the affective taxonomy of Krathwohl, Bloom, and Masia (1964). Rater effects pose a serious challenge and can worsen or drift over time. Errors in rater judgment can reduce the accuracy of ratings; such effects are common, but can be lessened through rater training and ongoing monitoring. This study uses MFRM to detect and understand the nature of these effects.
Tous-Fajardo, Julio; Moras, Gerard; Rodríguez-Jiménez, Sergio; Usach, Robert; Doutres, Daniel Moreno; Maffiuletti, Nicola A
2010-08-01
Tensiomyography (TMG) is a relatively novel technique for assessing the muscle mechanical response based on radial muscle belly displacement following a single electrical stimulus. Although intra-session reliability has been found to be good, inter-rater reliability and the influence of sensor repositioning and electrode placement on TMG measurements are unknown. The purpose of this study was to analyze the inter-rater reliability of vastus medialis muscle contractile property measurements obtained with TMG, as well as the effect of inter-electrode distance (IED). Five contractile parameters were analyzed from vastus medialis muscle belly displacement-time curves: maximal displacement (Dm), contraction time (Tc), sustain time (Ts), delay time (Td), and half-relaxation time (Tr). The inter-rater reliability and the IED effect on these measurements were evaluated in 18 subjects. Intra-class correlation coefficients, standard errors of measurement, Bland-Altman systematic bias and random error, and coefficients of variation were used as measures of reliability. Overall, good to excellent inter-rater reliability was found for all contractile parameters except Tr, which showed insufficient reliability. Alterations in IED significantly affected Dm, with a trend for all the other parameters. The present results legitimate the use of TMG for the assessment of vastus medialis muscle contractile properties, particularly for Dm and Tc. It is recommended to avoid Tr quantification and IED modifications during multiple TMG measurements.
Bodilsen, Ann Christine; Juul-Larsen, Helle Gybel; Petersen, Janne; Beyer, Nina; Andersen, Ove; Bandholm, Thomas
2015-01-01
Objective Physical performance measures can be used to predict functional decline and increased dependency in older persons. However, few studies have assessed the feasibility or reliability of such measures in hospitalized older patients. Here we assessed the feasibility and inter-rater reliability of four simple measures of physical performance in acutely admitted older medical patients. Design During the first 24 hours of hospitalization, the following were assessed twice by different raters in 52 (≥ 65 years) patients admitted for acute medical illness: isometric hand grip strength, 4-meter gait speed, 30-s chair stand and Cumulated Ambulation Score. Relative reliability was expressed as weighted kappa for the Cumulated Ambulation Score or as intra-class correlation coefficient (ICC1,1) and lower limit of the 95%-confidence interval (LL95%) for grip strength, gait speed, and 30-s chair stand. Absolute reliability was expressed as the standard error of measurement and the smallest real difference as a percentage of their respective means (SEM% and SRD%). Results The primary reasons for admission of the 52 included patients were infectious disease and cardiovascular illness. The mean± SD age was 78±8.3 years, and 73.1% were women. All patients performed grip strength and Cumulated Ambulation Score testing, 81% performed the gait speed test, and 54% completed the 30-s chair stand test (46% were unable to rise without using the armrests). No systematic bias was found between first and second tests or between raters. The weighted kappa for the Cumulated Ambulation Score was 0.76 (0.60–0.92). The ICC1,1 values were as follows: grip strength, 0.95 (LL95% 0.92); gait speed, 0.92 (LL95% 0.73), and 30-s chair stand, 0.82 (LL95% 0.67). The SEM% values for grip strength, gait speed, and 30-s chair stand were 8%, 7%, and 18%, and the SRD95% values were 22%, 17%, and 49%. Conclusion In acutely admitted older medical patients, grip strength, gait speed, and the
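The weighted kappa reported above for the Cumulated Ambulation Score penalizes rater disagreements in proportion to their distance on the ordinal scale. A minimal sketch of linearly weighted kappa; the score categories and ratings below are illustrative, not from the study:

```python
import numpy as np

def weighted_kappa(r1, r2, n_cat):
    """Linearly weighted Cohen's kappa for two raters on an ordinal scale
    with categories 0..n_cat-1."""
    obs = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        obs[a, b] += 1          # observed joint frequency
    obs /= obs.sum()
    # Expected joint proportions from the two raters' marginals
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    # Linear disagreement weights: 0 on the diagonal, growing with distance
    i, j = np.indices((n_cat, n_cat))
    w = np.abs(i - j) / (n_cat - 1)
    return 1.0 - (w * obs).sum() / (w * exp).sum()

# Hypothetical ordinal ratings (0-2) from two raters on 8 patients
a = [0, 1, 2, 2, 1, 0, 1, 2]
b = [0, 1, 2, 1, 1, 0, 2, 2]
print(round(weighted_kappa(a, b, 3), 3))
```

With quadratic weights (squared distance) the same function structure yields the quadratically weighted kappa, which is often closer to an ICC in value.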
Inter-Rectus Distance Measurement Using Ultrasound Imaging: Does the Rater Matter?
Keshwani, Nadia; Hills, Nicole; McLean, Linda
2016-01-01
Purpose: To investigate the interrater reliability of inter-rectus distance (IRD) measured from ultrasound images acquired at rest and during a head-lift task in parous women and to establish the standard error of measurement (SEM) and minimal detectable change (MDC) between two raters. Methods: Two physiotherapists independently acquired ultrasound images of the anterior abdominal wall from 17 parous women and measured IRD at four locations along the linea alba: at the superior border of the umbilicus, at 3 cm and 5 cm above the superior border of the umbilicus, and at 3 cm below the inferior border of the umbilicus. The interrater reliability of the IRD measurements was determined using intra-class correlation coefficients (ICCs). Bland-Altman analyses were used to detect bias between the raters, and SEM and MDC values were established for each measurement site. Results: When the two raters performed their own image acquisition and processing, ICCs(3,5) ranged from 0.72 to 0.91 at rest and from 0.63 to 0.96 during head lift, depending on the anatomical measurement site. Bland-Altman analyses revealed no systematic bias between the raters. SEM values ranged from 0.23 cm to 0.71 cm, and MDC values ranged from 0.64 cm to 1.97 cm. Conclusion: When using ultrasound imaging to measure IRD in women, it is acceptable for different therapists to compare IRDs between patients and within patients over time if IRD is measured above or below the umbilicus. Interrater reliability of IRD measurement is poorest at the level of the superior border of the umbilicus.
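The Bland-Altman analysis used in the study above summarizes paired rater differences by their mean (bias) and 95% limits of agreement (bias ± 1.96 SD of the differences). A minimal sketch with hypothetical paired measurements (not the study's data):

```python
import numpy as np

def bland_altman(x, y):
    """Bland-Altman bias and 95% limits of agreement for paired measurements."""
    diff = np.asarray(x, float) - np.asarray(y, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)        # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical inter-rectus distance measurements (cm) from two raters
rater1 = [2.1, 2.4, 1.8, 3.0, 2.6]
rater2 = [2.0, 2.5, 1.9, 2.8, 2.7]
bias, lo, hi = bland_altman(rater1, rater2)
print(round(bias, 3), round(lo, 3), round(hi, 3))
```

A bias near zero indicates no systematic difference between raters; the limits of agreement bound how far any single pair of readings is likely to differ.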
Measuring the Impact of Rater Negotiation in Writing Performance Assessment
ERIC Educational Resources Information Center
Trace, Jonathan; Janssen, Gerriet; Meier, Valerie
2017-01-01
Previous research in second language writing has shown that when scoring performance assessments even trained raters can exhibit significant differences in severity. When raters disagree, using discussion to try to reach a consensus is one popular form of score resolution, particularly in contexts with limited resources, as it does not require…
Intra and inter-rater reliability study of pelvic floor muscle dynamometric measurements
Martinho, Natalia M.; Marques, Joseane; Silva, Valéria R.; Silva, Silvia L. A.; Carvalho, Leonardo C.; Botelho, Simone
2015-01-01
OBJECTIVE: The aim of this study was to evaluate the intra- and inter-rater reliability of pelvic floor muscle (PFM) dynamometric measurements for maximum and average strengths, as well as endurance. METHOD: A convenience sample of 18 nulliparous women, without any urogynecological complaints, aged between 19 and 31 (mean age of 25.4±3.9) participated in this study. They were evaluated using a pelvic floor dynamometer based on load cell technology. The dynamometric evaluations were repeated in three successive sessions: two on the same day with a rest period of 30 minutes between them, and the third on the following day. All participants were evaluated twice in each session; first by examiner 1, followed by examiner 2. The vaginal dynamometry data were analyzed using three parameters: maximum strength, average strength, and endurance. The Intraclass Correlation Coefficient (ICC) was applied to estimate the PFM dynamometric measurement reliability, with values above 0.75 considered good. RESULTS: The intra- and inter-rater analyses showed good reliability for maximum strength (ICCintra-rater1=0.96, ICCintra-rater2=0.95, and ICCinter-rater=0.96), average strength (ICCintra-rater1=0.96, ICCintra-rater2=0.94, and ICCinter-rater=0.97), and endurance (ICCintra-rater1=0.88, ICCintra-rater2=0.86, and ICCinter-rater=0.92) dynamometric measurements. CONCLUSIONS: The PFM dynamometric measurements showed good intra- and inter-rater reliability for maximum strength, average strength, and endurance, which demonstrates that this is a reliable device that can be used in clinical practice. PMID:25993624
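Many of the records in this collection report ICCs for inter-rater agreement. One common variant, ICC(2,1) in the Shrout-Fleiss scheme (two-way random effects, absolute agreement, single measurement), can be computed directly from the two-way ANOVA mean squares. A sketch with hypothetical dynamometry scores; the data are illustrative only:

```python
import numpy as np

def icc2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    data: (n_subjects, k_raters) array, one score per subject-rater cell."""
    data = np.asarray(data, float)
    n, k = data.shape
    grand = data.mean()
    # Mean squares from a two-way ANOVA without replication
    ms_rows = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
    ms_cols = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # raters
    resid = data - data.mean(axis=1, keepdims=True) - data.mean(axis=0) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical strength scores (N) for 5 subjects rated by 2 examiners
scores = [[30, 31], [45, 44], [28, 30], [50, 52], [38, 37]]
print(round(icc2_1(scores), 3))
```

Because ICC(2,1) charges systematic rater differences (the rater mean square) against agreement, it is the appropriate form when absolute agreement between examiners, not just consistency of ranking, is required.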
A Simulation Study of Rater Agreement Measures with 2x2 Contingency Tables
ERIC Educational Resources Information Center
Ato, Manuel; Lopez, Juan Jose; Benavente, Ana
2011-01-01
A comparison of six rater agreement measures obtained using three different approaches was carried out by means of a simulation study. The coefficients Bennett's σ (1954), Scott's π (1955), Cohen's κ (1960), and Gwet's γ (2008) were selected to represent the classical, descriptive approach, α agreement…
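The chance-corrected coefficients compared in the study above differ only in how expected (chance) agreement is modeled: Bennett's coefficient assumes uniform chance across categories, Scott's π pools the two raters' marginals, and Cohen's κ uses each rater's own marginals. A sketch for a 2x2 contingency table with hypothetical counts:

```python
def agreement_2x2(a, b, c, d):
    """Chance-corrected agreement for a 2x2 table [[a, b], [c, d]]:
    returns (Bennett's S, Scott's pi, Cohen's kappa). Rows are rater 1's
    yes/no, columns rater 2's yes/no."""
    n = a + b + c + d
    po = (a + d) / n                       # observed agreement
    # Bennett's S: uniform chance agreement (1/2 per category for 2x2)
    s = (po - 0.5) / 0.5
    # Scott's pi: chance from the pooled marginal proportion
    p1 = ((a + b) + (a + c)) / (2 * n)
    pe_pi = p1 ** 2 + (1 - p1) ** 2
    pi = (po - pe_pi) / (1 - pe_pi)
    # Cohen's kappa: chance from each rater's own marginals
    pe_k = ((a + b) / n) * ((a + c) / n) + ((c + d) / n) * ((b + d) / n)
    kappa = (po - pe_k) / (1 - pe_k)
    return s, pi, kappa

# Hypothetical table: 40 yes/yes, 5 yes/no, 10 no/yes, 45 no/no
s, pi, kappa = agreement_2x2(40, 5, 10, 45)
print(round(s, 3), round(pi, 3), round(kappa, 3))
```

With balanced marginals the three coefficients nearly coincide, as here; they diverge sharply when category prevalence is skewed, which is the behavior such simulation studies probe.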
Approximate measurement invariance in cross-classified rater-mediated assessments
Kelcey, Ben; McGinn, Dan; Hill, Heather
2014-01-01
An important assumption underlying meaningful comparisons of scores in rater-mediated assessments is that measurement is commensurate across raters. When raters differentially apply the standards established by an instrument, scores from different raters are on fundamentally different scales and no longer preserve a common meaning and basis for comparison. In this study, we developed a method to accommodate measurement noninvariance across raters when measurements are cross-classified within two distinct hierarchical units. We conceptualized random item effects cross-classified graded response models and used random discrimination and threshold effects to test, calibrate, and account for measurement noninvariance among raters. By leveraging empirical estimates of rater-specific deviations in the discrimination and threshold parameters, the proposed method allows us to identify noninvariant items and empirically estimate and directly adjust for this noninvariance within a cross-classified framework. Within the context of teaching evaluations, the results of a case study suggested substantial noninvariance across raters and that establishing an approximately invariant scale through random item effects improves model fit and predictive validity. PMID:25566145
Measurement Error. For Good Measure....
ERIC Educational Resources Information Center
Johnson, Stephen; Dulaney, Chuck; Banks, Karen
No test, however well designed, can measure a student's true achievement because numerous factors interfere with the ability to measure achievement. These factors are sources of measurement error, and the goal in creating tests is to have as little measurement error as possible. Error can result from the test design, factors related to individual…
Analysis of Rater Severity on Written Expression Exam Using Many Faceted Rasch Measurement
ERIC Educational Resources Information Center
Prieto, Gerardo; Nieto, Eloísa
2014-01-01
This paper describes how a Many Faceted Rasch Measurement (MFRM) approach can be applied to performance assessment focusing on rater analysis. The article provides an introduction to MFRM, a description of MFRM analysis procedures, and an example to illustrate how to examine the effects of various sources of variability on test takers' performance…
Noninvariant Measurement in Rater-Mediated Assessments of Teaching Quality
ERIC Educational Resources Information Center
Kelcey, Ben
2014-01-01
Valid and reliable measurement of teaching is essential to evaluating and improving teacher effectiveness and advancing large-scale policy-relevant research in education (Raudenbush & Sadoff, 2008). One increasingly common component of teaching evaluations is the direct observation of teachers in their classrooms. Classroom observations have…
ERIC Educational Resources Information Center
Johnson, David; VanBrackle, Lewis
2012-01-01
Raters of Georgia's (USA) state-mandated college-level writing exam, which is intended to ensure a minimal university-level writing competency, are trained to grade holistically when assessing these exams. A guiding principle in holistic grading is to not focus exclusively on any one aspect of writing but rather to give equal weight to style,…
Awatani, Takenori; Mori, Seigo; Shinohara, Junji; Koshiba, Hiroya; Nariai, Miki; Tatsumi, Yasutaka; Nagata, Akinori; Morikita, Ikuhiro
2016-03-01
[Purpose] The purpose of the present study was to establish the same-session and between-day intra-rater reliability of measurements of extensor strength in the maximum abducted position (MABP) using a hand-held dynamometer (HHD). [Subjects] Thirteen healthy volunteers (10 male, 3 female; mean ± SD age, 19.8 ± 0.8 y) participated in the study. [Methods] Participants lay prone with the shoulder in maximum abduction and were instructed to hold the contraction against the ground reaction force; peak isometric force was recorded using the HHD on the floor. Participants performed maximum isometric contractions lasting 3 s, with 3 trials in one session. Between-day measurements were performed in 2 sessions separated by a 1-week interval. Intra-rater reliability was determined using intraclass correlation coefficients (ICCs). Systematic errors were assessed using Bland-Altman analysis of the between-day data. [Results] ICC values for same-session and between-day data were found to be "almost perfect". No systematic error was present; only random error was observed. [Conclusion] The measurement method used in this study can easily control for experimental conditions and allows precise measurement, because problems with stabilization and the influence of tester strength are removed. Thus, measurement of extensor strength in the MABP is useful for muscle strength assessment.
Esser, Patrick; Dawes, Helen; Collett, Johnny; Feltham, Max G; Howells, Ken
2012-03-30
Walking models driven by centre of mass (CoM) data obtained from inertial measurement units (IMUs) or optical motion capture systems (OMCS) can be used to objectively measure gait. However, current models have only been validated in typically developed adults (TDA). The purpose of this study was to compare the projected CoM movement in people with Parkinson's disease (PD) measured by an IMU with data collected from an OMCS, after which spatio-temporal gait measures were derived using an inverted pendulum model. The inter-rater reliability of spatio-temporal parameters was explored between expert researchers and clinicians using the IMU-processed data. Participants walked 10 m with an IMU attached over their centre of mass, which was simultaneously recorded by an OMCS. Data were collected on two occasions, each time by an expert researcher and a clinician. Ten people with PD showed no difference (p=0.13) in vertical translatory acceleration, velocity, and relative position of the projected centre of mass between IMU and OMCS data. Furthermore, no difference (p=0.18) was found for the derived step time, stride length, and walking speed for people with PD. Measurements of step time (p=0.299), stride length (p=0.883), and walking speed (p=0.751) did not differ between experts and clinicians. There was good inter-rater reliability for these parameters (ICC3.1=0.979, ICC3.1=0.958, and ICC3.1=0.978, respectively). The findings are encouraging and support the use of IMUs by clinicians to measure CoM movement in people with PD.
Surface temperature measurement errors
Keltner, N.R.; Beck, J.V.
1983-05-01
Mathematical models are developed for the response of surface mounted thermocouples on a thick wall. These models account for the significant causes of errors in both the transient and steady-state response to changes in the wall temperature. In many cases, closed form analytical expressions are given for the response. The cases for which analytical expressions are not obtained can be easily evaluated on a programmable calculator or a small computer.
Statistical fusion of surface labels provided by multiple raters
NASA Astrophysics Data System (ADS)
Bogovic, John A.; Landman, Bennett A.; Bazin, Pierre-Louis; Prince, Jerry L.
2010-03-01
Studies of the size and morphology of anatomical structures rely on accurate and reproducible delineation of the structures, obtained either by human raters or automatic segmentation algorithms. Measures of reproducibility and variability are vital aspects of such studies and are usually estimated using repeated scans or repeated delineations (in the case of human raters). Methods exist for simultaneously estimating the true structure and rater performance parameters from multiple segmentations and have been demonstrated on volumetric images. In this work, we extend the applicability of previous methods onto two-dimensional surfaces parameterized as triangle meshes. Label homogeneity is enforced using a Markov random field formulated with an energy that addresses the challenges introduced by the surface parameterization. The method was tested using both simulated raters and cortical gyral labels. Simulated raters are computed using a global error model as well as a novel and more realistic boundary error model. We study the impact of raters and their accuracy based on both models, and show how effectively this method estimates the true segmentation on simulated surfaces. The Markov random field formulation was shown to effectively enforce homogeneity for raters suffering from label noise. We demonstrated that our method provides substantial improvements in accuracy over single-atlas methods for all experimental conditions.
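The statistical fusion methods described above generalize simple per-vertex majority voting, which is the baseline such estimators improve upon by also modeling each rater's accuracy. A minimal majority-vote sketch over hypothetical label maps; the actual method additionally estimates rater performance parameters and enforces spatial homogeneity, which this sketch omits:

```python
import numpy as np

def majority_vote(labelings):
    """Fuse multiple raters' label maps by per-element majority vote.
    labelings: (n_raters, n_vertices) integer label array.
    Ties resolve to the lowest label index."""
    labelings = np.asarray(labelings)
    n_labels = labelings.max() + 1
    # Count votes per label at each vertex, then pick the most frequent
    votes = np.stack([(labelings == lab).sum(axis=0) for lab in range(n_labels)])
    return votes.argmax(axis=0)

# Three hypothetical raters labeling 6 mesh vertices with labels {0, 1}
raters = [[0, 1, 1, 0, 1, 0],
          [0, 1, 0, 0, 1, 1],
          [0, 0, 1, 0, 1, 0]]
print(majority_vote(raters).tolist())
```

Majority voting weights all raters equally; performance-weighted fusion downweights unreliable raters and can recover the true labeling even when no single rater is consistently correct.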
Participant, Rater, and Computer Measures of Coherence in Posttraumatic Stress Disorder
Rubin, David C.; Deffler, Samantha A.; Ogle, Christin M.; Dowell, Nia M.; Graesser, Arthur C.; Beckham, Jean C.
2015-01-01
We examined the coherence of trauma memories in a trauma-exposed community sample of 30 adults with and 30 without PTSD. The groups had similar categories of traumas and were matched on multiple factors that could affect the coherence of memories. We compared the transcribed oral trauma memories of participants with their most important and most positive memories. A comprehensive set of 28 measures of coherence including 3 ratings by the participants, 7 ratings by outside raters, and 18 computer-scored measures, provided a variety of approaches to defining and measuring coherence. A MANOVA indicated differences in coherence among the trauma, important, and positive memories, but not between the diagnostic groups or their interaction with these memory types. Most differences were small in magnitude; in some cases, the trauma memories were more, rather than less, coherent than the control memories. Where differences existed, the results agreed with the existing literature, suggesting that factors other than the incoherence of trauma memories are most likely to be central to the maintenance of PTSD and thus its treatment. PMID:26523945
Intra- and inter-rater reliability of digital image analysis for skin color measurement
Sommers, Marilyn; Beacham, Barbara; Baker, Rachel; Fargo, Jamison
2013-01-01
Background We determined the intra- and inter-rater reliability of data from digital image color analysis between an expert and novice analyst. Methods Following training, the expert and novice independently analyzed 210 randomly ordered images. Both analysts used Adobe® Photoshop lasso or color sampler tools based on the type of image file. After color correction with PictoColor® inCamera software, they recorded L*a*b* (L*=light/dark; a*=red/green; b*=yellow/blue) color values for all skin sites. We computed intra-rater and inter-rater agreement within anatomical region, color value (L*, a*, b*), and technique (lasso, color sampler) using a series of one-way intra-class correlation coefficients (ICCs). Results Results of ICCs for intra-rater agreement showed high levels of internal consistency reliability within each rater for the lasso technique (ICC ≥ 0.99) and a somewhat lower, yet acceptable, level of agreement for the color sampler technique (ICC = 0.91 for expert, ICC = 0.81 for novice). Skin L*, skin b*, and labia L* values reached the highest level of agreement (ICC ≥ 0.92), and skin a*, labia b*, and vaginal wall b* were the lowest (ICC ≥ 0.64). Conclusion Data from novice analysts can achieve high levels of agreement with data from expert analysts with training and the use of a detailed, standard protocol. PMID:23551208
The Effects of Rater Training on Inter-Rater Agreement
ERIC Educational Resources Information Center
Pufpaff, Lisa A.; Clarke, Laura; Jones, Ruth E.
2015-01-01
This paper addresses the effects of rater training on the rubric-based scoring of three preservice teacher candidate performance assessments. This project sought to evaluate the consistency of ratings assigned to student learning outcome measures being used for program accreditation and to explore the need for rater training in order to increase…
ERIC Educational Resources Information Center
Murphy, Daniel L.; Beretvas, S. Natasha
2015-01-01
This study examines the use of cross-classified random effects models (CCrem) and cross-classified multiple membership random effects models (CCMMrem) to model rater bias and estimate teacher effectiveness. Effect estimates are compared using classical test theory (CTT) versus item response theory (IRT) scaling methods and three models (i.e., conventional multilevel…
Sršen, Katja Groleger; Vidmar, Gaj; Pikl, Maša; Vrečar, Irena; Burja, Cirila; Krušec, Klavdija
2012-06-01
The Halliwick concept is widely used in different settings to promote joyful movement in water and swimming. To assess the swimming skills and progression of an individual swimmer, a valid and reliable measure should be used. The Halliwick-concept-based Swimming with Independent Measure (SWIM) was introduced for this purpose. We aimed to determine its content validity and inter-rater reliability. Fifty-four healthy children, 3.5-11 years old, from a mainstream swimming program participated in a content validity study. They were evaluated with SWIM and the national evaluation system of swimming abilities (classifying children into seven categories). To study the inter-rater reliability of SWIM, we included 37 children and youth from a Halliwick swimming program, aged 7-22 years, who were evaluated by two Halliwick instructors independently. The average SWIM score differed between national evaluation system categories and followed the expected order (P<0.001), whereby a ceiling effect was observed in the higher categories. High inter-rater reliability was found for all 11 SWIM items. The lowest reliability was observed for item G (sagittal rotation), although the estimates were still above 0.9. As expected, the highest reliability was observed for the total score (intraclass correlation 0.996). The validity of SWIM with respect to the national evaluation system of swimming abilities is high until the point where a swimmer is well adapted to water and already able to learn some swimming techniques. The inter-rater reliability of SWIM is very high; thus, we believe that SWIM can be used in further research and practice to follow the progress of swimmers.
Measuring Test Measurement Error: A General Approach
ERIC Educational Resources Information Center
Boyd, Donald; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James
2013-01-01
Test-based accountability as well as value-added assessments and much experimental and quasi-experimental research in education rely on achievement tests to measure student skills and knowledge. Yet, we know little regarding fundamental properties of these tests, an important example being the extent of measurement error and its implications for…
Better Stability with Measurement Errors
NASA Astrophysics Data System (ADS)
Argun, Aykut; Volpe, Giovanni
2016-06-01
Often it is desirable to stabilize a system around an optimal state. This can be effectively accomplished using feedback control, where the system deviation from the desired state is measured in order to determine the magnitude of the restoring force to be applied. Contrary to conventional wisdom, i.e. that a more precise measurement is expected to improve the system stability, here we demonstrate that a certain degree of measurement error can improve the system stability. We exemplify the implications of this finding with numerical examples drawn from various fields, such as the operation of a temperature controller, the confinement of a microscopic particle, the localization of a target by a microswimmer, and the control of a population.
Improved Error Thresholds for Measurement-Free Error Correction
NASA Astrophysics Data System (ADS)
Crow, Daniel; Joynt, Robert; Saffman, M.
2016-09-01
Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10-3 to 10-4—comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.
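The quadratic suppression of error rates that underlies thresholds like those above can be illustrated with the classical analogue of the bit-flip code: a 3-bit repetition code decoded by majority vote fails only when two or more bits flip, so its logical error rate scales as roughly 3p² for small physical error rate p. A Monte Carlo sketch (purely classical and illustrative; it does not model the quantum codes or coherent correction discussed in the abstract):

```python
import random

def logical_error_rate(p, trials=200_000, seed=1):
    """Monte Carlo estimate of the logical error rate of a classical
    3-bit repetition code with independent bit-flip probability p.
    Majority-vote decoding fails when >= 2 of the 3 bits flip."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(3))
        failures += flips >= 2
    return failures / trials

p = 0.01
print(logical_error_rate(p), 3 * p**2)  # estimate vs. the analytic ~3p^2
```

Below threshold, encoding helps (3p² < p whenever p < 1/3 here); concatenating the code multiplies the exponent, which is why small constant-factor threshold improvements matter so much in practice.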
ERIC Educational Resources Information Center
Bock, Douglas G.; And Others
1984-01-01
This study (1) demonstrates the negative impact of profanity in a public speech and (2) sheds light on the conceptualization of the term "rating error." Implications for classroom teaching are discussed. (PD)
Measurement Error and Equating Error in Power Analysis
ERIC Educational Resources Information Center
Phillips, Gary W.; Jiang, Tao
2016-01-01
Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…
2012-01-01
Background: Assessment of range of motion (ROM) and muscle strength is fundamental in the clinical diagnosis of hip osteoarthritis (OA), but the reproducibility of these measurements has mostly been studied with clinicians from secondary care and has rarely been reported with agreement parameters. Therefore, the primary objective of the study was to determine the inter-rater reproducibility of ROM and muscle strength measurements. Furthermore, the reliability of the overall assessment of clinical hip OA was evaluated. Reporting is in accordance with the proposed Guidelines for Reporting Reliability and Agreement Studies (GRRAS). Methods: In a university hospital, four blinded raters independently examined patients with unilateral hip OA; two hospital orthopaedists independently examined 48 patients (24 men) and two primary care chiropractors examined 61 patients (29 men). ROM was measured in degrees (deg.) with a standard two-arm goniometer and muscle strength in newtons (N) using a hand-held dynamometer. Reproducibility is reported as agreement and reliability between paired raters of the same profession: agreement as limits of agreement (LoA), reliability as intraclass correlation coefficients (ICCs), and reliability of the overall assessment of clinical OA as weighted kappa. Results: Between orthopaedists, agreement for ROM ranged from LoA of [-28 to 12 deg.] for internal rotation to [-8 to 13 deg.] for extension; ICCs ranged between 0.53 and 0.73, highest for flexion. For muscle strength between orthopaedists, LoA ranged from [-65 to 47 N] for external rotation to [-10 to 59 N] for flexion; ICCs ranged between 0.52 and 0.85, highest for abduction. Between chiropractors, agreement for ROM ranged from LoA of [-25 to 30 deg.] for internal rotation to [-13 to 21 deg.] for flexion; ICCs ranged between 0.14 and 0.79, highest for flexion. For muscle strength between chiropractors, LoA ranged from [-80 to 20 N] for external rotation to [-146 to 55 N] for abduction. ICC
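The Bland-Altman limits of agreement used in studies like the one above are the mean inter-rater difference (bias) plus or minus 1.96 standard deviations of the differences. A minimal sketch with synthetic paired ratings (all variable names and numbers are illustrative, not the study's data):

```python
import numpy as np

# Synthetic paired ROM readings (deg.) from two hypothetical raters
rng = np.random.default_rng(0)
true_rom = rng.normal(30, 8, size=48)
rater_a = true_rom + rng.normal(0, 3, size=48)
rater_b = true_rom + rng.normal(1, 3, size=48)  # rater B reads ~1 deg. higher

diff = rater_a - rater_b
bias = diff.mean()                                # systematic difference
half_width = 1.96 * diff.std(ddof=1)
loa_low, loa_high = bias - half_width, bias + half_width
print(f"bias = {bias:.1f} deg., LoA = [{loa_low:.1f} to {loa_high:.1f} deg.]")
```

The width of the LoA interval, not the ICC alone, is what tells a clinician whether two raters can be used interchangeably.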
Jalan, Nikita S; Daftari, Sonam S; Retharekar, Seemi S; Rairikar, Savita A; Shyam, Ashok M; Sancheti, Parag K
2015-01-01
BACKGROUND: Measurement of maximum inspiratory pressure is the most prevalent method used in clinical practice to assess the strength of the inspiratory muscles. Although there are many devices available for the assessment of inspiratory muscle strength, there is a dearth of literature describing the reliability of devices that can be used in clinical patient assessment. The capsule-sensing pressure gauge (CSPG-V) is a new tool that measures the strength of inspiratory muscles; it is easy to use, noninvasive, inexpensive and lightweight. OBJECTIVE: To test the intra- and inter-rater reliability of a CSPG-V device in healthy adults. METHODS: A cross-sectional study involving 80 adult subjects with a mean (± SD) age of 22±3 years was performed. Using simple randomization, 40 individuals (20 male, 20 female) were used for intra-rater and 40 (20 male, 20 female) were used for inter-rater reliability testing of the CSPG-V device. The subjects performed three inspiratory efforts, which were sustained for at least 3 s; the best of the three readings was used for intra- and inter-rater comparison. The intra- and inter-rater reliability were calculated using intraclass correlation coefficients (ICCs). RESULTS: The intra-rater reliability ICC was 0.962 and the inter-rater reliability ICC was 0.922. CONCLUSION: Results of the present study suggest that maximum inspiratory pressure measured using a CSPG-V device has excellent intra- and inter-rater reliability, and can be used as a diagnostic and prognostic tool in patients with respiratory muscle impairment. PMID:26089737
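ICCs like those reported above are computed from a subjects-by-raters score matrix. A minimal sketch of one common form, the two-way random-effects, absolute-agreement, single-rater ICC(2,1) of Shrout and Fleiss (the exact ICC form used by a given study may differ; the data below are invented):

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    x is an (n subjects) x (k raters) array of scores."""
    n, k = x.shape
    grand = x.mean()
    ms_r = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
    ms_c = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
    sse = ((x - x.mean(axis=1, keepdims=True)
              - x.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_e = sse / ((n - 1) * (k - 1))                            # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Two raters scoring the same 6 subjects with near-identical values -> ICC near 1
scores = np.array([[10.0, 10.5], [12.0, 11.5], [8.0, 8.5],
                   [15.0, 15.0], [9.0, 9.5], [13.0, 12.5]])
print(round(icc_2_1(scores), 3))
```

High ICCs require large between-subject variance relative to rater disagreement, which is why reliability can look excellent in heterogeneous samples and mediocre in homogeneous ones.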
ERIC Educational Resources Information Center
Douglas, Scott Roy
2015-01-01
Independent confirmation that vocabulary in use unfolds across levels of performance as expected can contribute to a more complete understanding of validity in standardized English language tests. This study examined the relationship between Lexical Frequency Profiling (LFP) measures and rater judgements of test-takers' overall levels of…
Impact of Measurement Error on Synchrophasor Applications
Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.; Zhao, Jiecheng; Tan, Jin; Wu, Ling; Zhan, Lingwei
2015-07-01
Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.
Tuvblad, Catherine; Bezdjian, Serena; Raine, Adrian; Baker, Laura A.
2014-01-01
No study has yet examined the genetic and environmental influences on psychopathic personality across different raters and method of assessment. Participants were part of a community sample of male and female twins born between 1990 and 1995. The Child Psychopathy Scale (CPS) and the Antisocial Process Screening Device (APSD) were administered to the twins and their parents when the twins were 14 to 15 years old. The Psychopathy Checklist: Youth Version (PCL:YV) was administered and scored by trained testers. Results showed that a one-factor common pathway model was the best fit for the data. Genetic influences explained 69% of the variance in the latent psychopathic personality factor, while non-shared environmental influences explained 31%. Measurement-specific genetic effects accounted for between 9% and 35% of the total variance in each of the measures, except for PCL:YV where all genetic influences were in common with the other measures. Measure-specific non-shared environmental influences were found for all measures, explaining between 17% and 56% of the variance. These findings provide further evidence of the heritability in psychopathic personality among adolescents, although these effects vary across the way in which these traits are measured, in terms of both informant and instrument used. PMID:24796343
Comparison of Models and Indices for Detecting Rater Centrality.
Wolfe, Edward W; Song, Tian
2015-01-01
To date, much of the research concerning rater effects has focused on rater severity/leniency. Consequently, other potentially important rater effects have largely been ignored by those conducting operational scoring projects. This simulation study compares four rater centrality indices (rater fit, residual-expected correlations, rater slope, and rater threshold variance) in terms of their Type I and Type II error rates under varying levels of centrality magnitude, centrality pervasiveness, and rating scale construction when each of four latent trait models is fitted to the simulated data (the Rasch rating scale and partial credit models and the generalized rating scale and partial credit models). Results indicate that the residual-expected correlation may be most appropriately sensitive to rater centrality under most conditions.
Measurement error in air pollution exposure assessment.
Navidi, W; Lurmann, F
1995-01-01
The exposure of an individual to an air pollutant can be assessed indirectly, with a "microenvironmental" approach, or directly with a personal sampler. Both methods of assessment are subject to measurement error, which can cause considerable bias in estimates of health effects. If the exposure estimates are unbiased and the measurement error is nondifferential, the bias in a linear model can be corrected when the variance of the measurement error is known. Unless the measurement error is quite large, estimates of health effects based on individual exposures appear to be more accurate than those based on ambient levels.
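The bias correction alluded to above is, for a simple linear model with nondifferential classical error, the standard attenuation correction: the naive slope is shrunk by the reliability ratio lambda = var(X) / (var(X) + var(error)), so dividing by lambda recovers the true slope when the error variance is known. A sketch with simulated exposures (all numbers illustrative):

```python
import numpy as np

# Simulated exposures: true values plus classical, nondifferential error
rng = np.random.default_rng(1)
n = 20000
true_x = rng.normal(50, 10, n)                 # var(X) = 100
measured_x = true_x + rng.normal(0, 10, n)     # known error variance = 100
outcome = 2.0 * true_x + rng.normal(0, 5, n)   # true slope = 2.0

beta_naive = np.cov(measured_x, outcome)[0, 1] / np.var(measured_x, ddof=1)
reliability = 1 - 100 / np.var(measured_x, ddof=1)  # lambda = var(X)/var(W)
beta_corrected = beta_naive / reliability
print(beta_naive, beta_corrected)  # roughly 1.0 (attenuated) vs 2.0
```

With lambda = 0.5 here, the naive health-effect estimate is halved; this is the "considerable bias" the abstract warns about.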
ERIC Educational Resources Information Center
Breyer, F. Jay; Attali, Yigal; Williamson, David M.; Ridolfi-McCulla, Laura; Ramineni, Chaitanya; Duchnowski, Matthew; Harris, April
2014-01-01
In this research, we investigated the feasibility of implementing the "e-rater"® scoring engine as a check score in place of all-human scoring for the "Graduate Record Examinations"® ("GRE"®) revised General Test (rGRE) Analytical Writing measure. This report provides the scientific basis for the use of e-rater as a…
Conditional Standard Error of Measurement in Prediction.
ERIC Educational Resources Information Center
Woodruff, David
1990-01-01
A method of estimating conditional standard error of measurement at specific score/ability levels is described that avoids theoretical problems identified for previous methods. The method focuses on variance of observed scores conditional on a fixed value of an observed parallel measurement, decomposing these variances into true and error parts.…
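For contrast with the conditional approach described above, the classical overall standard error of measurement follows directly from the score SD and the test reliability, SEM = SD * sqrt(1 - reliability); Woodruff's method instead estimates this quantity at specific score levels:

```python
import math

def overall_sem(sd: float, reliability: float) -> float:
    """Classical (overall) standard error of measurement."""
    return sd * math.sqrt(1 - reliability)

# A test with SD = 15 and reliability 0.91 gives an SEM of 4.5 score points
print(overall_sem(15.0, 0.91))
```

The overall SEM understates error near the extremes and overstates it near the mean for many tests, which is the motivation for conditional estimates.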
Error margin for antenna gain measurements
NASA Technical Reports Server (NTRS)
Cable, V.
2002-01-01
The specification of measured antenna gain is incomplete without knowing the error of the measurement. Also, unless gain is measured many times for a single antenna or over many identical antennas, the uncertainty or error in a single measurement is only an estimate. In this paper, we will examine in detail a typical error budget for common antenna gain measurements. We will also compute the gain uncertainty for a specific UHF horn test that was recently performed on the Jet Propulsion Laboratory (JPL) antenna range. The paper concludes with comments on these results and how they compare with the 'unofficial' JPL range standard of +/- ?.
Error latency measurements in symbolic architectures
NASA Technical Reports Server (NTRS)
Young, L. T.; Iyer, R. K.
1991-01-01
Error latency, the time that elapses between the occurrence of an error and its detection, has a significant effect on reliability. In computer systems, failure rates can be elevated during a burst of system activity due to increased detection of latent errors. A hybrid monitoring environment is developed to measure the error latency distribution of errors occurring in main memory. The objective of this study is to develop a methodology for gauging the dependability of individual data categories within a real-time application. The hybrid monitoring technique is novel in that it selects and categorizes a specific subset of the available blocks of memory to monitor. The precise times of reads and writes are collected, so no actual faults need be injected. Unlike previous monitoring studies that rely on a periodic sampling approach or on statistical approximation, this new approach permits continuous monitoring of referencing activity and precise measurement of error latency.
ERIC Educational Resources Information Center
Kahraman, Nilufer; Brown, Crystal B.
2015-01-01
Psychometric models based on structural equation modeling framework are commonly used in many multiple-choice test settings to assess measurement invariance of test items across examinee subpopulations. The premise of the current article is that they may also be useful in the context of performance assessment tests to test measurement invariance…
Awatani, Takenori; Morikita, Ikuhiro; Shinohara, Junji; Mori, Seigo; Nariai, Miki; Tatsumi, Yasutaka; Nagata, Akinori; Koshiba, Hiroya
2016-11-01
[Purpose] The purpose of the present study was to establish the intra- and inter-rater reliability of measurement of extensor strength in the maximum shoulder abducted position and internal rotator strength in the 90° abducted and 90° externally rotated position using a hand-held dynamometer. [Subjects and Methods] Twelve healthy volunteers (12 male; mean ± SD: age 19.0 ± 1.1 years) participated in the study. The examiners were two students who had no clinical experience with hand-held dynamometer measurement. The examiners and participants were blinded to measurement results by the recorder. Participants in the prone position were instructed to hold the contraction against the ground reaction force, and peak isometric force was recorded using the hand-held dynamometer on the floor. Reliability was determined using intraclass correlation coefficients. [Results] The intra- and inter-rater reliability data were found to be "almost perfect". [Conclusion] This study investigated intra- and inter-rater reliability and revealed high reliability. Thus, the measurement method used in the present study can evaluate muscle strength with a simple measurement technique.
Prediction with measurement errors in finite populations
Singer, Julio M; Stanek, Edward J; Lencina, Viviana B; González, Luz Mery; Li, Wenjun; Martino, Silvina San
2011-01-01
We address the problem of selecting the best linear unbiased predictor (BLUP) of the latent value (e.g., serum glucose fasting level) of sample subjects with heteroskedastic measurement errors. Using a simple example, we compare the usual mixed model BLUP to a similar predictor based on a mixed model framed in a finite population (FPMM) setup with two sources of variability, the first of which corresponds to simple random sampling and the second, to heteroskedastic measurement errors. Under this last approach, we show that when measurement errors are subject-specific, the BLUP shrinkage constants are based on a pooled measurement error variance as opposed to the individual ones generally considered for the usual mixed model BLUP. In contrast, when the heteroskedastic measurement errors are measurement condition-specific, the FPMM BLUP involves different shrinkage constants. We also show that in this setup, when measurement errors are subject-specific, the usual mixed model predictor is biased but has a smaller mean squared error than the FPMM BLUP which point to some difficulties in the interpretation of such predictors. PMID:22162621
Power Measurement Errors on a Utility Aircraft
NASA Technical Reports Server (NTRS)
Bousman, William G.
2002-01-01
Extensive flight test data obtained from two recent performance tests of a UH-60A aircraft are reviewed. A power difference is calculated from the power balance equation and is used to examine power measurement errors. It is shown that the baseline measurement errors are highly non-Gaussian in their frequency distribution and are therefore influenced by additional, unquantified variables. Linear regression is used to examine the influence of other variables and it is shown that a substantial portion of the variance depends upon measurements of atmospheric parameters. Correcting for temperature dependence, although reducing the variance in the measurement errors, still leaves unquantified effects. Examination of the power difference over individual test runs indicates significant errors from drift, although it is unclear how these may be corrected. In an idealized case, where the drift is correctable, it is shown that the power measurement errors are significantly reduced and the error distribution is Gaussian. A new flight test program is recommended that will quantify the thermal environment for all torque measurements on the UH-60. Subsequently, the torque measurement systems will be recalibrated based on the measured thermal environment and a new power measurement assessment performed.
Honing in on the Social Phenotype in Williams Syndrome Using Multiple Measures and Multiple Raters
ERIC Educational Resources Information Center
Klein-Tasman, Bonita P.; Li-Barber, Kirsten T.; Magargee, Erin T.
2011-01-01
The behavioral phenotype of Williams syndrome (WS) is characterized by difficulties with establishment and maintenance of friendships despite high levels of interest in social interaction. Here, parents and teachers rated 84 children with WS ages 4-16 years using two commonly-used measures assessing aspects of social functioning: the Social Skills…
Schless, Simon-Henri; Desloovere, Kaat; Aertbeliën, Erwin; Molenaers, Guy; Huenaerts, Catherine; Bar-On, Lynn
2015-01-01
Aim Despite the impact of spasticity, there is a lack of objective, clinically reliable and valid tools for its assessment. This study aims to evaluate the reliability of various performance- and spasticity-related parameters collected with a manually controlled instrumented spasticity assessment in four lower limb muscles in children with cerebral palsy (CP). Method The lateral gastrocnemius, medial hamstrings, rectus femoris and hip adductors of 12 children with spastic CP (12.8 years, ±4.13 years, bilateral/unilateral involvement n=7/5) were passively stretched in the sagittal plane at incremental velocities. Muscle activity, joint motion, and torque were synchronously recorded using electromyography, inertial sensors, and a force/torque load-cell. Reliability was assessed on three levels: (1) intra- and (2) inter-rater within session, and (3) intra-rater between session. Results Parameters were found to be reliable in all three analyses, with 90% containing intra-class correlation coefficients >0.6, and 70% of standard error of measurement values <20% of the mean values. The most reliable analysis was intra-rater within session, followed by intra-rater between session, and then inter-rater within session. The Adds evaluation had a slightly lower level of reliability than that of the other muscles. Conclusions Limited intrinsic/extrinsic errors were introduced by repeated stretch repetitions. The parameters were more reliable when the same rater, rather than different raters performed the evaluation. Standardisation and training should be further improved to reduce extrinsic error when different raters perform the measurement. Errors were also muscle specific, or related to the measurement set-up. They need to be accounted for, in particular when assessing pre-post interventions or longitudinal follow-up. The parameters of the instrumented spasticity assessment demonstrate a wide range of applications for both research and clinical environments in the
Measuring Cyclic Error in Laser Heterodyne Interferometers
NASA Technical Reports Server (NTRS)
Ryan, Daniel; Abramovici, Alexander; Zhao, Feng; Dekens, Frank; An, Xin; Azizi, Alireza; Chapsky, Jacob; Halverson, Peter
2010-01-01
An improved method and apparatus have been devised for measuring cyclic errors in the readouts of laser heterodyne interferometers that are configured and operated as displacement gauges. The cyclic errors arise as a consequence of mixing of spurious optical and electrical signals in beam launchers that are subsystems of such interferometers. The conventional approach to measurement of cyclic error involves phase measurements and yields values precise to within about 10 pm over air optical paths at laser wavelengths in the visible and near infrared. The present approach, which involves amplitude measurements instead of phase measurements, yields values precise to about 0.1 pm, about 100 times the precision of the conventional approach. In a displacement gauge of the type of interest here, the laser heterodyne interferometer is used to measure any change in distance along an optical axis between two corner-cube retroreflectors. One of the corner-cube retroreflectors is mounted on a piezoelectric transducer (see figure), which is used to introduce a low-frequency periodic displacement that can be measured by the gauges. The transducer is excited at a frequency of 9 Hz by a triangular waveform to generate a 9-Hz triangular-wave displacement having an amplitude of 25 microns. The displacement gives rise to both amplitude and phase modulation of the heterodyne signals in the gauges. The modulation includes cyclic error components, and the magnitude of the cyclic-error component of the phase modulation is what one needs to measure in order to determine the magnitude of the cyclic displacement error. The precision attainable in the conventional (phase measurement) approach to measuring cyclic error is limited because the phase measurements are af-
Connors, Brenda L.; Rende, Richard; Colton, Timothy J.
2014-01-01
The unique yield of collecting observational data on human movement has received increasing attention in a number of domains, including the study of decision-making style. As such, interest has grown in the nuances of core methodological issues, including the best ways of assessing inter-rater reliability. In this paper we focus on one key topic – the distinction between establishing reliability for the patterning of behaviors as opposed to the computation of raw counts – and suggest that reliability for each be compared empirically rather than determined a priori. We illustrate by assessing inter-rater reliability for key outcome measures derived from movement pattern analysis (MPA), an observational methodology that records body movements as indicators of decision-making style with demonstrated predictive validity. While reliability ranged from moderate to good for raw counts of behaviors reflecting each of two Overall Factors generated within MPA (Assertion and Perspective), inter-rater reliability for patterning (proportional indicators of each factor) was significantly higher and excellent (ICC = 0.89). Furthermore, patterning, as compared to raw counts, provided better prediction of observable decision-making process assessed in the laboratory. These analyses support the utility of using an empirical approach to inform the consideration of measuring patterning versus discrete behavioral counts of behaviors when determining inter-rater reliability of observable behavior. They also speak to the substantial reliability that may be achieved via application of theoretically grounded observational systems such as MPA that reveal thinking and action motivations via visible movement patterns. PMID:24999336
Gear Transmission Error Measurement System Made Operational
NASA Technical Reports Server (NTRS)
Oswald, Fred B.
2002-01-01
A system directly measuring the transmission error between meshing spur or helical gears was installed at the NASA Glenn Research Center and made operational in August 2001. This system employs light beams directed by lenses and prisms through gratings mounted on the two gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. The device is capable of resolution better than 0.1 μm (one thousandth the thickness of a human hair). The measured transmission error can be displayed in a "map" that shows how the transmission error varies with the gear rotation, or it can be converted to spectra to show the components at the meshing frequencies. Accurate transmission error data will help researchers better understand the mechanisms that cause gear noise and vibration and will lead to quieter, more reliable gear designs. The Design Unit at the University of Newcastle in England specifically designed the new system for NASA. It is the only device in the United States that can measure dynamic transmission error at high rotational speeds. The new system will be used to develop new techniques to reduce dynamic transmission error along with the resulting noise and vibration of aeronautical transmissions.
Errors of measurement by laser goniometer
NASA Astrophysics Data System (ADS)
Agapov, Mikhail Y.; Bournashev, Milhail N.
2000-11-01
This report investigates the systematic errors of angle measurement by a ring-laser (RL) based dynamic laser goniometer (DLG) intended for the certification of optical angle encoders (OE), and develops methods for separating errors of different types and compensating for them algorithmically. The OE was an absolute photoelectric angle encoder with an informational capacity of 14 bits. Kinematic connection with a rotary platform was made through a mechanical connection unit (CU). The systematic error was measured and separated into components by a cross-calibration method, with mutual rotations of the OE relative to the DLG base and of the CU relative to the OE rotor, followed by Fourier analysis of the observed data. Dynamic errors of angle measurement were studied through the dependence on angular rotation rate of the measured angle between a reference direction, assigned by an interference null-indicator (NI) with an 8-faced optical polygon (OP), and the direction defined by the OE. The results allow algorithmic compensation of the systematic error and thereby a considerable reduction of the total measurement error.
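The Fourier step in such a cross-calibration can be sketched as sampling the systematic error over one full rotation and reading harmonic amplitudes off the FFT (the harmonic content below is invented for illustration, not taken from the report):

```python
import numpy as np

# Hypothetical systematic angle-error profile over one full rotation:
# a first harmonic (e.g. eccentricity) plus a second harmonic (arcsec).
n = 1024
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
error = 3.0 * np.sin(theta) + 1.0 * np.cos(2 * theta)

amps = np.abs(np.fft.rfft(error)) * 2 / n  # one-sided amplitude spectrum
print(amps[1], amps[2])  # harmonic amplitudes, approx. 3.0 and 1.0
```

Once the harmonic amplitudes and phases are known, the systematic component can be subtracted from subsequent measurements, which is the algorithmic compensation the abstract describes.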
Measurement process error determination and control
Everhart, J.
1992-01-01
Traditional production processes have required repeated inspection activities to assure product quality. A typical production process follows this pattern: production makes the product; production inspects the product; Quality Control (QC) inspects the product to ensure production inspected it properly; QC then inspects the product on a different gage to verify the performance of the production gage; and QC often inspects on a different day to determine environmental effects. All of these costly inspection activities are due to a lack of confidence in the initial production measurement. The Process Measurement Assurance Program (PMAP) is a method of determining and controlling measurement error in design, development, and production. It is a preventive rather than an appraisal method that determines, improves, and controls the error in the measurement process, including measurement equipment, environment, procedure, and personnel. PMAP expands the concept of the Measurement Assurance Program developed in the 1960s by the National Bureau of Standards (NBS), known today as the National Institute of Standards and Technology (NIST). PMAP bridges the gap between the metrology laboratory and the production environment by introducing standards (or certified parts) into the production process. These certified control standards are then measured as part of the production process. A control system examines the measurement results of the control standards before, during, and after the manufacturing and measuring of the product. The results of the PMAP control charts determine the random uncertainty and the systematic error (bias from the standard) of the measurement process. The combination of these uncertainties determines the margin of error of the measurement process. The total measurement process error is determined by combining the margin of error and the uncertainty in the control standard.
Technical approaches for measurement of human errors
NASA Technical Reports Server (NTRS)
Clement, W. F.; Heffley, R. K.; Jewell, W. F.; Mcruer, D. T.
1980-01-01
Human error is a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents. The technical details of a variety of proven approaches for the measurement of human errors in the context of the national airspace system are presented. Unobtrusive measurements suitable for cockpit operations and procedures in part- or full-mission simulation are emphasized. Procedure, system performance, and human operator centered measurements are discussed as they apply to the manual control, communication, supervisory, and monitoring tasks relevant to aviation operations.
ERIC Educational Resources Information Center
Schuster, Christof
2004-01-01
This article presents a formula for weighted kappa in terms of rater means, rater variances, and the rater covariance that is particularly helpful in emphasizing that weighted kappa is an absolute agreement measure in the sense that it is sensitive to differences in rater's marginal distributions. Specifically, rater mean differences will decrease…
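For quadratic weights, the moment formula referred to above is kappa_w = 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2), which makes the sensitivity to differences in the raters' marginal distributions explicit. A small numerical check with invented ratings:

```python
import numpy as np

def quadratic_weighted_kappa(x, y):
    """Weighted kappa (quadratic weights) from rater moments:
    2*cov / (var_x + var_y + squared difference of rater means).
    The mean-difference term makes it an absolute agreement measure."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = ((x - x.mean()) * (y - y.mean())).mean()  # population covariance
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

a = [1, 2, 3, 4, 5, 3, 2, 4]
b = [1, 2, 3, 4, 5, 3, 2, 4]  # identical ratings
c = [2, 3, 4, 5, 6, 4, 3, 5]  # same ordering, but mean shifted by 1
print(quadratic_weighted_kappa(a, b), quadratic_weighted_kappa(a, c))  # 1.0 0.75
```

Rater c agrees with rater a perfectly in relative terms, yet kappa drops to 0.75 purely because of the marginal mean difference, exactly the property the article emphasizes.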
Neutron multiplication error in TRU waste measurements
Veilleux, John; Stanfield, Sean B; Wachter, Joe; Ceo, Bob
2009-01-01
Total Measurement Uncertainty (TMU) in neutron assays of transuranic waste (TRU) are comprised of several components including counting statistics, matrix and source distribution, calibration inaccuracy, background effects, and neutron multiplication error. While a minor component for low plutonium masses, neutron multiplication error is often the major contributor to the TMU for items containing more than 140 g of weapons grade plutonium. Neutron multiplication arises when neutrons from spontaneous fission and other nuclear events induce fissions in other fissile isotopes in the waste, thereby multiplying the overall coincidence neutron response in passive neutron measurements. Since passive neutron counters cannot differentiate between spontaneous and induced fission neutrons, multiplication can lead to positive bias in the measurements. Although neutron multiplication can only result in a positive bias, it has, for the purpose of mathematical simplicity, generally been treated as an error that can lead to either a positive or negative result in the TMU. While the factors that contribute to neutron multiplication include the total mass of fissile nuclides, the presence of moderating material in the matrix, the concentration and geometry of the fissile sources, and other factors; measurement uncertainty is generally determined as a function of the fissile mass in most TMU software calculations because this is the only quantity determined by the passive neutron measurement. Neutron multiplication error has a particularly pernicious consequence for TRU waste analysis because the measured Fissile Gram Equivalent (FGE) plus twice the TMU error must be less than 200 for TRU waste packaged in 55-gal drums and less than 325 for boxed waste. For this reason, large errors due to neutron multiplication can lead to increased rejections of TRU waste containers. This report will attempt to better define the error term due to neutron multiplication and arrive at values that are
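The acceptance rule described above can be sketched as a quadrature combination of independent error components into a TMU, followed by the FGE + 2*TMU test; actual TMU software treats the components in more detail, so this is a simplified illustration with made-up component values:

```python
import math

def passes_fge_limit(fge, error_components, limit=200.0):
    """Combine independent 1-sigma error components in quadrature into a
    TMU, then apply the FGE + 2*TMU < limit rule (200 for 55-gal drums)."""
    tmu = math.sqrt(sum(e ** 2 for e in error_components))
    return fge + 2 * tmu < limit

# counting stats, matrix/source distribution, calibration, multiplication
print(passes_fge_limit(150, [5, 8, 4, 10]))  # True
print(passes_fge_limit(150, [5, 8, 4, 40]))  # False: multiplication dominates
```

Because multiplication error enters the quadrature sum like any other term, an inflated multiplication estimate can single-handedly push a drum over the limit, which is the rejection problem the report targets.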
Measurement System Characterization in the Presence of Measurement Errors
NASA Technical Reports Server (NTRS)
Commo, Sean A.
2012-01-01
In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.
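The modified least squares method itself is not specified in the abstract, but the closely related Deming regression shows how a known variance ratio enters an errors-in-variables slope estimate. This is a generic sketch of that standard technique, not the author's method:

```python
import numpy as np

def deming_slope(x, y, delta=1.0):
    """Errors-in-variables slope for a known variance ratio
    delta = (response error variance) / (measurement error variance)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    disc = np.sqrt((syy - delta * sxx) ** 2 + 4.0 * delta * sxy ** 2)
    return (syy - delta * sxx + disc) / (2.0 * sxy)

# On exact data y = 2x the slope is recovered for any variance ratio.
x = np.arange(10.0)
print(round(deming_slope(x, 2.0 * x), 6))  # 2.0
```

Unlike ordinary least squares, the estimate changes with `delta` on noisy data, which is exactly the knowledge about measurement error that the abstract says OLS cannot use.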
Multiple Indicators, Multiple Causes Measurement Error Models
Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.; Carroll, Raymond J.
2014-01-01
Multiple Indicators, Multiple Causes Models (MIMIC) are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times however when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this paper are: (1) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model, (2) to develop likelihood based estimation methods for the MIMIC ME model, (3) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. As a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure. PMID:24962535
Algorithmic Error Correction of Impedance Measuring Sensors
Starostenko, Oleg; Alarcon-Aquino, Vicente; Hernandez, Wilmar; Sergiyenko, Oleg; Tyrsa, Vira
2009-01-01
This paper describes novel design concepts and advanced techniques proposed for increasing the accuracy of low-cost impedance measuring devices without reducing operational speed. The proposed structural method for algorithmic error correction and the iterating correction method provide linearization of the transfer functions of the measuring sensor and the signal conditioning converter, which contribute the principal additive and relative measurement errors. Several measuring systems have been implemented in order to estimate the performance of the proposed methods in practice. In particular, a measuring system for the analysis of C-V and G-V characteristics was designed and constructed, and was tested during process control of charge-coupled device (CCD) manufacturing. The obtained results are discussed in order to define a reasonable range of application for the proposed methods, their utility, and their performance. PMID:22303177
New Gear Transmission Error Measurement System Designed
NASA Technical Reports Server (NTRS)
Oswald, Fred B.
2001-01-01
The prime source of vibration and noise in a gear system is the transmission error between the meshing gears. Transmission error is caused by manufacturing inaccuracy, mounting errors, and elastic deflections under load. Gear designers often attempt to compensate for transmission error by modifying gear teeth. This is done traditionally by a rough "rule of thumb" or more recently under the guidance of an analytical code. In order for a designer to have confidence in a code, the code must be validated through experiment. NASA Glenn Research Center contracted with the Design Unit of the University of Newcastle in England for a system to measure the transmission error of spur and helical test gears in the NASA Gear Noise Rig. The new system measures transmission error optically by means of light beams directed by lenses and prisms through gratings mounted on the gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. A photodetector circuit converts the light to an analog electrical signal. To increase accuracy and reduce "noise" due to transverse vibration, there are parallel light paths at the top and bottom of the gears. The two signals are subtracted via differential amplifiers in the electronics package. The output of the system is 40 mV/mm, giving a resolution in the time domain of better than 0.1 mm, and discrimination in the frequency domain of better than 0.01 mm. The new system will be used to validate gear analytical codes and to investigate mechanisms that produce vibration and noise in parallel axis gears.
Improving Localization Accuracy: Successive Measurements Error Modeling
Abu Ali, Najah; Abu-Elkheir, Mervat
2015-01-01
Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements and then incorporate this correlation into the modeling of positioning error. We use the Yule-Walker equations to determine the degree of correlation between a vehicle's future position and its past positions, and then propose a p-order Gauss-Markov model to predict the future position of a vehicle from its past p positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can extend up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss-Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle's future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter. PMID:26140345
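The Yule-Walker step described above can be sketched minimally: estimate AR(p) coefficients from autocovariances and predict one step ahead. This is an illustration with simulated data, assuming the standard biased autocovariance estimator, not the paper's implementation:

```python
import numpy as np

def yule_walker(x, p):
    """Estimate AR(p) coefficients by solving the Yule-Walker
    equations with biased autocovariance estimates."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    # Autocovariances r[0..p]
    r = np.array([x[:n - k] @ x[k:] / n for k in range(p + 1)])
    # Toeplitz system R a = r[1:]
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:])

# Simulated first-order Gauss-Markov (AR(1)) track, coefficient 0.9.
rng = np.random.default_rng(0)
x = np.zeros(5000)
for t in range(1, 5000):
    x[t] = 0.9 * x[t - 1] + rng.standard_normal()
a = yule_walker(x, p=1)
print(a)  # close to [0.9]
```

With `p = 1` this is exactly the first-order Gauss-Markov predictor the paper finds sufficient: the next position is forecast from the current one alone.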
Application of Uniform Measurement Error Distribution
2016-03-18
Body shape preferences: associations with rater body shape and sociosexuality.
Price, Michael E; Pound, Nicholas; Dunn, James; Hopkins, Sian; Kang, Jinsheng
2013-01-01
There is accumulating evidence of condition-dependent mate choice in many species, that is, individual preferences varying in strength according to the condition of the chooser. In humans, for example, people with more attractive faces/bodies, and who are higher in sociosexuality, exhibit stronger preferences for attractive traits in opposite-sex faces/bodies. However, previous studies have tended to use only relatively simple, isolated measures of rater attractiveness. Here we use 3D body scanning technology to examine associations between strength of rater preferences for attractive traits in opposite-sex bodies, and raters' body shape, self-perceived attractiveness, and sociosexuality. For 118 raters and 80 stimuli models, we used a 3D scanner to extract body measurements associated with attractiveness (male waist-chest ratio [WCR], female waist-hip ratio [WHR], and volume-height index [VHI] in both sexes) and also measured rater self-perceived attractiveness and sociosexuality. As expected, WHR and VHI were important predictors of female body attractiveness, while WCR and VHI were important predictors of male body attractiveness. Results indicated that male rater sociosexuality scores were positively associated with strength of preference for attractive (low) VHI and attractive (low) WHR in female bodies. Moreover, male rater self-perceived attractiveness was positively associated with strength of preference for low VHI in female bodies. The only evidence of condition-dependent preferences in females was a positive association between attractive VHI in female raters and preferences for attractive (low) WCR in male bodies. No other significant associations were observed in either sex between aspects of rater body shape and strength of preferences for attractive opposite-sex body traits. These results suggest that among male raters, rater self-perceived attractiveness and sociosexuality are important predictors of preference strength for attractive opposite
Sonic Anemometer Vertical Wind Speed Measurement Errors
NASA Astrophysics Data System (ADS)
Kochendorfer, J.; Horst, T. W.; Frank, J. M.; Massman, W. J.; Meyers, T. P.
2014-12-01
In eddy covariance studies, errors in the measured vertical wind speed cause errors of a similar magnitude in the vertical fluxes of energy and mass. Several recent studies on the accuracy of sonic anemometer measurements indicate that non-orthogonal sonic anemometers used in eddy covariance studies underestimate the vertical wind speed. It has been suggested that this underestimation is caused by flow distortion from the interference of the structure of the anemometer itself on the flow. When oriented ideally with respect to the horizontal wind direction, orthogonal sonic anemometers that measure the vertical wind speed with a single vertically-oriented acoustic path may measure the vertical wind speed more accurately in typical surface-layer conditions. For non-orthogonal sonic anemometers, Horst et al. (2014) proposed that transducer shadowing may be a dominant factor in sonic flow distortion. As the ratio of sonic transducer diameter to path length and the zenith angle of the three transducer paths decrease, the effects of transducer shadowing on measurements of vertical velocity will decrease. An overview of this research and some of the methods available to correct historical data will be presented.
Relationships of Measurement Error and Prediction Error in Observed-Score Regression
ERIC Educational Resources Information Center
Moses, Tim
2012-01-01
The focus of this paper is assessing the impact of measurement errors on the prediction error of an observed-score regression. Measures are presented and described for decomposing the linear regression's prediction error variance into parts attributable to the true score variance and the error variances of the dependent variable and the predictor…
Errors Associated With Measurements from Imaging Probes
NASA Astrophysics Data System (ADS)
Heymsfield, A.; Bansemer, A.
2015-12-01
Imaging probes, collecting data on particles from about 20 or 50 microns to several centimeters, have been gathering data on droplet and ice microphysics for more than 40 years. During that period, a number of problems with the measurements have been identified, including questions about the depth of field of particles within the probes' sample volume and ice shattering, among others. Many different software packages have been developed to process and interpret the data, leading to differences in the particle size distributions and in the estimates of extinction, ice water content, and radar reflectivity obtained from the same data. Given the numerous complications associated with imaging probe data, we have developed an optical array probe simulation package to explore the errors that can be expected with actual data. We simulate full particle size distributions with known properties and then process the data with the same software that is used to process real-life data. We show that there are significant errors in the retrieved particle size distributions as well as in derived parameters such as liquid/ice water content and total number concentration. Furthermore, the nature of these errors changes as a function of the shape of the simulated size distribution and the physical and electronic characteristics of the instrument. We will introduce some methods to improve the retrieval of particle size distributions from real-life data.
BAHRAMI, Fariba; NOORIZADEH DEHKORDI, Shohreh; DADGOO, Mehdi
2017-01-01
Objective We aimed to investigate the intra-rater and inter-rater reliability of the 10 meter walk test (10 MWT) in adults with spastic cerebral palsy (CP). Materials & Methods Thirty ambulatory adults with spastic CP participated in the summer of 2014 (19 men, 11 women; mean age 28 ± 7 yr, range 18-46 yr). Individuals were non-randomly selected by convenience sampling from the Ra'ad Rehabilitation Goodwill Complex in Tehran, Iran. They had GMFCS levels below IV (I, II, and III). The retest interval for the inter-rater study was one week. During the tests, participants walked at their maximum speed. Reliability was estimated with intraclass correlation coefficients (ICCs). Results The intra-rater ICC of the 10 MWT was 0.98 (95% confidence interval (CI) 0.96-0.99) for all participants, and >0.89 in GMFCS subgroups (95% CI lower bound >0.67). The inter-rater ICC was 0.998 (95% CI 0.996-0.999), and >0.993 in GMFCS subgroups (95% CI lower bound >0.977). Standard error of measurement (SEM) values for both studies were small (0.02 < SEM < 0.07). Conclusion Excellent intra-rater and inter-rater reliability of the 10 MWT in adults with CP, especially in those with moderate motor impairment (GMFCS level III), indicates that this tool can be used in clinics to assess the results of interventions. PMID:28277557
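The SEM and minimal-detectable-change figures reported in reliability studies like this one follow from the standard formulas SEM = SD·sqrt(1 − ICC) and MDC95 = 1.96·sqrt(2)·SEM. A minimal sketch, for illustration only:

```python
import math

def sem(sd, icc):
    """Standard error of measurement: SD * sqrt(1 - ICC)."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sd, icc):
    """Minimal detectable change at 95% confidence:
    1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2.0) * sem(sd, icc)

# Illustrative values: SD = 1 unit, ICC = 0.96 gives SEM = 0.2.
print(round(sem(1.0, 0.96), 3), round(mdc95(1.0, 0.96), 3))  # 0.2 0.554
```

The sqrt(2) factor reflects that detecting a change involves the error of two measurements, which is why MDC values are several times larger than the SEM.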
Detection system for ocular refractive error measurement.
Ventura, L; de Faria e Sousa, S J; de Castro, J C
1998-05-01
An automatic and objective system for measuring ocular refractive errors (myopia, hyperopia and astigmatism) was developed. The system consists of projecting a light target (a ring), using a diode laser (lambda = 850 nm), at the fundus of the patient's eye. The light beams scattered from the retina are submitted to an optical system and are analysed with regard to their vergence by a CCD detector (matrix). This system uses the same basic principle for the projection of beams into the tested eye as some commercial refractors, but it is innovative regarding the ring-shaped measuring target for the projection system and the detection system where a matrix detector provides a wider range of measurement and a less complex system for the optical alignment. Also a dedicated electronic circuit was not necessary for treating the electronic signals from the detector (as the usual refractors do); instead a commercial frame grabber was used and software based on the heuristic search technique was developed. All the guiding equations that describe the system as well as the image processing procedure are presented in detail. Measurements in model eyes and in human eyes are in good agreement with retinoscopic measurements and they are also as precise as these kinds of measurements require (0.125D and 5 degrees).
[Therapeutic errors and dose measuring devices].
García-Tornel, S; Torrent, M L; Sentís, J; Estella, G; Estruch, M A
1982-06-01
To investigate the possibility of therapeutic error in the administration of syrups, the authors measured the capacity (mean ± SD) of 158 household spoons. They classified the spoons into four groups: group I (table spoons), 49 units (11.65 ± 2.10 cc); group II (tea spoons), 41 units (4.70 ± 1.04 cc); group III (coffee spoons), 41 units (2.60 ± 0.59 cc); and group IV (miscellaneous), 27 units. They compared the first three groups with the theoretical values of 15, 5, and 2.5 cc, respectively, and found statistically significant differences in the first group. They also analyzed the information paediatricians receive from the drug reference guides ("vademecums") they usually consult, examining two points: whether each syrup comes with a measuring device, and whether the drug concentration is indicated. Only 18% of the syrups include a measuring device, while about 88% of the drugs indicate their concentration (mg/cc). The authors conclude that, to prevent dosage errors, the pharmaceutical industry should include measuring devices with its products; when none is provided, the safest option is to use a syringe.
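Using the group I figures from the abstract (11.65 ± 2.10 cc, n = 49, nominal 15 cc), a one-sample t statistic illustrates why the table-spoon discrepancy is statistically significant. This is a reconstruction for illustration, not the authors' published analysis:

```python
import math

def one_sample_t(mean, sd, n, mu0):
    """t statistic comparing a sample mean against a nominal value."""
    return (mean - mu0) / (sd / math.sqrt(n))

# Group I table spoons: 11.65 +/- 2.10 cc, n = 49, nominal 15 cc.
print(round(one_sample_t(11.65, 2.10, 49, 15.0), 2))  # about -11.17
```

A |t| above 11 with 48 degrees of freedom is far beyond any conventional significance threshold, consistent with the abstract's finding for the table-spoon group.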
Measures of Linguistic Accuracy in Second Language Writing Research.
ERIC Educational Resources Information Center
Polio, Charlene G.
1997-01-01
Investigates the reliability of measures of linguistic accuracy in second language writing. The study uses a holistic scale, error-free T-units, and an error classification system on the essays of English-as-a-Second-Language students and discusses why disagreements arise within a rater and between raters. (24 references) (Author/CK)
Yan, Ying; Yi, Grace Y
2016-07-01
Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.
Twins and the Study of Rater (Dis)agreement
ERIC Educational Resources Information Center
Bartels, Meike; Boomsma, Dorret I.; Hudziak, James J.; van Beijsterveldt, Toos C. E. M.; van den Oord, Edwin J. C. G.
2007-01-01
Genetically informative data can be used to address fundamental questions concerning the measurement of behavior in children. The authors illustrate this with longitudinal multiple-rater data on internalizing problems in twins. Valid information on the behavior of a child is obtained for behavior that multiple raters agree upon and for…
A Comparison of Assessment Methods and Raters in Product Creativity
ERIC Educational Resources Information Center
Lu, Chia-Chen; Luh, Ding-Bang
2012-01-01
Although previous studies have attempted to use raters with different experience to rate product creativity by adopting the Consensual Assessment Technique (CAT) approach, the validity of replacing CAT with another measurement tool has not been adequately tested. This study aimed to compare raters with different levels of experience (expert vs.…
Kreiter, Clarence D.; Wilson, Adam B.; Humbert, Aloysius J.; Wade, Patricia A.
2016-01-01
Background When ratings of student performance within the clerkship consist of a variable number of ratings per clinical teacher (rater), an important measurement question arises regarding how to combine such ratings to accurately summarize performance. As previous G studies have not estimated the independent influence of occasion and rater facets in observational ratings within the clinic, this study was designed to provide estimates of these two sources of error. Method During 2 years of an emergency medicine clerkship at a large midwestern university, 592 students were evaluated an average of 15.9 times. Ratings were performed at the end of clinical shifts, and students often received multiple ratings from the same rater. A completely nested G study model (occasion: rater: person) was used to analyze sampled rating data. Results The variance component (VC) related to occasion was small relative to the VC associated with rater. The D study clearly demonstrates that having a preceptor rate a student on multiple occasions does not substantially enhance the reliability of a clerkship performance summary score. Conclusions Although further research is needed, it is clear that case-specific factors do not explain the low correlation between ratings and that having one or two raters repeatedly rate a student on different occasions/cases is unlikely to yield a reliable mean score. This research suggests that it may be more efficient to have a preceptor rate a student just once. However, when multiple ratings from a single preceptor are available for a student, it is recommended that a mean of the preceptor's ratings be used to calculate the student's overall mean performance score. PMID:26925540
Measuring Systematic Error with Curve Fits
ERIC Educational Resources Information Center
Rupright, Mark E.
2011-01-01
Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…
Observer error in blood pressure measurement.
Neufeld, P D; Johnson, D L
1986-01-01
This paper describes an experiment undertaken to determine observer error in measuring blood pressure by the auscultatory method. A microcomputer was used to display a simulated mercury manometer and play back tape-recorded Korotkoff sounds synchronized with the fall of the mercury column. Each observer's readings were entered into the computer, which displayed a histogram of all readings taken up to that point and thus showed the variation among observers. The procedure, which could easily be adapted for use in teaching, was used to test 311 observers drawn from physicians, nurses, medical students, nursing students and others at nine health care institutions in Ottawa. The results showed a strong bias for even-digit readings and standard deviations of roughly 5 to 6 mm Hg. The standard deviation of the systolic readings was somewhat smaller for the physicians as a group than for the nurses (3.5 vs. 5.9 mm Hg). However, the standard deviations of the diastolic readings were roughly equal for the two groups (approximately 5.5 mm Hg). PMID:3756693
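The even-digit bias reported above is usually quantified as terminal digit preference: tally the last digit of each reading and look for a spike at 0 and other even digits. A minimal sketch with invented readings:

```python
from collections import Counter

def terminal_digit_counts(readings):
    """Tally the last digit of each reading; a spike at even digits
    reveals terminal digit preference."""
    return Counter(r % 10 for r in readings)

def even_digit_fraction(readings):
    """Fraction of readings ending in an even digit (0.5 expected
    under unbiased rounding of a continuous quantity)."""
    c = terminal_digit_counts(readings)
    return sum(n for d, n in c.items() if d % 2 == 0) / len(readings)

# Invented readings showing a typical even-digit bias.
readings = [120, 118, 122, 120, 124, 120, 118, 121]
print(even_digit_fraction(readings))  # 0.875
```

An even-digit fraction well above 0.5, as in the sample above, mirrors the bias the study observed across its 311 observers.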
Monitoring the Random Errors of Nuclear Material Measurements
1980-06-01
Monitoring and controlling random errors is an important function of a measurement control program. This report describes the principal sources of random error in the common nuclear material measurement processes and the most important elements of a program for monitoring, evaluating and controlling the random error standard deviations of these processes.
Measuring errors and adverse events in health care.
Thomas, Eric J; Petersen, Laura A
2003-01-01
In this paper, we identify 8 methods used to measure errors and adverse events in health care and discuss their strengths and weaknesses. We focus on the reliability and validity of each, as well as the ability to detect latent errors (or system errors) versus active errors and adverse events. We propose a general framework to help health care providers, researchers, and administrators choose the most appropriate methods to meet their patient safety measurement goals.
Rapid mapping of volumetric machine errors using distance measurements
Krulewich, D.A.
1998-04-01
This paper describes a relatively inexpensive, fast, and easy-to-execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for systematic errors. The method consists of three steps: (1) modeling the relationship between volumetric error and the current state of the machine; (2) acquiring error data based on distance measurements throughout the work volume; and (3) fitting the error model using the nonlinear equation for distance. The error model is formulated from the kinematic relationship among the six degrees of freedom of error on each moving axis. Each parametric error is expressed as a function of position, and these are combined to predict the error between the functional point and the workpiece, also as a function of position. A series of distances between several fixed base locations and various functional points in the work volume is measured using a Laser Ball Bar (LBB). Each measured distance is a nonlinear function of the commanded location of the machine, the machine error, and the base locations. Using the error model, the nonlinear equation is solved, producing a fit for the error model. Given approximate distances between each pair of base locations, the exact base locations in the machine coordinate system are determined during the nonlinear fitting procedure. Furthermore, with the use of more than three base locations, bias error in the measuring instrument can be removed. The volumetric errors of a three-axis commercial machining center have been mapped using this procedure. In this study, only errors associated with the nominal position of the machine were considered; other errors, such as thermally induced and load-induced errors, were not considered, although the mathematical model has the ability to account for them. Due to the proprietary nature of the projects we are
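The fitting step, solving nonlinear distance equations, can be illustrated with a toy Gauss-Newton fit that recovers an unknown base location from error-free distances to known functional points. The coordinates are invented and the real procedure fits many more parameters (the full kinematic error model):

```python
import numpy as np

# Invented functional-point positions and one fixed base location.
pts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.],
                [0., 0., 1.], [1., 1., 1.]])
base_true = np.array([0.3, -0.2, 0.5])
d_meas = np.linalg.norm(pts - base_true, axis=1)  # "measured" distances

# Gauss-Newton iteration on the nonlinear distance equations.
base = np.array([0.2, 0.0, 0.4])                  # initial guess
for _ in range(50):
    diff = base - pts
    d = np.linalg.norm(diff, axis=1)
    J = diff / d[:, None]            # d|b - p_i|/db = (b - p_i)/|b - p_i|
    r = d - d_meas                   # distance residuals
    base = base - np.linalg.solve(J.T @ J, J.T @ r)
print(np.round(base, 6))             # recovers base_true
```

With noisy distances the same normal-equation step yields a least-squares estimate, and extra base locations provide the redundancy the paper uses to remove instrument bias.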
Filtered kriging for spatial data with heterogeneous measurement error variances.
Christensen, William F
2011-09-01
When predicting values for the measurement-error-free component of an observed spatial process, it is generally assumed that the process has a common measurement error variance. However, it is often the case that each measurement in a spatial data set has a known, site-specific measurement error variance, rendering the observed process nonstationary. We present a simple approach for estimating the semivariogram of the unobservable measurement-error-free process using a bias adjustment of the classical semivariogram formula. We then develop a new kriging predictor that filters the measurement errors. For scenarios where each site's measurement error variance is a function of the process of interest, we recommend an approach that also uses a variance-stabilizing transformation. The properties of the heterogeneous variance measurement-error-filtered kriging (HFK) predictor and variance-stabilized HFK predictor, and the improvement of these approaches over standard measurement-error-filtered kriging are demonstrated using simulation. The approach is illustrated with climate model output from the Hudson Strait area in northern Canada. In the illustration, locations with high or low measurement error variances are appropriately down- or upweighted in the prediction of the underlying process, yielding a realistically smooth picture of the phenomenon of interest.
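A toy version of the error-filtering idea: site-specific measurement error variances are added to the data covariance only, not to the target covariance vector, so noisy sites receive less kriging weight. The exponential covariance and its parameters are assumptions for illustration, not the paper's model:

```python
import numpy as np

def filtered_kriging(coords, z, err_var, x0, sill=1.0, rho=2.0):
    """Simple-kriging predictor that filters measurement error:
    err_var enters the data covariance diagonal only, so sites with
    large error variance are down-weighted. Exponential covariance
    and a plug-in data mean are simplifying assumptions."""
    def cov(h):
        return sill * np.exp(-h / rho)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    C = cov(d) + np.diag(err_var)                  # noisy-data covariance
    c0 = cov(np.linalg.norm(coords - x0, axis=1))  # error-free target
    w = np.linalg.solve(C, c0)                     # kriging weights
    m = z.mean()
    return w @ (z - m) + m

coords = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
z = np.array([1.0, 2.0, 3.0, 4.0])
# With error-free data, predicting at a data site reproduces the datum.
print(round(filtered_kriging(coords, z, np.zeros(4), coords[0]), 6))  # 1.0
```

Setting a large error variance at one site pulls the prediction there away from the noisy datum and toward the neighbors, which is the down-weighting behavior the abstract describes.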
Measuring Local Gradient and Skew Quadrupole Errors in RHIC IRs.
Cardona, J.; Peggs, S.; Pilat, R.; Ptitsyn, V.
2004-07-05
The measurement of local linear errors at RHIC interaction regions using an "action and phase" analysis of difference orbits has already been presented. This paper evaluates the accuracy of this technique using difference orbits that were taken when known gradient errors and skew quadrupole errors were intentionally introduced. It also presents action and phase analysis of simulated orbits when controlled errors are intentionally placed in a RHIC simulation model.
Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?
NASA Technical Reports Server (NTRS)
Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan
2013-01-01
The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into random errors, and a lack of prediction capability. Therefore, the multiplicative error model is the better choice.
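A multiplicative error model of the kind favored above is commonly written as measurement = α · truthᵠ · ε, which becomes linear in log space so the systematic part (α, β) separates cleanly from the random part (ε). A simulated sketch with invented parameters, not the letter's actual fit:

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.gamma(shape=2.0, scale=5.0, size=10000)   # "true" daily rain

# Multiplicative model: meas = alpha * truth**beta * eps.
# Taking logs gives: log meas = log alpha + beta * log truth + log eps.
eps = rng.lognormal(mean=0.0, sigma=0.3, size=truth.size)
meas = 0.8 * truth ** 1.1 * eps

# Ordinary least squares in log space recovers the systematic part.
A = np.column_stack([np.ones(truth.size), np.log(truth)])
coef, *_ = np.linalg.lstsq(A, np.log(meas), rcond=None)
print(np.exp(coef[0]), coef[1])   # close to 0.8 and 1.1
```

Under an additive model the same data would show error variance growing with rain rate; in log space the residual spread is constant, which is the "cleaner separation" the letter reports.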
Reverse attenuation in interaction terms due to covariate measurement error.
Muff, Stefanie; Keller, Lukas F
2015-11-01
Covariate measurement error may bias regression coefficients in generalized linear models. The influence of measurement error on interaction parameters has, however, only rarely been investigated in depth, and where it has, attenuation effects were reported. In this paper, we show that reverse attenuation of interaction effects may also emerge, namely when heteroscedastic measurement error or sampling variances of a mismeasured covariate are present; neither scenario is unrealistic in practice. Theoretical findings are illustrated with simulations. A Bayesian approach employing integrated nested Laplace approximations is suggested to model the heteroscedastic measurement error and covariate variances, and an application shows that the method is able to recover approximately correct parameter estimates.
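The classical baseline against which the paper's "reverse" result is stated can be sketched in a few lines (a hypothetical simulation, not the paper's model): under homoscedastic error on x, the naive interaction estimate is attenuated toward zero by the reliability factor 1/(1+var_u). With centred, independent x and z, the product w·z is orthogonal to the main effects, so a simple-regression slope suffices.

```python
import random

random.seed(1)

# Hypothetical sketch: classical (homoscedastic) error on x attenuates the
# interaction coefficient b3.  (The paper's point is that *heteroscedastic*
# error can instead reverse this attenuation; that case is not simulated here.)
n, b3, var_u = 20000, 0.5, 1.0          # reliability of w is 1/(1 + var_u) = 0.5
x = [random.gauss(0, 1) for _ in range(n)]
z = [random.gauss(0, 1) for _ in range(n)]
y = [1.0 + xi + zi + b3 * xi * zi + random.gauss(0, 0.5) for xi, zi in zip(x, z)]
w = [xi + random.gauss(0, var_u ** 0.5) for xi in x]   # error-prone version of x

# w*z is orthogonal to w and z here, so the naive interaction estimate is
# just the simple-regression slope cov(y, w*z) / var(w*z).
wz = [wi * zi for wi, zi in zip(w, z)]
m_wz, m_y = sum(wz) / n, sum(y) / n
b3_naive = (sum((p - m_wz) * (yi - m_y) for p, yi in zip(wz, y))
            / sum((p - m_wz) ** 2 for p in wz))
print("true b3 =", b3, "| naive estimate =", round(b3_naive, 3),
      "| theory b3/(1+var_u) =", b3 / (1 + var_u))
```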
ERIC Educational Resources Information Center
Srsen, Katja Groleger; Vidmar, Gaj; Pikl, Masa; Vrecar, Irena; Burja, Cirila; Krusec, Klavdija
2012-01-01
The Halliwick concept is widely used in different settings to promote joyful movement in water and swimming. To assess the swimming skills and progression of an individual swimmer, a valid and reliable measure should be used. The Halliwick-concept-based Swimming with Independent Measure (SWIM) was introduced for this purpose. We aimed to determine…
Error analysis for a laser differential confocal radius measurement system.
Wang, Xu; Qiu, Lirong; Zhao, Weiqian; Xiao, Yang; Wang, Zhongyu
2015-02-10
To further improve the measurement accuracy of the previously developed laser differential confocal radius measurement system (DCRMS), an error compensation model is established for the principal error sources, including laser source offset, test sphere position adjustment offset, test sphere figure error, and motion error, based on an analysis of how these errors influence the measured radius of curvature. Theoretical analyses and experiments indicate that the expanded uncertainty of the DCRMS is reduced to U = 0.13 μm + 0.9 ppm·R (k = 2) through the error compensation model. The error analysis and compensation model established in this study can provide the theoretical foundation for improving the measurement accuracy of the DCRMS.
Thinking Scientifically: Understanding Measurement and Errors
ERIC Educational Resources Information Center
Alagumalai, Sivakumar
2015-01-01
Thinking scientifically consists of systematic observation, experiment, measurement, and the testing and modification of research questions. In effect, science is about measurement and the understanding of causation. Measurement is an integral part of science and engineering, and has pertinent implications for the human sciences. No measurement is…
Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint
Stynes, J. K.; Ihas, B.
2012-04-01
The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work derived the reflector slope errors from the reflected image of the absorber together with an independent measurement of the absorber location, so the accuracy of the slope-error measurement depended on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.
Deconvolution Estimation in Measurement Error Models: The R Package decon
Wang, Xiao-Feng; Wang, Bin
2011-01-01
Data from many scientific areas often come with measurement error. Density or distribution function estimation from contaminated data and nonparametric regression with errors-in-variables are two important topics in measurement error models. In this paper, we present a new software package decon for R, which contains a collection of functions that use the deconvolution kernel methods to deal with the measurement error problems. The functions allow the errors to be either homoscedastic or heteroscedastic. To make the deconvolution estimators computationally more efficient in R, we adapt the fast Fourier transform algorithm for density estimation with error-free data to the deconvolution kernel estimation. We discuss the practical selection of the smoothing parameter in deconvolution methods and illustrate the use of the package through both simulated and real examples. PMID:21614139
Pressure Change Measurement Leak Testing Errors
Pryor, Jeff M; Walker, William C
2014-01-01
A pressure change test is a common leak testing method used in construction and Non-Destructive Examination (NDE). The test is known as being a fast, simple, and easy to apply evaluation method. While this method may be fairly quick to conduct and require simple instrumentation, the engineering behind this type of test is more complex than is apparent on the surface. This paper intends to discuss some of the more common errors made during the application of a pressure change test and give the test engineer insight into how to correctly compensate for these factors. The principles discussed here apply to ideal gases such as air or other monatomic or diatomic gases; however, these same principles can be applied to polyatomic gases or liquid flow rate with altered formulas specific to those types of tests using the same methodology.
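The most common compensation the abstract alludes to is for temperature: for an ideal gas in a fixed volume, m = PV/(R_s·T), so a leak must be judged from P/T rather than from P alone. A minimal sketch with assumed numbers (not from the paper):

```python
# Hypothetical vessel: a pressure drop that is entirely explained by cooling.
V = 2.0            # vessel volume, m^3 (assumed)
R_s = 287.05       # specific gas constant of air, J/(kg*K)

P1, T1 = 500_000.0, 293.15   # start: 500 kPa absolute, 20 C
P2, T2 = 491_500.0, 288.15   # end:   pressure fell, but so did temperature

m1 = P1 * V / (R_s * T1)                 # contained mass at start, kg
m2 = P2 * V / (R_s * T2)                 # contained mass at end, kg
uncompensated_drop = (P1 - P2) / P1      # naive reading: looks like a leak
mass_loss_fraction = (m1 - m2) / m1      # temperature-compensated result
print(f"apparent pressure loss: {uncompensated_drop:.3%}")
print(f"temperature-compensated mass loss: {mass_loss_fraction:.3%}")
```

Here the naive pressure reading suggests a 1.7% loss, while the compensated mass change is essentially zero: the "leak" was thermal.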
The Impact of Covariate Measurement Error on Risk Prediction
Khudyakov, Polyna; Gorfine, Malka; Zucker, David; Spiegelman, Donna
2015-01-01
In the development of risk prediction models, predictors are often measured with error. In this paper, we investigate the impact of covariate measurement error on risk prediction. We compare the prediction performance using a costly variable measured without error, along with error-free covariates, to that of a model based on an inexpensive surrogate along with the error-free covariates. We consider continuous error-prone covariates with homoscedastic and heteroscedastic errors, and also a discrete misclassified covariate. Prediction performance is evaluated by the area under the receiver operating characteristic curve (AUC), the Brier score (BS), and the ratio of the observed to the expected number of events (calibration). In an extensive numerical study, we show that (i) the prediction model with the error-prone covariate is very well calibrated, even when it is mis-specified; (ii) using the error-prone covariate instead of the true covariate can reduce the AUC and increase the BS dramatically; (iii) adding an auxiliary variable, which is correlated with the error-prone covariate but conditionally independent of the outcome given all covariates in the true model, can improve the AUC and BS substantially. We conclude that reducing measurement error in covariates will improve the ensuing risk prediction, unless the association between the error-free and error-prone covariates is very high. Finally, we demonstrate how a validation study can be used to assess the effect of mismeasured covariates on risk prediction. These concepts are illustrated in a breast cancer risk prediction model developed in the Nurses’ Health Study. PMID:25865315
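Finding (ii), the AUC loss from substituting an error-prone surrogate, can be illustrated with a synthetic logistic model (hypothetical numbers, not the paper's breast cancer model). AUC is computed as the Mann-Whitney probability of correctly ranking a case above a control.

```python
import math, random
from bisect import bisect_right

random.seed(2)

def auc(scores, labels):
    # Mann-Whitney estimate: fraction of (case, control) pairs ranked correctly.
    pos = sorted(s for s, y in zip(scores, labels) if y)
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum(len(pos) - bisect_right(pos, s) for s in neg)
    return wins / (len(pos) * len(neg))

# Synthetic data: outcome depends on the true covariate x; the surrogate
# w = x + u carries classical measurement error (all parameters assumed).
n = 20000
x = [random.gauss(0, 1) for _ in range(n)]
w = [xi + random.gauss(0, 1) for xi in x]
y = [random.random() < 1 / (1 + math.exp(-1.5 * xi)) for xi in x]

print("AUC with true covariate     :", round(auc(x, y), 3))
print("AUC with error-prone version:", round(auc(w, y), 3))
```

The error-prone surrogate costs several points of AUC here, consistent with the paper's qualitative message that covariate error degrades discrimination.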
Temperature error in radiation thermometry caused by emissivity and reflectance measurement error.
Corwin, R R; Rodenburghii, A
1994-04-01
A general expression for the temperature error caused by emissivity uncertainty is developed, and it is concluded that lower-wavelength systems provide significantly less temperature error. A technique to measure the normal emissivity is proposed that uses a normally incident light beam and an aperture to collect a portion of the energy reflected from the surface and to measure essentially both the specular component and the biangular reflectance at the edge of the aperture. The theoretical results show that the aperture size need not be substantial to provide reasonably low temperature errors for a broad class of materials and surface reflectance conditions.
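The conclusion that shorter wavelengths give less temperature error can be sketched under the Wien approximation, where an emissivity error maps to a temperature error via 1/T_meas = 1/T_true + (λ/c2)·ln(ε_assumed/ε_true). The target temperature and emissivity values below are hypothetical, not from the paper.

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def measured_temperature(t_true, wavelength, eps_true, eps_assumed):
    # Wien-approximation result of equating detected spectral radiance.
    inv = 1.0 / t_true + (wavelength / C2) * math.log(eps_assumed / eps_true)
    return 1.0 / inv

T = 1200.0                           # K, hypothetical target
eps_true, eps_assumed = 0.70, 0.80   # assumed 10-point emissivity mistake
err_short = measured_temperature(T, 0.65e-6, eps_true, eps_assumed) - T
err_long = measured_temperature(T, 3.9e-6, eps_true, eps_assumed) - T
print(f"0.65 um pyrometer: error = {err_short:+.1f} K")
print(f"3.90 um pyrometer: error = {err_long:+.1f} K")
```

The same emissivity mistake costs roughly six times more temperature error at 3.9 μm than at 0.65 μm in this sketch.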
System Measures Errors Between Time-Code Signals
NASA Technical Reports Server (NTRS)
Cree, David; Venkatesh, C. N.
1993-01-01
System measures timing errors between signals produced by three asynchronous time-code generators. Errors between 1-second clock pulses resolved to 2 microseconds. Basic principle of computation of timing errors as follows: central processing unit in microcontroller constantly monitors time data received from time-code generators for changes in 1-second time-code intervals. In response to any such change, microprocessor buffers count of 16-bit internal timer.
Does a Rater's Professional Background Influence Communication Skills Assessment?
Artemiou, Elpida; Hecker, Kent G; Adams, Cindy L; Coe, Jason B
2015-01-01
There is increasing pressure in veterinary education to teach and assess communication skills, with the Objective Structured Clinical Examination (OSCE) being the most common assessment method. Previous research reveals that raters are a large source of variance in OSCEs. This study focused on examining the effect of raters' professional background as a source of variance when assessing students' communication skills. Twenty-three raters were categorized according to their professional background: clinical sciences (n=11), basic sciences (n=4), clinical communication (n=5), or hospital administrator/clinical skills technicians (n=3). Raters from each professional background were assigned to the same station and assessed the same students during two four-station OSCEs. Students were in year 2 of their pre-clinical program. Repeated-measures ANOVA results showed that OSCE scores awarded by the rater groups differed significantly: matched station 1, F(2,91)=6.97, p=.002; matched station 2, F(3,90)=13.95, p=.001; matched station 3, F(3,90)=8.76, p=.001; and matched station 4, F(2,91)=30.60, p=.001. A significant time effect between the two OSCEs was calculated for matched stations 1, 2, and 4, indicating improved student performances. Raters with a clinical communication skills background assigned scores that were significantly lower than those of the other rater groups. Analysis of written feedback provided by the clinical sciences raters showed that they were influenced by the students' clinical knowledge of the case and that they did not rely solely on the communication checklist items. This study shows that it is important to consider rater background both in recruitment and in training programs for communication skills assessment.
Incorporating measurement error in n = 1 psychological autoregressive modeling.
Schuurman, Noémi K; Houtveen, Jan H; Hamaker, Ellen L
2015-01-01
Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30-50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters.
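The bias the abstract describes is easy to reproduce with a hypothetical simulation: the lag-1 autocorrelation of an AR(1) process observed through white measurement noise is attenuated by the reliability var(x)/(var(x)+var(noise)), which is why a naive AR fit underestimates the autoregressive parameter.

```python
import random

random.seed(3)

# Latent AR(1) process x_t = phi * x_{t-1} + innovation (parameters assumed).
phi, n = 0.7, 50000
x = [0.0]
for _ in range(n - 1):
    x.append(phi * x[-1] + random.gauss(0, 1))
y = [xi + random.gauss(0, 1) for xi in x]   # observed series with white noise

def lag1_autocorr(s):
    # naive AR(1) estimate: lag-1 sample autocorrelation
    m = sum(s) / len(s)
    num = sum((s[i] - m) * (s[i + 1] - m) for i in range(len(s) - 1))
    den = sum((v - m) ** 2 for v in s)
    return num / den

print("phi from latent x   :", round(lag1_autocorr(x), 3))   # close to 0.7
print("phi from observed y :", round(lag1_autocorr(y), 3))   # biased toward 0
```

With var(x) = 1/(1-phi^2) ≈ 1.96 and unit noise variance, the naive estimate shrinks toward phi × 1.96/2.96 ≈ 0.46, mirroring the underestimation reported in the abstract.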
Conditional Standard Errors of Measurement for Composite Scores Using IRT
ERIC Educational Resources Information Center
Kolen, Michael J.; Wang, Tianyou; Lee, Won-Chan
2012-01-01
Composite scores are often formed from test scores on educational achievement test batteries to provide a single index of achievement over two or more content areas or two or more item types on that test. Composite scores are subject to measurement error, and as with scores on individual tests, the amount of error variability typically depends on…
Laser Doppler anemometer measurements using nonorthogonal velocity components - Error estimates
NASA Technical Reports Server (NTRS)
Orloff, K. L.; Snyder, P. K.
1982-01-01
Laser Doppler anemometers (LDAs) that are arranged to measure nonorthogonal velocity components (from which orthogonal components are computed through transformation equations) are more susceptible to calibration and sampling errors than are systems with uncoupled channels. In this paper uncertainty methods and estimation theory are used to evaluate, respectively, the systematic and statistical errors that are present when such devices are applied to the measurement of mean velocities in turbulent flows. Statistical errors are estimated for two-channel LDA data that are either correlated or uncorrelated. For uncorrelated data the directional uncertainty of the measured velocity vector is considered for applications where mean streamline patterns are desired.
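The extra susceptibility of nonorthogonal channels can be sketched by propagating per-channel uncertainty through the geometric transformation: if the two measured directions are theta1 and theta2, the orthogonal components come from inverting a 2x2 matrix, and the inverse amplifies noise as the directions become more coupled. All numbers below are hypothetical.

```python
import math

def orthogonal_uncertainty(theta1, theta2, sigma):
    # Measured components: v_meas = A @ (u, v) with rows (cos t, sin t).
    # Propagate independent per-channel variance: C = A_inv (sigma^2 I) A_inv^T.
    a11, a12 = math.cos(theta1), math.sin(theta1)
    a21, a22 = math.cos(theta2), math.sin(theta2)
    det = a11 * a22 - a12 * a21
    inv = [[a22 / det, -a12 / det], [-a21 / det, a11 / det]]
    sigma_u = sigma * (inv[0][0] ** 2 + inv[0][1] ** 2) ** 0.5
    sigma_v = sigma * (inv[1][0] ** 2 + inv[1][1] ** 2) ** 0.5
    return sigma_u, sigma_v

sigma = 0.05  # m/s per channel (assumed)
for sep in (90.0, 45.0, 20.0):   # angle between the two measured directions
    su, sv = orthogonal_uncertainty(0.0, math.radians(sep), sigma)
    print(f"separation {sep:4.0f} deg -> sigma_u = {su:.3f}, sigma_v = {sv:.3f} m/s")
```

At 90 deg separation (uncoupled channels) nothing is amplified; at 20 deg the second orthogonal component's uncertainty is roughly four times the per-channel value.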
Hypothesis testing in an errors-in-variables model with heteroscedastic measurement errors.
de Castro, Mário; Galea, Manuel; Bolfarine, Heleno
2008-11-10
In many epidemiological studies it is common to resort to regression models relating incidence of a disease and its risk factors. The main goal of this paper is to consider inference on such models with error-prone observations and variances of the measurement errors changing across observations. We suppose that the observations follow a bivariate normal distribution and the measurement errors are normally distributed. Aggregate data allow the estimation of the error variances. Maximum likelihood estimates are computed numerically via the EM algorithm. Consistent estimation of the asymptotic variance of the maximum likelihood estimators is also discussed. Test statistics are proposed for testing hypotheses of interest. Further, we implement a simple graphical device that enables an assessment of the model's goodness of fit. Results of simulations concerning the properties of the test statistics are reported. The approach is illustrated with data from the WHO MONICA Project on cardiovascular disease.
Non-Gaussian Error Distributions of LMC Distance Moduli Measurements
NASA Astrophysics Data System (ADS)
Crandall, Sara; Ratra, Bharat
2015-12-01
We construct error distributions for a compilation of 232 Large Magellanic Cloud (LMC) distance moduli values from de Grijs et al. that give an LMC distance modulus of (m - M)0 = 18.49 ± 0.13 mag (median and 1σ symmetrized error). Central estimates found from weighted mean and median statistics are used to construct the error distributions. The weighted mean error distribution is non-Gaussian—flatter and broader than Gaussian—with more (less) probability in the tails (center) than is predicted by a Gaussian distribution; this could be the consequence of unaccounted-for systematic uncertainties. The median statistics error distribution, which does not make use of the individual measurement errors, is also non-Gaussian—more peaked than Gaussian—with less (more) probability in the tails (center) than is predicted by a Gaussian distribution; this could be the consequence of publication bias and/or the non-independence of the measurements. We also construct the error distributions of 247 SMC distance moduli values from de Grijs & Bono. We find a central estimate of (m - M)0 = 18.94 ± 0.14 mag (median and 1σ symmetrized error), and similar probabilities for the error distributions.
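The construction behind this kind of analysis can be sketched on synthetic data (Gaussian by design, so the tail fraction should match the Gaussian expectation; the real compilation is what deviates from it): form the number of standard deviations N_i = |x_i - x_center|/s_i for each measurement and compare the observed tail probability with the Gaussian prediction.

```python
import random

random.seed(4)

# Synthetic stand-in for a compilation: 232 measurements of an assumed truth
# 18.49 mag, each with its own quoted error s_i (values hypothetical).
truth = 18.49
data = [(random.gauss(truth, s), s)
        for s in [random.uniform(0.05, 0.3) for _ in range(232)]]

# Weighted mean central estimate and its formal error.
wsum = sum(1 / s ** 2 for _, s in data)
x_wmean = sum(x / s ** 2 for x, s in data) / wsum

# Error distribution: |N_i| > 1 should occur ~31.7% of the time if Gaussian.
n_sigma = [abs(x - x_wmean) / s for x, s in data]
frac_beyond_1 = sum(n > 1 for n in n_sigma) / len(n_sigma)
print(f"weighted mean = {x_wmean:.3f} +/- {wsum ** -0.5:.3f}")
print(f"fraction with |N| > 1: {frac_beyond_1:.2f} (Gaussian expects 0.32)")
```

A flatter-than-Gaussian distribution, as the paper finds for the weighted mean, would show up here as a tail fraction well above 0.32.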
Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes
ERIC Educational Resources Information Center
Zavorsky, Gerald S.
2010-01-01
Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within-subject standard deviation.…
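The quantities named in the abstract can be computed directly from duplicate measurements (the numbers below are hypothetical). The factor 2.77 is 1.96·√2: the difference between two future measurements on the same subject is expected to lie within ±2.77·Sw about 95% of the time.

```python
import math

# Hypothetical duplicate measurements (two trials per subject).
pairs = [(12.1, 12.4), (15.0, 14.6), (9.8, 10.1), (11.2, 11.2), (13.3, 12.9)]

# For duplicates, the within-subject variance is the mean of d^2/2,
# where d is the within-pair difference.
sw = math.sqrt(sum((a - b) ** 2 / 2 for a, b in pairs) / len(pairs))
repeatability = 2.77 * sw
print(f"within-subject SD = {sw:.3f}")
print(f"repeatability (2.77 * Sw) = {repeatability:.3f}")
```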
Virtual Raters for Reproducible and Objective Assessments in Radiology
NASA Astrophysics Data System (ADS)
Kleesiek, Jens; Petersen, Jens; Döring, Markus; Maier-Hein, Klaus; Köthe, Ullrich; Wick, Wolfgang; Hamprecht, Fred A.; Bendszus, Martin; Biller, Armin
2016-04-01
Volumetric measurements in radiologic images are important for monitoring tumor growth and treatment response. To make these more reproducible and objective we introduce the concept of virtual raters (VRs). A virtual rater is obtained by combining knowledge of machine-learning algorithms trained with past annotations of multiple human raters with the instantaneous rating of one human expert; it is thus virtually guided by several experts. To evaluate the approach we perform experiments with multi-channel magnetic resonance imaging (MRI) data sets. In addition to gross tumor volume (GTV), we also investigate subcategories like edema, contrast-enhancing and non-enhancing tumor. The first data set consists of N = 71 longitudinal follow-up scans of 15 patients suffering from glioblastoma (GB). The second data set comprises N = 30 scans of low- and high-grade gliomas. For comparison we computed Pearson Correlation, Intra-class Correlation Coefficient (ICC) and Dice score. Virtual raters always lead to an improvement w.r.t. inter- and intra-rater agreement. Comparing the 2D Response Assessment in Neuro-Oncology (RANO) measurements to the volumetric measurements of the virtual raters results in a deviating rating in one-third of the cases. Hence, we believe that our approach will have an impact on the evaluation of clinical studies as well as on routine imaging diagnostics.
Temperature measurement error simulation of the pure rotational Raman lidar
NASA Astrophysics Data System (ADS)
Jia, Jingyu; Huang, Yong; Wang, Zhirui; Yi, Fan; Shen, Jianglin; Jia, Xiaoxing; Chen, Huabin; Yang, Chuan; Zhang, Mingyang
2015-11-01
Temperature represents the atmospheric thermodynamic state. Measuring the atmospheric temperature accurately and precisely is very important for understanding the physics of atmospheric processes. Lidar has some advantages for atmospheric temperature measurement. Based on the lidar equation and the theory of pure rotational Raman (PRR) scattering, we simulated the temperature measurement errors of a double-grating-polychromator (DGP) based PRR lidar. First, without considering the attenuation terms of the atmospheric transmittance and the range in the lidar equation, we simulated the temperature measurement errors that are influenced by the beam-splitting system parameters, such as the center wavelength, the receiving bandwidth, and the atmospheric temperature. We analyzed three types of temperature measurement errors in theory and propose several design methods for the beam-splitting system to reduce them. Second, we simulated the temperature measurement error profiles using the lidar equation. Once the lidar power-aperture product is determined, the main goal of our lidar system is to reduce the statistical and leakage errors.
Space acceleration measurement system triaxial sensor head error budget
NASA Technical Reports Server (NTRS)
Thomas, John E.; Peters, Rex B.; Finley, Brian D.
1992-01-01
The objective of the Space Acceleration Measurement System (SAMS) is to measure and record the microgravity environment for a given experiment aboard the Space Shuttle. To accomplish this, SAMS uses remote triaxial sensor heads (TSH) that can be mounted directly on or near an experiment. The errors of the TSH are reduced by calibrating it before and after each flight. The associated error budget for the calibration procedure is discussed here.
Identification and Minimization of Errors in Doppler Global Velocimetry Measurements
NASA Technical Reports Server (NTRS)
Meyers, James F.; Lee, Joseph W.
2000-01-01
A systematic laboratory investigation was conducted to identify potential measurement error sources in Doppler Global Velocimetry technology. Once identified, methods were developed to eliminate or at least minimize the effects of these errors. The areas considered included the iodine vapor cell, optical alignment, scattered light characteristics, noise sources, and the laser. Upon completion, the demonstrated measurement uncertainty was reduced to 0.5 m/sec.
Measuring worst-case errors in a robot workcell
Simon, R.W.; Brost, R.C.; Kholwadwala, D.K.
1997-10-01
Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors.
Methods to Assess Measurement Error in Questionnaires of Sedentary Behavior
Sampson, Joshua N; Matthews, Charles E; Freedman, Laurence; Carroll, Raymond J.; Kipnis, Victor
2015-01-01
Sedentary behavior has already been associated with mortality, cardiovascular disease, and cancer. Questionnaires are an affordable tool for measuring sedentary behavior in large epidemiological studies. Here, we introduce and evaluate two statistical methods for quantifying measurement error in questionnaires. Accurate estimates are needed for assessing questionnaire quality. The two methods can be applied to validation studies that measure a sedentary behavior by both questionnaire and accelerometer on multiple days. The first method fits a reduced model by assuming the accelerometer is without error, while the second method fits a more complete model that allows both measures to have error. Because accelerometers tend to be highly accurate, we show that ignoring the accelerometer's measurement error can result in more accurate estimates of measurement error in some scenarios. In this manuscript, we derive asymptotic approximations for the mean squared error of the estimated parameters from both methods, evaluate their dependence on study design and behavior characteristics, and offer an R package so investigators can make an informed choice between the two methods. We demonstrate the difference between the two methods in a recent validation study comparing Previous Day Recalls (PDR) to an accelerometer-based ActivPal. PMID:27340315
Interval sampling methods and measurement error: a computer simulation.
Wirth, Oliver; Slaven, James; Taylor, Matthew A
2014-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments.
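The simulation idea can be sketched end to end (hypothetical parameters, not the paper's full design): generate a behavior stream of one-second ticks, score it with momentary time sampling (MTS), partial-interval recording (PIR), and whole-interval recording (WIR), and compare each estimate with the true fraction of time the behavior occurred.

```python
import random

random.seed(5)

# Behavior stream: alternating random event / non-event bouts (durations assumed).
session = 3600                      # seconds observed
stream, state = [], False
while len(stream) < session:
    stream.extend([state] * random.randint(5, 60))
    state = not state
stream = stream[:session]
true_frac = sum(stream) / session

interval = 30                       # 30-second observation intervals
chunks = [stream[i:i + interval] for i in range(0, session, interval)]
mts = sum(c[-1] for c in chunks) / len(chunks)   # score the final moment only
pir = sum(any(c) for c in chunks) / len(chunks)  # any occurrence scores the interval
wir = sum(all(c) for c in chunks) / len(chunks)  # full occupancy scores the interval
print(f"true {true_frac:.2f} | MTS {mts:.2f} | PIR {pir:.2f} | WIR {wir:.2f}")
```

By construction PIR can only overestimate and WIR can only underestimate the true proportion, while MTS is roughly unbiased but noisier, which is one of the trade-offs such simulations quantify.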
Error-tradeoff and error-disturbance relations for incompatible quantum measurements.
Branciard, Cyril
2013-04-23
Heisenberg's uncertainty principle is one of the main tenets of quantum theory. Nevertheless, and despite its fundamental importance for our understanding of quantum foundations, there has been some confusion in its interpretation: Although Heisenberg's first argument was that the measurement of one observable on a quantum state necessarily disturbs another incompatible observable, standard uncertainty relations typically bound the indeterminacy of the outcomes when either one or the other observable is measured. In this paper, we quantify precisely Heisenberg's intuition. Even if two incompatible observables cannot be measured together, one can still approximate their joint measurement, at the price of introducing some errors with respect to the ideal measurement of each of them. We present a tight relation characterizing the optimal tradeoff between the error on one observable vs. the error on the other. As a particular case, our approach allows us to characterize the disturbance of an observable induced by the approximate measurement of another one; we also derive a stronger error-disturbance relation for this scenario.
Errors Associated with the Direct Measurement of Radionuclides in Wounds
Hickman, D P
2006-03-02
Work in radiation areas can occasionally result in accidental wounds containing radioactive materials. When a wound is incurred within a radiological area, the presence of radioactivity in the wound needs to be confirmed to determine if additional remedial action needs to be taken. Commonly used radiation area monitoring equipment is poorly suited for measurement of radioactive material buried within the tissue of the wound. The Lawrence Livermore National Laboratory (LLNL) In Vivo Measurement Facility has constructed a portable wound counter that provides sufficient detection of radioactivity in wounds as shown in Fig. 1. The LLNL wound measurement system is specifically designed to measure low energy photons that are emitted from uranium and transuranium radionuclides. The portable wound counting system uses a 2.5cm diameter by 1mm thick NaI(Tl) detector. The detector is connected to a Canberra NaI InSpector{trademark}. The InSpector interfaces with an IBM ThinkPad laptop computer, which operates under Genie 2000 software. The wound counting system is maintained and used at the LLNL In Vivo Measurement Facility. The hardware is designed to be portable and is occasionally deployed to respond to the LLNL Health Services facility or local hospitals for examination of personnel that may have radioactive materials within a wound. The typical detection levels in using the LLNL portable wound counter in a low background area is 0.4 nCi to 0.6 nCi assuming a near zero mass source. This paper documents the systematic errors associated with in vivo measurement of radioactive materials buried within wounds using the LLNL portable wound measurement system. These errors are divided into two basic categories, calibration errors and in vivo wound measurement errors. Within these categories, there are errors associated with particle self-absorption of photons, overlying tissue thickness, source distribution within the wound, and count errors. These errors have been examined and
Filter induced errors in laser anemometer measurements using counter processors
NASA Technical Reports Server (NTRS)
Oberle, L. G.; Seasholtz, R. G.
1985-01-01
Simulations of laser Doppler anemometer (LDA) systems have focused primarily on noise studies or biasing errors. Another possible source of error is the choice of filter types and filter cutoff frequencies. Before it is applied to the counter portion of the signal processor, a Doppler burst is filtered to remove the pedestal and to reduce noise in the frequency bands outside the region in which the signal occurs. Filtering, however, introduces errors into the measurement of the frequency of the input signal which leads to inaccurate results. Errors caused by signal filtering in an LDA counter-processor data acquisition system are evaluated and filters for a specific application which will reduce these errors are chosen.
Efficient measurement of quantum gate error by interleaved randomized benchmarking.
Magesan, Easwar; Gambetta, Jay M; Johnson, B R; Ryan, Colm A; Chow, Jerry M; Merkel, Seth T; da Silva, Marcus P; Keefe, George A; Rothwell, Mary B; Ohki, Thomas A; Ketchen, Mark B; Steffen, M
2012-08-24
We describe a scalable experimental protocol for estimating the average error of individual quantum computational gates. This protocol consists of interleaving random Clifford gates between the gate of interest and provides an estimate as well as theoretical bounds for the average error of the gate under test, so long as the average noise variation over all Clifford gates is small. This technique takes into account both state preparation and measurement errors and is scalable in the number of qubits. We apply this protocol to a superconducting qubit system and find a bounded average error of 0.003 [0,0.016] for the single-qubit gates X(π/2) and Y(π/2). These bounded values provide better estimates of the average error than those extracted via quantum process tomography.
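The interleaved protocol described above compares the fidelity decay of random Clifford sequences with and without the gate of interest interleaved. A minimal sketch of the point estimator, assuming depolarizing-decay parameters p_ref and p_int have already been fitted to the two curves (the numeric decay values below are illustrative, not the paper's measured parameters):

```python
# Interleaved randomized benchmarking: given the fitted decay parameter of
# the reference sequences (p_ref) and of the interleaved sequences (p_int),
# the average gate error is estimated as r_C = (d - 1) * (1 - p_int/p_ref) / d,
# where d is the Hilbert-space dimension (d = 2 for a single qubit).

def interleaved_rb_error(p_ref: float, p_int: float, d: int = 2) -> float:
    """Point estimate of the average error of the interleaved gate."""
    return (d - 1) * (1.0 - p_int / p_ref) / d

# Illustrative (assumed) decay parameters for a single-qubit experiment:
r_est = interleaved_rb_error(p_ref=0.9975, p_int=0.9915)
# r_est is on the order of the 0.003 gate errors quoted in the abstract.
```

The bounds quoted in the abstract come from the additional step of bounding how much the interleaved decay can deviate when the noise varies over the Clifford group; the sketch above gives only the central estimate.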
NASA Astrophysics Data System (ADS)
Kowalik, Waldemar W.; Garncarz, Beata E.; Kasprzak, Henryk T.
This work presents the results of computer simulation studies that define the measurement conditions which must be fulfilled so that the measurement results stay within allowable errors. The simulations define the allowable measurement errors (of interferogram scanning) and the conditions that the computer programs must satisfy so that the errors introduced by the mathematical operations and by the computer itself are as small as possible.
Measurement uncertainty evaluation of conicity error inspected on CMM
NASA Astrophysics Data System (ADS)
Wang, Dongxia; Song, Aiguo; Wen, Xiulan; Xu, Youxiong; Qiao, Guifang
2016-01-01
The cone is widely used in mechanical design for rotation, centering and fixing. Whether the conicity error can be measured and evaluated accurately will directly influence assembly accuracy and working performance. According to the new-generation geometrical product specification (GPS), the error and its measurement uncertainty should be evaluated together. The mathematical model of the minimum zone conicity error is established and an improved immune evolutionary algorithm (IIEA) is proposed to search for the conicity error. In the IIEA, initial antibodies are first generated using quasi-random sequences and two kinds of affinities are calculated. Then, each antibody clone is generated and self-adaptively mutated so as to maintain diversity. Similar antibodies are suppressed and new random antibodies are generated. Because the mathematical model of conicity error is strongly nonlinear and the input quantities are not independent, it is difficult to use the Guide to the expression of uncertainty in measurement (GUM) method to evaluate measurement uncertainty. An adaptive Monte Carlo method (AMCM) is proposed to estimate measurement uncertainty, in which the number of Monte Carlo trials is selected adaptively and the quality of the numerical results is directly controlled. The cone part was machined on a CK6140 lathe and measured on a Miracle NC 454 Coordinate Measuring Machine (CMM). The experimental results confirm that the proposed method not only searches for the approximate solution of the minimum zone conicity error (MZCE) rapidly and precisely, but also evaluates measurement uncertainty and gives control variables with an expected numerical tolerance. The conicity errors computed by the proposed method are 20%-40% smaller than those computed by the NC454 CMM software, and the evaluation accuracy improves significantly.
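The core idea of Monte Carlo uncertainty evaluation is to propagate the input distributions through the measurement model numerically rather than via GUM's linearization. A minimal non-adaptive sketch under assumed inputs (the model and all numeric values below are illustrative, not the paper's conicity model):

```python
import numpy as np

# Monte Carlo propagation of measurement uncertainty through a nonlinear
# model, in the spirit of the AMCM described above (but without the
# adaptive trial-count control). Hypothetical inputs: a measured cone
# half-angle (rad) and axial length (mm).
rng = np.random.default_rng(11)
n = 200_000

alpha = rng.normal(0.2618, 0.0005, n)    # half-angle ~15 deg, with uncertainty
length = rng.normal(50.0, 0.01, n)       # axial length, with uncertainty

# Nonlinear output quantity: diameter change over the cone's length.
diameter_spread = 2.0 * length * np.tan(alpha)

estimate = diameter_spread.mean()                      # best estimate
u = diameter_spread.std(ddof=1)                        # standard uncertainty
lo, hi = np.percentile(diameter_spread, [2.5, 97.5])   # 95% coverage interval
```

The adaptive variant in the paper would repeat such batches, increasing the number of trials until the numerical tolerance on `estimate`, `u`, and the interval endpoints is met.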
Laser tracker error determination using a network measurement
NASA Astrophysics Data System (ADS)
Hughes, Ben; Forbes, Alistair; Lewis, Andrew; Sun, Wenjuan; Veal, Dan; Nasr, Karim
2011-04-01
We report on a fast, easily implemented method to determine all the geometrical alignment errors of a laser tracker, to high precision. The technique requires no specialist equipment and can be performed in less than an hour. The technique is based on the determination of parameters of a geometric model of the laser tracker, using measurements of a set of fixed target locations, from multiple locations of the tracker. After fitting of the model parameters to the observed data, the model can be used to perform error correction of the raw laser tracker data or to derive correction parameters in the format of the tracker manufacturer's internal error map. In addition to determination of the model parameters, the method also determines the uncertainties and correlations associated with the parameters. We have tested the technique on a commercial laser tracker in the following way. We disabled the tracker's internal error compensation, and used a five-position, fifteen-target network to estimate all the geometric errors of the instrument. Using the error map generated from this network test, the tracker was able to pass a full performance validation test, conducted according to a recognized specification standard (ASME B89.4.19-2006). We conclude that the error correction determined from the network test is as effective as the manufacturer's own error correction methodologies.
Stronger error disturbance relations for incompatible quantum measurements
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Chiranjib; Shukla, Namrata; Pati, Arun Kumar
2016-03-01
We formulate a new error-disturbance relation, which is free from explicit dependence upon variances in observables. This error-disturbance relation shows improvement over the one provided by the Branciard inequality and the Ozawa inequality for some initial states and for a particular class of joint measurements under consideration. We also prove a modified form of Ozawa's error-disturbance relation. The latter relation provides a tighter bound compared to the Ozawa and the Branciard inequalities for a small number of states.
Beam induced vacuum measurement error in BEPC II
NASA Astrophysics Data System (ADS)
Huang, Tao; Xiao, Qiong; Peng, XiaoHua; Wang, HaiJing
2011-12-01
When the beam in the BEPCII storage ring aborts suddenly, the pressure measured by the cold cathode gauges and ion pumps drops suddenly and then decreases gradually to the base pressure. This shows that there is a beam-induced positive error in the pressure measurement during beam operation, the error being the difference between the measured and real pressures. Right after the beam aborts, the error disappears immediately and the measured pressure equals the real pressure. For one gauge, we can fit a nonlinear pressure-time curve to its measured pressure data starting 20 seconds after a sudden beam abort. From this negative-exponential pump-down curve, the real pressure at the moment the beam begins to abort is extrapolated. Using data from several sudden beam aborts, we obtained the errors of that gauge at different beam currents and found that the error is directly proportional to the beam current, as expected. A linear fit then gives the proportionality coefficient of the equation we derived to evaluate the real pressure at any time while the beam, at varying currents, is on.
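The extrapolation step can be sketched numerically. Assuming a pump-down curve of the form P(t) = P_base + A·exp(−t/τ) with a known base pressure, log(P − P_base) is linear in t and can be fitted directly; all numbers below are synthetic, not BEPCII data:

```python
import numpy as np

# Hypothetical post-abort pressure readings (Pa) from one gauge, sampled
# every 2 s starting 20 s after a sudden beam abort.
rng = np.random.default_rng(0)
t = np.arange(20.0, 60.0, 2.0)
p_base = 1.0e-8                          # known base pressure of the gauge
tau_true, a_true = 15.0, 4.0e-8
p_meas = p_base + a_true * np.exp(-t / tau_true) * (1 + 0.01 * rng.standard_normal(t.size))

# P(t) = P_base + A * exp(-t / tau)  =>  log(P - P_base) = log(A) - t / tau
slope, intercept = np.polyfit(t, np.log(p_meas - p_base), 1)
tau_fit = -1.0 / slope
p0_extrapolated = p_base + np.exp(intercept)   # real pressure at abort time t = 0

# The beam-induced error at a given current would then be the pressure
# measured during operation minus this extrapolated real pressure.
```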
Multiscale measurement error models for aggregated small area health data.
Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin
2016-08-01
Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smoothes the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via a shared random effect have been proposed. One of the main goals in aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we applied measurement error models in the framework of the multiscale approach. To assess the benefit of using multiscale measurement error models, we compare the performance of multiscale models with and without measurement error in both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates.
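The underestimation of the regression coefficient mentioned above is the classical attenuation effect. A minimal non-spatial simulation of it (this is an illustration of attenuation only, not the paper's multiscale model):

```python
import numpy as np

# Classical measurement error in a predictor attenuates the naive OLS slope
# by the reliability ratio var(x) / (var(x) + var(error)).
rng = np.random.default_rng(42)
n = 20_000
beta = 2.0

x_true = rng.normal(0.0, 1.0, n)            # true predictor, variance 1
y = beta * x_true + rng.normal(0.0, 1.0, n)
x_obs = x_true + rng.normal(0.0, 1.0, n)    # observed with error, variance 1

def ols_slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

ideal = ols_slope(x_true, y)   # close to beta = 2
naive = ols_slope(x_obs, y)    # attenuated towards beta * 1/(1+1) = 1
```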
Inter-rater reliability of select physical examination procedures in patients with neck pain.
Hanney, William J; George, Steven Z; Kolber, Morey J; Young, Ian; Salamh, Paul A; Cleland, Joshua A
2014-07-01
This study evaluated the inter-rater reliability of select examination procedures in patients with neck pain (NP) conducted over a 24- to 48-h period. Twenty-two patients with mechanical NP participated in a standardized examination. One examiner performed standardized examination procedures and a second blinded examiner repeated the procedures 24-48 h later with no treatment administered between examinations. Inter-rater reliability was calculated with Cohen's kappa and weighted kappa for ordinal data, while continuous data were assessed using an intraclass correlation coefficient, model 2,1 (ICC2,1). Coefficients for categorical variables ranged from poor to moderate agreement (-0.22 to 0.70 kappa) and coefficients for continuous data ranged from slight to moderate (ICC2,1 0.28-0.74). The standard error of measurement for cervical range of motion ranged from 5.3° to 9.9°, while the minimal detectable change ranged from 12.5° to 23.1°. This study is the first to report inter-rater reliability values for select components of the cervical examination in patients with NP performed 24-48 h after the initial examination. Reliability was considerably lower than in previous studies; thus, clinicians should consider how the passage of time may influence variability in examination findings over a 24- to 48-h period.
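Cohen's kappa, the agreement statistic used for the categorical variables above, corrects observed agreement for the agreement expected by chance from the raters' marginal distributions. A small self-contained sketch with hypothetical ratings (not the study's data):

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Unweighted Cohen's kappa for two raters' categorical ratings."""
    cats = sorted(set(r1) | set(r2))
    idx = {c: i for i, c in enumerate(cats)}
    table = np.zeros((len(cats), len(cats)))
    for a, b in zip(r1, r2):
        table[idx[a], idx[b]] += 1
    table /= table.sum()
    p_obs = np.trace(table)                       # observed agreement
    p_exp = table.sum(axis=1) @ table.sum(axis=0) # chance agreement
    return (p_obs - p_exp) / (1.0 - p_exp)

# Hypothetical positive/negative findings from two examiners on 10 patients:
rater1 = ["pos", "pos", "neg", "neg", "pos", "neg", "neg", "pos", "neg", "neg"]
rater2 = ["pos", "neg", "neg", "neg", "pos", "neg", "pos", "pos", "neg", "neg"]
kappa = cohens_kappa(rater1, rater2)
# 8/10 observed agreement, 0.52 expected by chance -> kappa ~ 0.58
```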
Error Evaluation of Methyl Bromide Aerodynamic Flux Measurements
Majewski, M.S.
1997-01-01
Methyl bromide volatilization fluxes were calculated for a tarped and a nontarped field using 2 and 4 hour sampling periods. These field measurements were averaged in 8, 12, and 24 hour increments to simulate longer sampling periods. The daily flux profiles were progressively smoothed, and the cumulative volatility losses increased by 20 to 30% with each longer sampling period. Error associated with the original flux measurements was determined from linear regressions of measured wind speed and air concentration as a function of height, and averaged approximately 50%. The high errors stemmed from long application times, which produced a nonuniform source strength, and from variable tarp permeability, which is influenced by temperature, moisture, and thickness. The increases in cumulative volatilization losses that resulted from longer sampling periods were within the experimental error of the flux determination method.
Selected error sources in resistance measurements on superconductors
NASA Astrophysics Data System (ADS)
García-Vázquez, Valentín; Pérez-Amaro, Neftalí; Canizo-Cabrera, A.; Cumplido-Espíndola, B.; Martínez-Hernández, R.; Abarca-Ramírez, M. A.
2001-08-01
In order to investigate the causes that produce some of the unwanted effects observed in the resistance versus temperature profiles, a variety of sources of error for resistance measurements in superconductors, using a standard four-probe configuration, have been studied. A piece of superconducting Y1Ba2Cu3O7-x ceramic material has been used as the test sample, and the resulting effects in both accuracy and precision in its temperature dependent resistance are reported here. Studied measurement error sources include thermal emf's, temperature sweep rates, Faraday currents, electrical-contact failures at the sample's surface, thermal contractions at mechanically attached instrumental wires, external electromagnetic fields, and slow sampling rates during data acquisition. Details of the experimental setup and its measurement error function are also given.
Spatial regression with covariate measurement error: A semiparametric approach.
Huque, Md Hamidul; Bondell, Howard D; Carroll, Raymond J; Ryan, Louise M
2016-09-01
Spatial data have become increasingly common in epidemiology and public health research thanks to advances in GIS (Geographic Information Systems) technology. In health research, for example, it is common for epidemiologists to incorporate geographically indexed data into their studies. In practice, however, the spatially defined covariates are often measured with error. Naive estimators of regression coefficients are attenuated if measurement error is ignored. Moreover, the classical measurement error theory is inapplicable in the context of spatial modeling because of the presence of spatial correlation among the observations. We propose a semiparametric regression approach to obtain bias-corrected estimates of regression parameters and derive their large sample properties. We evaluate the performance of the proposed method through simulation studies and illustrate using data on Ischemic Heart Disease (IHD). Both simulation and practical application demonstrate that the proposed method can be effective in practice.
ERIC Educational Resources Information Center
Leckie, George; Baird, Jo-Anne
2011-01-01
This study examined rater effects on essay scoring in an operational monitoring system from England's 2008 national curriculum English writing test for 14-year-olds. We fitted two multilevel models and analyzed: (1) drift in rater severity effects over time; (2) rater central tendency effects; and (3) differences in rater severity and central…
Poulos, Natalie S.; Pasch, Keryn E.
2015-01-01
Few studies of the food environment have collected primary data, and even fewer have reported reliability of the tool used. This study focused on the development of an innovative electronic data collection tool used to document outdoor food and beverage (FB) advertising and establishments near 43 middle and high schools in the Outdoor MEDIA Study. Tool development used GIS based mapping, an electronic data collection form on handheld devices, and an easily adaptable interface to efficiently collect primary data within the food environment. For the reliability study, two teams of data collectors documented all FB advertising and establishments within one half-mile of six middle schools. Inter-rater reliability was calculated overall and by advertisement or establishment category using percent agreement. A total of 824 advertisements (n=233), establishment advertisements (n=499), and establishments (n=92) were documented (range=8–229 per school). Overall inter-rater reliability of the developed tool ranged from 69–89% for advertisements and establishments. Results suggest that the developed tool is highly reliable and effective for documenting the outdoor FB environment. PMID:26022774
How Do Raters Judge Spoken Vocabulary?
ERIC Educational Resources Information Center
Li, Hui
2016-01-01
The aim of the study was to investigate how raters come to their decisions when judging spoken vocabulary. Segmental rating was introduced to quantify raters' decision-making process. It is hoped that this simulated study brings fresh insight to future methodological considerations with spoken data. Twenty trainee raters assessed five Chinese…
Analysis and improvement of gas turbine blade temperature measurement error
NASA Astrophysics Data System (ADS)
Gao, Shan; Wang, Lixin; Feng, Chi; Daniel, Ketui
2015-10-01
Gas turbine blade components are easily damaged; they also operate in harsh high-temperature, high-pressure environments over extended durations. Therefore, ensuring that the blade temperature remains within the design limits is very important. In this study, measurement errors in turbine blade temperatures were analyzed, taking into account detector lens contamination, the reflection of environmental energy from the target surface, the effects of the combustion gas, and the emissivity of the blade surface. In this paper, each of the above sources of measurement error is discussed, and an iterative computing method for calculating blade temperature is proposed.
Error-disturbance uncertainty relations in neutron spin measurements
NASA Astrophysics Data System (ADS)
Sponar, Stephan
2016-05-01
Heisenberg’s uncertainty principle, in its formulation as uncertainties intrinsic to any quantum system, is rigorously proven and has been demonstrated in various quantum systems. Nevertheless, Heisenberg’s original formulation of the uncertainty principle was given in terms of a reciprocal relation between the error of a position measurement and the disturbance thereby induced on a subsequent momentum measurement. However, a naive generalization of a Heisenberg-type error-disturbance relation to arbitrary observables is not valid. An alternative universally valid relation was derived by Ozawa in 2003. Though universally valid, Ozawa’s relation is not optimal. Recently, Branciard derived a tight error-disturbance uncertainty relation (EDUR), describing the optimal trade-off between error and disturbance under certain conditions. Here, we report a neutron-optical experiment that records the error of a spin-component measurement, as well as the disturbance caused on another spin component, to test EDURs. We demonstrate that Heisenberg’s original EDUR is violated, and that Ozawa’s and Branciard’s EDURs are valid, over a wide range of experimental parameters, and we also test the tightness of Branciard’s relation.
Error compensation research on the focal plane attitude measurement instrument
NASA Astrophysics Data System (ADS)
Zhou, Hongfei; Zhang, Feifan; Zhai, Chao; Zhou, Zengxiang; Liu, Zhigang; Wang, Jianping
2016-07-01
The surface accuracy of an astronomical telescope's focal plate is a key factor in precision stellar observation. Building on the six-DOF parallel focal plane attitude measurement instrument that had already been designed, the space attitude error compensation of the instrument was studied, in order to accurately measure the deformation and surface shape of the focal plane in different spatial attitudes.
Kunz, Michael
2015-01-01
In this paper, three analysis procedures for repeated correlated binary data with no a priori ordering of the measurements are described and subsequently investigated. An example of correlated binary data is the set of binary assessments of subjects obtained by several raters in the framework of a clinical trial. This topic is especially relevant when success criteria have to be defined for dedicated imaging trials involving several raters conducted for regulatory purposes. First, an analytical result on the expectation of the 'Majority rater' is presented when only the marginal distributions of the single raters are given. The paper provides a simulation study in which all three analysis procedures are compared for a particular setting. It turns out that in many cases 'Average rater' is associated with a gain in power. Settings were identified where 'Majority significant' has favorable properties. 'Majority rater' is in many cases difficult to interpret.
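The 'Majority rater' and 'Average rater' constructions can be illustrated with a toy simulation. Assuming three independent raters with a common marginal success probability p (a much simpler setting than the correlated one studied in the paper), the majority vote has expectation p³ + 3p²(1−p), which differs from the marginal p that the average preserves:

```python
import numpy as np

# Toy sketch: three independent raters each give a binary "success"
# assessment with marginal probability p = 0.7.
rng = np.random.default_rng(1)
p = 0.7
n_subjects = 200_000
ratings = rng.random((n_subjects, 3)) < p   # rows: subjects, cols: raters

majority = ratings.sum(axis=1) >= 2         # 'Majority rater' per subject
average = ratings.mean(axis=1)              # 'Average rater' per subject

maj_rate = majority.mean()   # ~ p**3 + 3*p**2*(1-p) = 0.784 for independence
avg_rate = average.mean()    # ~ p = 0.7, the common marginal
```

This gap between the majority expectation and the single-rater marginal is one reason the 'Majority rater' can be difficult to interpret; with correlated raters, as in the paper, the majority expectation also depends on the dependence structure, not just the marginals.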
Cumulative Measurement Errors for Dynamic Testing of Space Flight Hardware
NASA Technical Reports Server (NTRS)
Winnitoy, Susan
2012-01-01
measurements during hardware motion and contact. While performing dynamic testing of an active docking system, researchers found that the data from the motion platform, test hardware and two external measurement systems exhibited frame offsets and rotational errors. While the errors were relatively small when considering the motion scale overall, they substantially exceeded the individual accuracies for each component. After evaluating both the static and dynamic measurements, researchers found that the static measurements introduced significantly more error into the system than the dynamic measurements even though, in theory, the static measurement errors should be smaller than the dynamic. In several cases, the magnitude of the errors varied widely for the static measurements. Upon further investigation, researchers found the larger errors to be a consequence of hardware alignment issues, frame location and measurement technique whereas the smaller errors were dependent on the number of measurement points. This paper details and quantifies the individual and cumulative errors of the docking system and describes methods for reducing the overall measurement error. The overall quality of the dynamic docking tests for flight hardware verification was improved by implementing these error reductions.
Non-Gaussian error distribution of 7Li abundance measurements
NASA Astrophysics Data System (ADS)
Crandall, Sara; Houston, Stephen; Ratra, Bharat
2015-07-01
We construct the error distribution of 7Li abundance measurements for 66 observations (with error bars) used by Spite et al. (2012) that give A(Li) = 2.21 ± 0.065 (median and 1σ symmetrized error). This error distribution is somewhat non-Gaussian, with larger probability in the tails than is predicted by a Gaussian distribution. The 95.4% confidence limits are 3.0σ in terms of the quoted errors. We fit the data to four commonly used distributions: Gaussian, Cauchy, Student’s t and double exponential with the center of the distribution found with both weighted mean and median statistics. It is reasonably well described by a widened n = 8 Student’s t distribution. Assuming Gaussianity, the observed A(Li) is 6.5σ away from that expected from standard Big Bang Nucleosynthesis (BBN) given the Planck observations. Accounting for the non-Gaussianity of the observed A(Li) error distribution reduces the discrepancy to 4.9σ, which is still significant.
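The heavier-than-Gaussian tails described above can be illustrated by comparing empirical tail fractions. Using synthetic draws from an n = 8 Student's t distribution (the shape the abstract reports as fitting well), rather than the actual A(Li) sample:

```python
import numpy as np

# Compare the fraction of |deviations| beyond 2 for a Student's t (df = 8)
# sample against a Gaussian sample of the same size.
rng = np.random.default_rng(3)
n = 200_000

z_t = rng.standard_t(df=8, size=n)     # heavy-tailed error model
z_g = rng.standard_normal(n)           # Gaussian reference

frac_t = np.mean(np.abs(z_t) > 2.0)    # noticeably above the Gaussian value
frac_g = np.mean(np.abs(z_g) > 2.0)    # ~0.0455 for a Gaussian
```

The t sample puts roughly twice as much probability beyond 2 as the Gaussian, mirroring the abstract's finding that the quoted-error confidence limits are wider than Gaussian expectations.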
The effect of measurement error on surveillance metrics
Weaver, Brian Phillip; Hamada, Michael S.
2012-04-24
The purpose of this manuscript is to describe different simulation studies that CCS-6 has performed for the purpose of understanding the effects of measurement error on the surveillance metrics. We assume that the measured items come from a larger population of items. We denote by X the random variable associated with an item's value of an attribute of interest, and assume X ~ N(μ, σ²). This distribution represents the variability in the population of interest, and we wish to make inference on the parameters μ and σ or on some function of these parameters. When an item X is selected from the larger population, a measurement is made on some attribute of it. This measurement is made with error, and the true value of X is not observed. The rest of this section presents simulation results for different measurement cases encountered.
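A minimal sketch of this setup, with illustrative parameter values: items X ~ N(μ, σ²) are observed only through a measurement with additive error, which leaves the mean unbiased but inflates the naive estimate of the population standard deviation.

```python
import numpy as np

# Simulate the measurement model described above: Y = X + error, with
# X ~ N(mu, sigma^2). Parameter values are illustrative.
rng = np.random.default_rng(7)
mu, sigma, sigma_err = 10.0, 2.0, 1.0
n = 100_000

x = rng.normal(mu, sigma, n)             # true attribute values (unobserved)
y = x + rng.normal(0.0, sigma_err, n)    # observed measurements

mean_hat = y.mean()            # still an unbiased estimate of mu
sd_naive = y.std(ddof=1)       # estimates sqrt(sigma^2 + sigma_err^2), inflated
sd_corrected = np.sqrt(sd_naive**2 - sigma_err**2)  # requires known error variance
```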
Nonparametric Item Response Curve Estimation with Correction for Measurement Error
ERIC Educational Resources Information Center
Guo, Hongwen; Sinharay, Sandip
2011-01-01
Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally.…
GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS
Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...
Three Approximations of Standard Error of Measurement: An Empirical Approach.
ERIC Educational Resources Information Center
Garvin, Alfred D.
Three successively simpler formulas for approximating the standard error of measurement were derived by applying successively more simplifying assumptions to the standard formula based on the standard deviation and the Kuder-Richardson formula 20 estimate of reliability. The accuracy of each of these three formulas, with respect to the standard…
Explaining sexual harassment judgments: looking beyond gender of the rater.
O'Connor, Maureen; Gutek, Barbara A; Stockdale, Margaret; Geer, Tracey M; Melançon, Renée
2004-02-01
In two decades of research on sexual harassment, one finding that appears repeatedly is that gender of the rater influences judgments about sexual harassment such that women are more likely than men to label behavior as sexual harassment. Yet, sexual harassment judgments are complex, particularly in situations that culminate in legal proceedings. And this one variable, gender, may have been overemphasized to the exclusion of other situational and rater characteristic variables. Moreover, why do gender differences appear? As work by Wiener and his colleagues has done (R. L. Wiener et al., 2002; R. L. Wiener & L. Hurt, 2000; R. L. Wiener, L. Hurt, B. Russell, K. Mannen, & C. Gasper, 1997), this study attempts to look beyond gender to answer this question. In the studies reported here, raters (undergraduates and community adults) either read a written scenario or viewed a videotaped reenactment of a sexual harassment trial. The nature of the work environment was manipulated to see what, if any, effect the context would have on gender effects. Additionally, a number of rater characteristics beyond gender were measured, including ambivalent sexism attitudes of the raters, their judgments of complainant credibility, and self-referencing that might help explain rater judgments. Respondent gender, work environment, and community vs. student sample differences produced reliable differences in sexual harassment ratings in both the written and video trial versions of the study. The gender and sample differences in the sexual harassment ratings, however, are explained by a model which incorporates hostile sexism, perceptions of the complainant's credibility, and raters' own ability to put themselves in the complainant's position (self-referencing).
Comparing measurement errors for formants in synthetic and natural vowels
Shadle, Christine H.; Nam, Hosung; Whalen, D. H.
2016-01-01
The measurement of formant frequencies of vowels is among the most common measurements in speech studies, but measurements are known to be biased by the particular fundamental frequency (F0) exciting the formants. Approaches to reducing the errors were assessed in two experiments. In the first, synthetic vowels were constructed with five different first formant (F1) values and nine different F0 values; formant bandwidths, and higher formant frequencies, were constant. Input formant values were compared to manual measurements and automatic measures using the linear prediction coding-Burg algorithm, linear prediction closed-phase covariance, the weighted linear prediction-attenuated main excitation (WLP-AME) algorithm [Alku, Pohjalainen, Vainio, Laukkanen, and Story (2013). J. Acoust. Soc. Am. 134(2), 1295–1313], spectra smoothed cepstrally and by averaging repeated discrete Fourier transforms. Formants were also measured manually from pruned reassigned spectrograms (RSs) [Fulop (2011). Speech Spectrum Analysis (Springer, Berlin)]. All but WLP-AME and RS had large errors in the direction of the strongest harmonic; the smallest errors occur with WLP-AME and RS. In the second experiment, these methods were used on vowels in isolated words spoken by four speakers. Results for the natural speech show that F0 bias affects all automatic methods, including WLP-AME; only the formants measured manually from RS appeared to be accurate. In addition, RS coped better with weaker formants and glottal fry. PMID:26936555
Error Correction for Foot Clearance in Real-Time Measurement
NASA Astrophysics Data System (ADS)
Wahab, Y.; Bakar, N. A.; Mazalan, M.
2014-04-01
Mobility performance level, fall-related injuries, undetected disease, and aging stage can be detected through examination of the gait pattern. The gait pattern is normally directly related to the condition of the lower limbs, in addition to other significant factors. For that reason, the foot is the most important part for an in-situ gait analysis measurement system and thus directly affects the gait pattern. This paper reviews the development of an ultrasonic system with error correction, using an inertial measurement unit, for real-life measurement of foot clearance in gait analysis. The paper begins with the related literature, where the necessity of the measurement is introduced. This is followed by the methodology section and the problem and solution. Next, the paper explains the experimental setup for the error correction using the proposed instrumentation, with results and discussion. Finally, the paper shares the planned future work.
Semiparametric maximum likelihood for nonlinear regression with measurement errors.
Suh, Eun-Young; Schafer, Daniel W
2002-06-01
This article demonstrates semiparametric maximum likelihood estimation of a nonlinear growth model for fish lengths using imprecisely measured ages. Data on the species corvina reina, found in the Gulf of Nicoya, Costa Rica, consist of lengths and imprecise ages for 168 fish and precise ages for a subset of 16 fish. The statistical problem may therefore be classified as nonlinear errors-in-variables regression with internal validation data. Inferential techniques are based on ideas extracted from several previous works on semiparametric maximum likelihood for errors-in-variables problems. The illustration of the example clarifies practical aspects of the associated computational, inferential, and data analytic techniques.
Inter-tester Agreement in Refractive Error Measurements
Huang, Jiayan; Maguire, Maureen G.; Ciner, Elise; Kulp, Marjean T.; Quinn, Graham E.; Orel-Bixler, Deborah; Cyert, Lynn A.; Moore, Bruce; Ying, Gui-Shuang
2014-01-01
Purpose To determine the inter-tester agreement of refractive error measurements between lay and nurse screeners using the Retinomax Autorefractor (Retinomax) and the SureSight Vision Screener (SureSight). Methods Trained lay and nurse screeners measured refractive error in 1452 preschoolers (3- to 5-years old) using the Retinomax and the SureSight in a random order for screeners and instruments. Inter-tester agreement between lay and nurse screeners was assessed for sphere, cylinder and spherical equivalent (SE) using the mean difference and the 95% limits of agreement. The mean inter-tester difference (lay minus nurse) was compared between groups defined based on child’s age, cycloplegic refractive error, and the reading’s confidence number using analysis of variance. The limits of agreement were compared between groups using the Brown-Forsythe test. Inter-eye correlation was accounted for in all analyses. Results The mean inter-tester differences (95% limits of agreement) were −0.04 (−1.63, 1.54) Diopter (D) sphere, 0.00 (−0.52, 0.51) D cylinder, and −0.04 (−1.65, 1.56) D SE for the Retinomax; and 0.05 (−1.48, 1.58) D sphere, 0.01 (−0.58, 0.60) D cylinder, and 0.06 (−1.45, 1.57) D SE for the SureSight. For either instrument, the mean inter-tester differences in sphere and SE did not differ by the child’s age, cycloplegic refractive error, or the reading’s confidence number. However, for both instruments, the limits of agreement were wider when eyes had significant refractive error or the reading’s confidence number was below the manufacturer’s recommended value. Conclusions Among Head Start preschool children, trained lay and nurse screeners agree well in measuring refractive error using the Retinomax or the SureSight. Both instruments had similar inter-tester agreement in refractive error measurements independent of the child’s age. Significant refractive error and a reading with low confidence number were associated with worse inter
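The agreement statistics used above follow the standard Bland-Altman construction: the mean of the paired differences plus/minus 1.96 times their standard deviation gives the 95% limits of agreement. A small sketch with hypothetical paired sphere readings (not the study's data):

```python
import numpy as np

# Hypothetical paired sphere readings (D) from a lay and a nurse screener.
lay   = np.array([1.25, 0.50, -0.75, 2.00, 0.25, -1.50, 3.00, 0.75])
nurse = np.array([1.00, 0.75, -0.50, 2.25, 0.00, -1.25, 2.75, 1.00])

diff = lay - nurse                 # inter-tester differences (lay minus nurse)
mean_diff = diff.mean()            # mean inter-tester difference
sd_diff = diff.std(ddof=1)
loa_low = mean_diff - 1.96 * sd_diff   # lower 95% limit of agreement
loa_high = mean_diff + 1.96 * sd_diff  # upper 95% limit of agreement
```

The study additionally accounts for inter-eye correlation, which this per-pair sketch ignores.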
The Role of Measurement Error in Familiar Statistics
2006-06-01
Organizational Research Methods, Volume 9, Number 1, January 2006, pp. 99-112. Sage Publications, DOI 10.1177...
Fairus, Fariza Zainudin; Joseph, Leonard Henry; Omar, Baharudin; Ahmad, Johan; Sulaiman, Riza
2016-01-01
Background The understanding of vertical ground reaction force (VGRF) during walking and half-squatting is necessary, and such measurements are commonly utilised during the rehabilitation period. The purpose of this study was to establish the measurement reproducibility of VGRF, reporting the minimal detectable change (MDC), during walking and half-squatting activity among healthy male adults. Methods Fourteen male adults (mean (SD) age, 24.88 (5.24) years) were enlisted in this study. The VGRF was assessed using force plates embedded in a customised walking platform. Participants were required to carry out three trials each of gait and half-squat. Each participant completed the two measurement sessions within a day, approximately four hours apart. Results Measurements of VGRF between sessions showed excellent reliability for walking (ICC Left = 0.88, ICC Right = 0.89). High reliability of VGRF was also noted during the half-squat activity (ICC Left = 0.95, ICC Right = 0.90). The standard errors of measurement (SEM) of VGRF were less than 8.35 Nm/kg for the gait task and 4.67 Nm/kg for the half-squat task. Conclusion The equipment set-up and measurement procedure used to quantify VGRF during walking and half-squatting among healthy males displayed excellent reliability. Researchers should consider using this method to measure VGRF during functional performance assessment. PMID:27547111
#2 - An Empirical Assessment of Exposure Measurement Error ...
Background: • Differing degrees of exposure error across pollutants • Previous focus on quantifying and accounting for exposure error in single-pollutant models • Examine exposure errors for multiple pollutants and provide insights on the potential for bias and attenuation of effect estimates in single- and bi-pollutant epidemiological models. The National Exposure Research Laboratory (NERL) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of the EPA's mission to protect human health and the environment. HEASD's research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of the EPA's strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between, and characterize processes that link, source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for the EPA.
Error analysis for NMR polymer microstructure measurement without calibration standards.
Qiu, XiaoHua; Zhou, Zhe; Gobbi, Gian; Redwine, Oscar D
2009-10-15
We report an error analysis method for primary analytical methods in the absence of calibration standards. Quantitative (13)C NMR analysis of ethylene/1-octene (E/O) copolymers is given as an example. Because the method is based on a self-calibration scheme established by counting, it is a measure of accuracy rather than precision. We demonstrate that it is self-consistent and neither underestimates nor excessively overestimates the experimental errors. We also show that the method identified previously unknown systematic biases in an NMR instrument. The method can eliminate unnecessary data averaging to save valuable NMR resources. The accuracy estimate proposed is not unique to (13)C NMR spectroscopy of E/O copolymers but should be applicable to all other measurement systems where the accuracy of a subset of the measured responses can be established.
Confounding and exposure measurement error in air pollution epidemiology.
Sheppard, Lianne; Burnett, Richard T; Szpiro, Adam A; Kim, Sun-Young; Jerrett, Michael; Pope, C Arden; Brunekreef, Bert
2012-06-01
Studies in air pollution epidemiology may suffer from some specific forms of confounding and exposure measurement error. This contribution discusses these, mostly in the framework of cohort studies. Evaluation of potential confounding is critical in studies of the health effects of air pollution. The association between long-term exposure to ambient air pollution and mortality has been investigated using cohort studies in which subjects are followed over time with respect to their vital status. In such studies, control for individual-level confounders such as smoking is important, as is control for area-level confounders such as neighborhood socio-economic status. In addition, there may be spatial dependencies in the survival data that need to be addressed. These issues are illustrated using the American Cancer Society Cancer Prevention II cohort. Exposure measurement error is a challenge in epidemiology because inference about health effects can be incorrect when the measured or predicted exposure used in the analysis is different from the underlying true exposure. Air pollution epidemiology rarely if ever uses personal measurements of exposure for reasons of cost and feasibility. Exposure measurement error in air pollution epidemiology comes in various dominant forms, which are different for time-series and cohort studies. The challenges are reviewed and a number of suggested solutions are discussed for both study domains.
Reducing Errors by Use of Redundancy in Gravity Measurements
NASA Technical Reports Server (NTRS)
Kulikov, Igor; Zak, Michail
2004-01-01
A methodology for improving gravity-gradient measurement data exploits the constraints imposed upon the components of the gravity-gradient tensor by the conditions of integrability needed for reconstruction of the gravitational potential. These constraints are derived from the basic equation for the gravitational potential and from mathematical identities that apply to the gravitational potential and its partial derivatives with respect to spatial coordinates. Consider the gravitational potential in a Cartesian coordinate system {x₁, x₂, x₃}. If one measures all the components of the gravity-gradient tensor at all points of interest within a region of space in which one seeks to characterize the gravitational field, one obtains redundant information. One could utilize the constraints to select a minimum (that is, nonredundant) set of measurements from which the gravitational potential could be reconstructed. Alternatively, one could exploit the redundancy to reduce errors from noisy measurements. A convenient example is that of the selection of a minimum set of measurements to characterize the gravitational field at n³ points (where n is an integer) in a cube. Without the benefit of such a selection, it would be necessary to make 9n³ measurements because the gravity-gradient tensor has 9 components at each point. The problem of utilizing the redundancy to reduce errors in noisy measurements is an optimization problem: Given a set of noisy values of the components of the gravity-gradient tensor at the measurement points, one seeks a set of corrected values - a set that is optimum in that it minimizes some measure of error (e.g., the sum of squares of the differences between the corrected and noisy measurement values) while taking account of the fact that the constraints must apply to the exact values. The problem as thus posed leads to a vector equation that can be solved to obtain the corrected values.
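At a single measurement point, two of these constraints are simple to state: the gravity-gradient tensor is the Hessian of the potential, so it must be symmetric, and Laplace's equation makes it traceless in free space. The following is a minimal sketch of the least-squares correction idea restricted to these point-wise constraints (the full integrability constraints couple measurements at different points and are not modeled here; the numbers are illustrative):

```python
import numpy as np

def correct_gradient_tensor(g_noisy):
    """Project a noisy 3x3 gravity-gradient measurement onto the set of
    tensors satisfying the point-wise exact-field constraints:
    symmetry (the tensor is a Hessian of the potential) and zero trace
    (Laplace's equation in free space).  Averaging the off-diagonal pairs
    and removing the mean of the diagonal is the least-squares-closest
    correction in the Frobenius norm."""
    g_sym = 0.5 * (g_noisy + g_noisy.T)                   # enforce symmetry
    return g_sym - (np.trace(g_sym) / 3.0) * np.eye(3)    # enforce zero trace

# Illustrative noisy measurement of a tensor that should be symmetric and traceless
g = np.array([[ 1.02, 0.48, 0.11],
              [ 0.52, -0.49, 0.30],
              [ 0.09, 0.31, -0.51]])
g_hat = correct_gradient_tensor(g)
print(g_hat)
```

The corrected tensor satisfies both constraints exactly while moving each component as little as possible, which is the sum-of-squares criterion described in the abstract.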
Error and uncertainty in Raman thermal conductivity measurements
Thomas Edwin Beechem; Yates, Luke; Graham, Samuel
2015-04-22
We investigated error and uncertainty in Raman thermal conductivity measurements via finite element based numerical simulation of two geometries often employed -- Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter -- termed the Raman stress factor -- is derived to identify when stress effects will induce large levels of error. Together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.
Holm, Inger; Tveter, Anne Therese; Aulie, Vibeke Smith; Stuge, Britt
2013-02-01
The aim of the present study was to evaluate the intra- and inter-tester reliability of the Movement Assessment Battery for Children-Second Edition (MABC-2), age band 2. We wanted to analyze the collected data with adequate statistical methods to provide relevant recommendations for physical therapists interpreting changes in the context of daily clinical practice. Forty-five healthy children, 23 girls and 22 boys with a mean age of 8.7±0.7 years, participated in the study. The inter-tester procedures were performed the same day and the intra-tester procedures within a one- to two-week interval. The statistical methods used were the intra-class correlation coefficient (ICC), standard error of measurement (SEM), and smallest detectable change (SDC). The children had no failed items during the tests. The ICC values ranged from 0.23 to 0.76. The items "threading lace" and "one-board balance" showed the highest measurement errors for both intra- and inter-rater reliability. The SDC(90%) values were 9.7 and 18.5 for intra- and inter-rater reliability, respectively. The present study showed high intra- and inter-rater chance variation in the MABC-2, age band 2. A change of more than ±9.7 (intra-rater) or ±18.5 (inter-rater) on the total test score (TTS) is required to state, with 90% confidence, that a real change in a single individual has occurred. These findings may indicate that the MABC-2 is more suitable for diagnostic or clinical decision-making purposes than for evaluation of change over time.
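The SEM and SDC statistics used in this record follow standard formulas: SEM = SD·√(1 − ICC), and SDC at the 90% confidence level = 1.645·√2·SEM. A small sketch with hypothetical numbers (not taken from the study above):

```python
import math

def sem(sd, icc):
    """Standard error of measurement from the between-subject SD and reliability (ICC)."""
    return sd * math.sqrt(1.0 - icc)

def sdc(sem_value, z=1.645):
    """Smallest detectable change: z * sqrt(2) * SEM.
    z = 1.645 gives the 90% confidence level (SDC90)."""
    return z * math.sqrt(2.0) * sem_value

# Hypothetical values for illustration only
sd, icc = 10.0, 0.76
s = sem(sd, icc)
print(round(s, 2), round(sdc(s), 2))  # SEM ≈ 4.9, SDC90 ≈ 11.4
```

A measured change smaller than the SDC cannot be distinguished from measurement noise at the stated confidence level, which is the interpretation the abstract gives to the ±9.7 and ±18.5 thresholds.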
NASA Astrophysics Data System (ADS)
Liu, Wenwen; Tao, Tingting; Zeng, Hao
2016-10-01
Error separation is a key technology for the online measurement of spindle radial error motion or artifact form error, such as roundness and cylindricity. Three time-domain three-point error separation methods are proposed, based on solving for the minimum norm solution of a set of linear equations. Three laser displacement sensors are used to collect a set of discrete measurements, from which a group of linear measurement equations is derived according to the criterion of prior separation of form (PSF), prior separation of spindle error motion (PSM), or synchronous separation of both form and spindle error motion (SSFM). The work discusses the correlations between the angles of the three sensors in the measuring system, the rank of the coefficient matrix in the measurement equations, and harmonic distortions in the separation results; reveals the regularities of the first-order harmonic distortion; and recommends the applicable situation for each method. Theoretical research and large-scale simulations show that SSFM is the more precise method because of its lower distortion.
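The linear-algebra kernel of these methods, the minimum-norm solution of an underdetermined system, can be illustrated generically with NumPy; this sketch shows only that kernel, not the authors' full three-point formulation, and the system here is random rather than a sensor model:

```python
import numpy as np

# Underdetermined system A x = b (more unknowns than equations), as arises
# when stacking sensor measurement equations.  np.linalg.pinv and
# np.linalg.lstsq both return the minimum-norm least-squares solution.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))   # 4 equations, 6 unknowns (full row rank a.s.)
b = rng.standard_normal(4)

x_min = np.linalg.pinv(A) @ b               # minimum-norm solution
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.linalg.norm(x_min))
```

Any other exact solution differs from `x_min` by a null-space component orthogonal to it, so `x_min` has the smallest Euclidean norm of all solutions.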
Error in total ozone measurements arising from aerosol attenuation
NASA Technical Reports Server (NTRS)
Thomas, R. W. L.; Basher, R. E.
1979-01-01
A generalized least squares method for deducing both total ozone and aerosol extinction spectrum parameters from Dobson spectrophotometer measurements was developed. An error analysis applied to this system indicates that there is little advantage to additional measurements once a sufficient number of line pairs have been employed to solve for the selected detail in the attenuation model. It is shown that when there is a predominance of small particles (less than about 0.35 microns in diameter) the total ozone from the standard AD system is too high by about one percent. When larger particles are present the derived total ozone may be an overestimate or an underestimate but serious errors occur only for narrow polydispersions.
Efficient measurement error correction with spatially misaligned data
Szpiro, Adam A.; Sheppard, Lianne; Lumley, Thomas
2011-01-01
Association studies in environmental statistics often involve exposure and outcome data that are misaligned in space. A common strategy is to employ a spatial model such as universal kriging to predict exposures at locations with outcome data and then estimate a regression parameter of interest using the predicted exposures. This results in measurement error because the predicted exposures do not correspond exactly to the true values. We characterize the measurement error by decomposing it into Berkson-like and classical-like components. One correction approach is the parametric bootstrap, which is effective but computationally intensive since it requires solving a nonlinear optimization problem for the exposure model parameters in each bootstrap sample. We propose a less computationally intensive alternative termed the “parameter bootstrap” that only requires solving one nonlinear optimization problem, and we also compare bootstrap methods to other recently proposed methods. We illustrate our methodology in simulations and with publicly available data from the Environmental Protection Agency. PMID:21252080
Detecting correlated errors in state-preparation-and-measurement tomography
NASA Astrophysics Data System (ADS)
Jackson, Christopher; van Enk, S. J.
2015-10-01
Whereas in standard quantum-state tomography one estimates an unknown state by performing various measurements with known devices, and whereas in detector tomography one estimates the positive-operator-valued-measurement elements of a measurement device by subjecting to it various known states, we consider here the case of SPAM (state preparation and measurement) tomography where neither the states nor the measurement device are assumed known. For d-dimensional systems measured by d-outcome detectors, we find there are at most d²(d² − 1) "gauge" parameters that can never be determined by any such experiment, irrespective of the number of unknown states and unknown devices. For the case d = 2 we find gauge-invariant quantities that can be accessed directly experimentally and that can be used to detect and describe SPAM errors. In particular, we identify conditions whose violations detect the presence of correlations between SPAM errors. From the perspective of SPAM tomography, standard quantum-state tomography and detector tomography are protocols that fix the gauge parameters through the assumption that some set of fiducial measurements is known or that some set of fiducial states is known, respectively.
PROCESSING AND ANALYSIS OF THE MEASURED ALIGNMENT ERRORS FOR RHIC.
PILAT,F.; HEMMER,M.; PTITSIN,V.; TEPIKIAN,S.; TRBOJEVIC,D.
1999-03-29
All elements of the Relativistic Heavy Ion Collider (RHIC) have been installed in ideal survey locations, which are defined as the optimum locations of the fiducials with respect to the positions generated by the design. The alignment process included the presurvey of all elements which could affect the beams. During this procedure, special attention was paid to the precise determination of the quadrupole centers as well as the roll angles of the quadrupoles and dipoles. After installation the machine was surveyed, and the resulting as-built measured positions of the fiducials were stored and structured in the survey database. We describe how the alignment errors, inferred by comparison of ideal and as-built data, have been processed and analyzed by including them in the RHIC modeling software. The RHIC model, which also includes individual measured errors for all magnets in the machine and is automatically generated from databases, allows the study of the impact of the measured alignment errors on the machine.
Effects of vibration measurement error on remote sensing image restoration
NASA Astrophysics Data System (ADS)
Sun, Xuan; Wei, Zhang; Zhi, Xiyang
2016-10-01
Satellite vibrations lead to image motion blur. Since vibration isolators cannot fully suppress the influence of vibrations, image restoration methods are usually adopted, and the vibration characteristics of the imaging system are usually required as algorithm inputs for better restoration results, making the vibration measurement error strongly connected to the final outcome. If the measurement error surpasses a certain range, the restoration may not be implemented successfully. It is therefore important to test the applicable scope of restoration algorithms and control the vibrations within that range; on the other hand, if the algorithm is robust, the requirements for both the vibration isolator and the vibration detector can be lowered, reducing financial cost. In this paper, vibration-induced degradation is first analyzed, and on that basis the effects of measurement error on image restoration are further analyzed. The vibration-induced degradation is simulated using high-resolution satellite images, and the applicable working conditions of typical restoration algorithms are then tested with simulation experiments accordingly. The research carried out in this paper provides a valuable reference for future satellite designs that plan to implement restoration algorithms.
Toomey, Elaine; Coote, Susan
2013-01-01
This study investigated the between-rater reliability of the Berg Balance Scale (BBS), 6-Minute Walk test (6MW), and handheld dynamometry (HHD) in people with multiple sclerosis (MS). Previous studies that examined BBS and 6MW reliability in people with MS have not used more than two raters, or analyzed different mobility levels separately. The reliability of HHD has not been previously reported for people with MS. In this study, five physical therapists assessed eight people with MS using the BBS, 6MW, and HHD, resulting in 12 pairs of data. Data were analyzed using intraclass correlation coefficients (ICCs), Spearman correlation coefficients (SCCs), and Bland and Altman methods. The results suggest excellent agreement for the BBS (SCC = 0.95, mean difference between raters [d̄] = 2.08, standard error of measurement [SEM] = 1.77) and 6MW (ICC = 0.98, d̄ = 5.22 m, SEM = 24.76 m) when all mobility levels are analyzed together. Reliability is lower in less mobile people with MS (BBS SCC = 0.6, d̄ = -1.83; 6MW ICC = 0.95, d̄ = 20.04 m). Although the ICC and SCC results for HHD suggest good-to-excellent reliability (0.65-0.85), d̄ ranges up to 17.83 N, with SEM values as high as 40.95 N. While the small sample size is a limitation of this study, the preliminary evidence suggests strong agreement between raters for the BBS and 6MW and decreased agreement between raters for people with greater mobility problems. The mean differences between raters for HHD are probably too high for it to be applied in clinical practice.
Motion measurement errors and autofocus in bistatic SAR.
Rigling, Brian D; Moses, Randolph L
2006-04-01
This paper discusses the effect of motion measurement errors (MMEs) on measured bistatic synthetic aperture radar (SAR) phase history data that has been motion compensated to the scene origin. We characterize the effect of low-frequency MMEs on bistatic SAR images, and, based on this characterization, we derive limits on the allowable MMEs to be used as system specifications. Finally, we demonstrate that proper orientation of a bistatic SAR image during the image formation process allows application of monostatic SAR autofocus algorithms in postprocessing to mitigate image defocus.
Error reduction techniques for measuring long synchrotron mirrors
Irick, S.
1998-07-01
Many instruments and techniques are used for measuring long mirror surfaces. A Fizeau interferometer may be used to measure mirrors much longer than the interferometer aperture size by using grazing incidence at the mirror surface and analyzing the light reflected from a flat end mirror. Advantages of this technique are data acquisition speed and use of a common instrument. Disadvantages are reduced sampling interval, uncertainty of tangential position, and sagittal/tangential aspect ratio other than unity. Also, deep aspheric surfaces cannot be measured on a Fizeau interferometer without a specially made fringe nulling holographic plate. Other scanning instruments have been developed for measuring height, slope, or curvature profiles of the surface, but lack accuracy for very long scans required for X-ray synchrotron mirrors. The Long Trace Profiler (LTP) was developed specifically for long x-ray mirror measurement, and still outperforms other instruments, especially for aspheres. Thus, this paper focuses on error reduction techniques for the LTP.
More systematic errors in the measurement of power spectral density
NASA Astrophysics Data System (ADS)
Mack, Chris A.
2015-07-01
Power spectral density (PSD) analysis is an important part of understanding line-edge and linewidth roughness in lithography. But uncertainty in the measured PSD, both random and systematic, complicates interpretation. It is essential to understand and quantify the sources of the measured PSD's uncertainty and to develop mitigation strategies. Both analytical derivations and simulations of rough features are used to evaluate data window functions for reducing spectral leakage and to understand the impact of data detrending on biases in PSD, autocovariance function (ACF), and height-to-height covariance function measurement. A generalized Welch window was found to be best among the windows tested. Linear detrending for line-edge roughness measurement results in underestimation of the low-frequency PSD and errors in the ACF and height-to-height covariance function. Measuring multiple edges per scanning electron microscope image reduces this detrending bias.
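The window and detrending choices discussed above map directly onto the options of standard PSD estimators. A sketch using `scipy.signal.welch` on a synthetic rough edge (a stock Hann window stands in for the paper's generalized Welch window, which is not a library option; the signal is an illustrative random walk, not real line-edge data):

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
edge = np.cumsum(rng.standard_normal(4096))   # synthetic rough "line edge"

# The window reduces spectral leakage; detrend removes a per-segment fit.
# The paper notes that linear detrending biases the low-frequency PSD
# downward for line-edge roughness data, compared with mean removal only.
f, psd_lin = welch(edge, fs=1.0, window='hann', nperseg=1024, detrend='linear')
f, psd_con = welch(edge, fs=1.0, window='hann', nperseg=1024, detrend='constant')

print(psd_lin[1], psd_con[1])
```

Comparing the two spectra at the lowest nonzero frequencies makes the detrending bias visible directly: the linearly detrended estimate sits below the mean-removed one there, consistent with the underestimation the abstract describes.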
Scattering error corrections for in situ absorption and attenuation measurements.
McKee, David; Piskozub, Jacek; Brown, Ian
2008-11-24
Monte Carlo simulations are used to establish a weighting function that describes the collection of angular scattering for the WETLabs AC-9 reflecting tube absorption meter. The equivalent weighting function for the AC-9 attenuation sensor is found to be well approximated by a binary step function with photons scattered between zero and the collection half-width angle contributing to the scattering error and photons scattered at larger angles making zero contribution. A new scattering error correction procedure is developed that accounts for scattering collection artifacts in both absorption and attenuation measurements. The new correction method does not assume zero absorption in the near infrared (NIR), does not assume a wavelength independent scattering phase function, but does require simultaneous measurements of spectrally matched particulate backscattering. The new method is based on an iterative approach that assumes that the scattering phase function can be adequately modeled from estimates of particulate backscattering ratio and Fournier-Forand phase functions. It is applied to sets of in situ data representative of clear ocean water, moderately turbid coastal water and highly turbid coastal water. Initial results suggest significantly higher levels of attenuation and absorption than those obtained using previously published scattering error correction procedures. Scattering signals from each correction procedure have similar magnitudes but significant differences in spectral distribution are observed.
Quantifying soil CO2 respiration measurement error across instruments
NASA Astrophysics Data System (ADS)
Creelman, C. A.; Nickerson, N. R.; Risk, D. A.
2010-12-01
A variety of instrumental methodologies have been developed in an attempt to accurately measure the rate of soil CO2 respiration. Among the most commonly used are the static and dynamic chamber systems. The degree to which these methods misread or perturb the soil CO2 signal, however, is poorly understood. One source of error in particular is the introduction of lateral diffusion due to the disturbance of the steady-state CO2 concentrations. The addition of soil collars to the chamber system attempts to address this perturbation, but may induce additional errors from the increased physical disturbance. Using a numerical 3D soil-atmosphere diffusion model, we are undertaking a comprehensive comparative study of existing static and dynamic chambers, as well as a solid-state CTFD probe. Specifically, we are examining the 3D diffusion errors associated with each method and opportunities for correction. In this study, the impact of collar length, chamber geometry, chamber mixing and diffusion parameters on the magnitude of lateral diffusion around the instrument are quantified in order to provide insight into obtaining more accurate soil respiration estimates. Results suggest that while each method can approximate the true flux rate under idealized conditions, the associated errors can be of a high magnitude and may vary substantially in their sensitivity to these parameters. In some cases, factors such as the collar length and chamber exchange rate used are coupled in their effect on accuracy. Due to the widespread use of these instruments, it is critical that the nature of their biases and inaccuracies be understood in order to inform future development, ensure the accuracy of current measurements and to facilitate inter-comparison between existing datasets.
Propagation of radiosonde pressure sensor errors to ozonesonde measurements
NASA Astrophysics Data System (ADS)
Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.
2014-01-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.6 hPa in the free troposphere, with nearly a third > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~ 5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~ 30 km), can approach greater than ±10% (> 25% of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when the O3 profile is
Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements
NASA Technical Reports Server (NTRS)
Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.
2014-01-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde-ozonesonde launches from the Southern Hemisphere subtropics to Northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI-Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.6 hPa in the free troposphere, with nearly a third > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~5 percent of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96 percent of launches lie within ±5 percent O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~30 km) can approach greater than ±10 percent (> 25 percent of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when
Another Look at Inter-Rater Agreement. Research Report.
ERIC Educational Resources Information Center
Zwick, Rebecca
Most currently used measures of inter-rater agreement for the nominal case incorporate a correction for "chance agreement." The definition of chance agreement is not the same for all coefficients, however. Three chance-corrected coefficients are Cohen's Kappa; Scott's Pi; and the S index of Bennett, Goldstein, and Alpert, which has…
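The three coefficients differ only in how chance agreement is defined: Cohen's kappa uses the product of the two raters' marginal proportions, Scott's pi uses the squared mean marginals, and Bennett, Goldstein, and Alpert's S assumes uniform category use. A short sketch (illustrative counts, not data from the report):

```python
def chance_corrected(table):
    """Cohen's kappa, Scott's pi, and Bennett et al.'s S from a square
    contingency table of counts (rows = rater 1, columns = rater 2)."""
    n = sum(sum(row) for row in table)
    k = len(table)
    # Observed agreement: proportion of items on the diagonal.
    po = sum(table[i][i] for i in range(k)) / n
    row = [sum(table[i]) / n for i in range(k)]             # rater 1 marginals
    col = [sum(r[j] for r in table) / n for j in range(k)]  # rater 2 marginals
    pe_kappa = sum(row[i] * col[i] for i in range(k))       # product of marginals
    pe_pi = sum(((row[i] + col[i]) / 2) ** 2 for i in range(k))  # squared mean marginals
    pe_s = 1 / k                                            # uniform categories

    def adjust(pe):
        return (po - pe) / (1 - pe)

    return adjust(pe_kappa), adjust(pe_pi), adjust(pe_s)

kappa, pi, s = chance_corrected([[20, 5], [10, 15]])
```

With two categories and balanced marginals for one rater (as here), kappa and S coincide; pi differs because its chance term pools both raters' marginals.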
Data Reconciliation and Gross Error Detection: A Filtered Measurement Test
Himour, Y.
2008-06-12
Measured process data commonly contain inaccuracies because the measurements are obtained using imperfect instruments. In addition to random errors, one can expect systematic bias caused by miscalibrated instruments, as well as outliers caused by process peaks such as sudden power fluctuations. Data reconciliation is the adjustment of a set of process data, based on a model of the process, so that the derived estimates conform to natural laws. In this paper, we explore a predictor-corrector filter based on data reconciliation; a modified version of the measurement test is then combined with the studied filter to detect probable outliers that can affect process measurements. The strategy presented is tested using a dynamic simulation of an inverted pendulum.
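The core of linear data reconciliation and the measurement test can be sketched in a few lines. This is a toy one-constraint mass balance with illustrative numbers, not the paper's inverted-pendulum system: measurements are projected onto the constraint by weighted least squares, and the standardized adjustments form the measurement-test statistics.

```python
import numpy as np

# Constraint: x1 - x2 - x3 = 0 (a single mass balance).
A = np.array([[1.0, -1.0, -1.0]])
x = np.array([10.9, 6.1, 4.2])           # raw, slightly inconsistent measurements
V = np.diag([0.25, 0.16, 0.16])          # measurement error covariance

# Weighted least-squares reconciliation: project x onto A x = 0.
S = A @ V @ A.T
a = V @ A.T @ np.linalg.solve(S, A @ x)  # adjustment vector
x_hat = x - a                            # reconciled estimates satisfy A x_hat = 0

# Measurement test: standardized adjustments; a large |z_i| (e.g. > 1.96)
# flags the corresponding measurement as a probable gross error.
W = V @ A.T @ np.linalg.solve(S, A @ V)  # covariance of the adjustments
z = a / np.sqrt(np.diag(W))
```

With a single constraint all |z_i| are equal; with more constraints the test can point at individual suspect sensors.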
Calvo, Roque; D'Amato, Roberto; Gómez, Emilio; Domingo, Rosario
2016-09-29
The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty of measurement results difficult. Therefore, error compensation is not standardized, unlike for other, simpler instruments. Detailed coordinate error compensation models generally treat the CMM as a rigid body and require a detailed mapping of the CMM's behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length errors by axis and integrates it into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement, with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included.
Weight-Based Classification of Raters and Rater Cognition in an EFL Speaking Test
ERIC Educational Resources Information Center
Cai, Hongwen
2015-01-01
This study is an attempt to classify raters according to their weighting patterns and explore systematic differences between rater types in the rating process. In the context of an EFL speaking test, 126 raters were classified into three types--form-oriented, balanced, and content-oriented--through cluster analyses of their weighting patterns…
Variance Estimation of Nominal-Scale Inter-Rater Reliability with Random Selection of Raters
ERIC Educational Resources Information Center
Gwet, Kilem Li
2008-01-01
Most inter-rater reliability studies using nominal scales suggest the existence of two populations of inference: the population of subjects (collection of objects or persons to be rated) and that of raters. Consequently, the sampling variance of the inter-rater reliability coefficient can be seen as a result of the combined effect of the sampling…
Effects of Marking Method and Rater Experience on ESL Essay Scores and Rater Performance
ERIC Educational Resources Information Center
Barkaoui, Khaled
2011-01-01
This study examined the effects of marking method and rater experience on ESL (English as a Second Language) essay test scores and rater performance. Each of 31 novice and 29 experienced raters rated a sample of ESL essays both holistically and analytically. Essay scores were analysed using a multi-faceted Rasch model to compare test-takers'…
Rater Types in Writing Performance Assessments: A Classification Approach to Rater Variability
ERIC Educational Resources Information Center
Eckes, Thomas
2008-01-01
Research on rater effects in language performance assessments has provided ample evidence for a considerable degree of variability among raters. Building on this research, I advance the hypothesis that experienced raters fall into types or classes that are clearly distinguishable from one another with respect to the importance they attach to…
Validation and Error Characterization for the Global Precipitation Measurement
NASA Technical Reports Server (NTRS)
Bidwell, Steven W.; Adams, W. J.; Everett, D. F.; Smith, E. A.; Yuter, S. E.
2003-01-01
The Global Precipitation Measurement (GPM) is an international effort to increase scientific knowledge on the global water cycle with specific goals of improving the understanding and the predictions of climate, weather, and hydrology. These goals will be achieved through several satellites specifically dedicated to GPM along with the integration of numerous meteorological satellite data streams from international and domestic partners. The GPM effort is led by the National Aeronautics and Space Administration (NASA) of the United States and the National Space Development Agency (NASDA) of Japan. In addition to the spaceborne assets, international and domestic partners will provide ground-based resources for validating the satellite observations and retrievals. This paper describes the validation effort of Global Precipitation Measurement to provide quantitative estimates on the errors of the GPM satellite retrievals. The GPM validation approach will build upon the research experience of the Tropical Rainfall Measuring Mission (TRMM) retrieval comparisons and its validation program. The GPM ground validation program will employ instrumentation, physical infrastructure, and research capabilities at Supersites located in important meteorological regimes of the globe. NASA will provide two Supersites, one in a tropical oceanic and the other in a mid-latitude continental regime. GPM international partners will provide Supersites for other important regimes. Those objectives or regimes not addressed by Supersites will be covered through focused field experiments. This paper describes the specific errors that GPM ground validation will address, quantify, and relate to the GPM satellite physical retrievals. GPM will attempt to identify the source of errors within retrievals including those of instrument calibration, retrieval physical assumptions, and algorithm applicability. With the identification of error sources, improvements will be made to the respective calibration
Sedrez, Juliana A.; Candotti, Cláudia T.; Rosa, Maria I. Z.; Medeiros, Fernanda S.; Marques, Mariana T.; Loss, Jefferson F.
2016-01-01
Introduction: The early evaluation of the spine in children is desirable because it is at this stage of development that the greatest changes in the body structures occur. Objective: To determine the test-retest, intra- and inter-rater reliability of the Flexicurve instrument for the evaluation of spinal curvatures in children. Method: Forty children ranging from 5 to 15 years of age were evaluated by two independent evaluators using the Flexicurve to model the spine. Agreement was evaluated using intraclass correlation coefficients (ICC), the standard error of measurement (SEM), and the minimal detectable change (MDC). Results: In relation to thoracic kyphosis, the Flexicurve showed excellent correlation in terms of test-retest reliability (ICC(2,2)=0.87) and moderate correlation in terms of intra- (ICC(2,2)=0.68) and inter-rater reliability (ICC(2,2)=0.72). In relation to lumbar lordosis, it showed moderate correlation in terms of test-retest reliability (ICC(2,2)=0.66), intra-rater reliability (ICC(2,2)=0.50), and inter-rater reliability (ICC(2,2)=0.56). Conclusion: This evaluation of the reliability of the Flexicurve allows its use in school screening. However, to monitor spinal curvatures in the sagittal plane in children, complementary clinical measures are necessary. Further studies are required to investigate the concurrent validity of the instrument in order to identify its diagnostic capacity. PMID:26786078
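The SEM and MDC reported in studies like this one follow from the ICC and the sample standard deviation by standard formulas. A small sketch with illustrative numbers, not the study's data:

```python
import math

def sem_mdc(sd, icc, z=1.96):
    """Standard error of measurement and minimal detectable change.

    SEM = SD * sqrt(1 - ICC); MDC = z * SEM * sqrt(2), where sqrt(2)
    accounts for error in both the test and the retest measurement and
    z = 1.96 gives the 95% confidence level (MDC95).
    """
    sem = sd * math.sqrt(1 - icc)
    mdc = z * sem * math.sqrt(2)
    return sem, mdc

# Hypothetical sample: SD of 5.0 degrees, ICC of 0.87.
sem, mdc = sem_mdc(sd=5.0, icc=0.87)
```

A change smaller than the MDC cannot be distinguished from measurement noise at the chosen confidence level.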
Lidar Uncertainty Measurement Experiment (LUMEX) - Understanding Sampling Errors
NASA Astrophysics Data System (ADS)
Choukulkar, A.; Brewer, W. A.; Banta, R. M.; Hardesty, M.; Pichugina, Y.; Senff, Christoph; Sandberg, S.; Weickmann, A.; Carroll, B.; Delgado, R.; Muschinski, A.
2016-06-01
Coherent Doppler LIDAR (Light Detection and Ranging) has been widely used to provide measurements of several boundary layer parameters such as profiles of wind speed, wind direction, vertical velocity statistics, mixing layer heights and turbulent kinetic energy (TKE). An important aspect of providing this wide range of meteorological data is to properly characterize the uncertainty associated with these measurements. With the above intent in mind, the Lidar Uncertainty Measurement Experiment (LUMEX) was conducted at Erie, Colorado during the period June 23rd to July 13th, 2014. The major goals of this experiment were the following:
This experiment brought together 5 Doppler lidars, both commercial and research grade, for a period of three weeks for a comprehensive intercomparison study. The Doppler lidars were deployed at the Boulder Atmospheric Observatory (BAO) site in Erie, site of a 300 m meteorological tower. This tower was instrumented with six sonic anemometers at levels from 50 m to 300 m with 50 m vertical spacing. A brief overview of the experiment outline and deployment will be presented. Results from the sampling error analysis and its implications on scanning strategy will be discussed.
Propagation of radiosonde pressure sensor errors to ozonesonde measurements
NASA Astrophysics Data System (ADS)
Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.
2013-08-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this, a total of 501 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2006-2013 from both historical and campaign-based intensive stations. Three types of electrochemical concentration cell (ECC) ozonesonde manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) and five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80 and RS92) are analyzed to determine the magnitude of the pressure offset and the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.7 hPa in the free troposphere, with nearly a quarter > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (98% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors in the 7-15 hPa layer (29-32 km), a region critical for detection of long-term O3 trends, can approach greater than ±10% (>25% of launches that reach 30 km exceed this threshold). Comparisons of total column O3 yield average differences of +1.6 DU (-1.1 to +4.9 DU 10th to 90th percentiles) when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of +0.1 DU (-1.1 to +2.2 DU) when the O3 profile is integrated to 10 hPa with subsequent addition of the O3 climatology above 10 hPa. The RS92 radiosondes are clearly distinguishable
Simulation of error in optical radar range measurements.
Der, S; Redman, B; Chellappa, R
1997-09-20
We describe a computer simulation of atmospheric and target effects on the accuracy of range measurements using pulsed laser radars with p-i-n or avalanche photodiodes for direct detection. The computer simulation produces simulated images as a function of a wide variety of atmospheric, target, and sensor parameters for laser radars with range accuracies smaller than the pulse width. The simulation allows arbitrary target geometries and simulates speckle, turbulence, and near-field and far-field effects. We compare simulation results with actual range error data collected in field tests.
Examples of Detecting Measurement Errors with the QCRad VAP
Shi, Yan; Long, Charles N.
2005-07-30
The QCRad VAP is being developed to assess the data quality for the ARM radiation data collected at the Extended and ARCS facilities. In this study, we processed one year of radiation data, chosen at random, for each of the twenty SGP Extended Facilities to aid in determining the user configurable limits for the SGP sites. By examining yearly summary plots of the radiation data and the various test limits, we can show that the QCRad VAP is effective in identifying and detecting many different types of measurement errors. Examples of the analysis results will be shown in this poster presentation.
Examiner error in curriculum-based measurement of oral reading.
Cummings, Kelli D; Biancarosa, Gina; Schaper, Andrew; Reed, Deborah K
2014-08-01
Although curriculum-based measures of oral reading (CBM-R) have strong technical adequacy, there is still reason to believe that student performance may be influenced by factors of the testing situation, such as errors examiners make in administering and scoring the test. This study examined the construct-irrelevant variance introduced by examiners using a cross-classified multilevel model. We sought to determine the extent of variance in student CBM-R scores attributable to examiners and, if present, the extent to which it was moderated by students' grade level and English learner (EL) status. Fit indices indicated that a cross-classified random effects model (CCREM) best fit the data, with measures nested within students, students nested within schools, and examiners crossing schools. Intraclass correlations of the CCREM revealed that roughly 16% of the variance in student CBM-R scores was attributable to examiners. The remaining variance was associated with the measurement level (3.59%), with students (75.23%), and with schools (5.21%). Results were moderated by grade level but not by EL status. The discussion addresses the implications of this error for low-stakes and high-stakes decisions about students, for teacher evaluation systems, and for hypothesis testing in reading intervention research.
A Bayesian Measurement Error Model for Misaligned Radiographic Data
Lennox, Kristin P.; Glascoe, Lee G.
2013-09-06
An understanding of the inherent variability in micro-computed tomography (micro-CT) data is essential to tasks such as statistical process control and the validation of radiographic simulation tools. The data present unique challenges to variability analysis due to the relatively low resolution of radiographs, and also due to minor variations from run to run which can result in misalignment or magnification changes between repeated measurements of a sample. Positioning changes artificially inflate the variability of the data in ways that mask true physical phenomena. We present a novel Bayesian nonparametric regression model that incorporates both additive and multiplicative measurement error in addition to heteroscedasticity to address this problem. We also use this model to assess the effects of sample thickness and sample position on measurement variability for an aluminum specimen. Supplementary materials for this article are available online.
Measurements of Aperture Averaging on Bit-Error-Rate
NASA Technical Reports Server (NTRS)
Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert
2005-01-01
We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on bit error rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.
Paulsen, Robert; Gallu, Tommaso; Gilkey, David; Reiser, Raoul; Murgia, Lelia; Rosecrance, John
2015-11-01
The purpose of this study was to characterize the inter-rater reliability of two physical exposure assessment methods for the upper extremity, the Strain Index (SI) and the Occupational Repetitive Actions (OCRA) Checklist. These methods are commonly used in occupational health studies and by occupational health practitioners. Seven raters used the SI and OCRA Checklist to assess task-level physical exposures to the upper extremity of workers performing 21 cheese manufacturing tasks. Inter-rater reliability was characterized using a single-measure, agreement-based intraclass correlation coefficient (ICC). Inter-rater reliability of SI assessments was moderate to good (ICC = 0.59, 95% CI: 0.45-0.73), a finding similar to prior studies. Inter-rater reliability of OCRA Checklist assessments was excellent (ICC = 0.80, 95% CI: 0.70-0.89). Task complexity had a small, but non-significant, effect on the inter-rater reliability of SI and OCRA Checklist scores. Both the SI and OCRA Checklist assessments possess adequate inter-rater reliability for the purposes of occupational health research and practice. The OCRA Checklist inter-rater reliability scores were among the highest reported in the literature for semi-quantitative physical exposure assessment tools of the upper extremity. The OCRA Checklist, however, required more training time and more time to conduct the risk assessments compared with the SI.
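A single-measure, absolute-agreement ICC (commonly written ICC(2,1), two-way random effects) can be computed from the two-way ANOVA mean squares. A minimal numpy sketch with illustrative ratings, not the study's data:

```python
import numpy as np

def icc_2_1(x):
    """Single-measure, absolute-agreement ICC(2,1) for an
    n-subjects x k-raters matrix of ratings (two-way random effects)."""
    n, k = x.shape
    grand = x.mean()
    row_m = x.mean(axis=1)   # per-subject means
    col_m = x.mean(axis=0)   # per-rater means
    msr = k * ((row_m - grand) ** 2).sum() / (n - 1)   # between-subjects MS
    msc = n * ((col_m - grand) ** 2).sum() / (k - 1)   # between-raters MS
    sse = ((x - row_m[:, None] - col_m[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                    # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Because ICC(2,1) measures absolute agreement, a constant offset between raters lowers it even when their rankings agree perfectly.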
Plain film measurement error in acute displaced midshaft clavicle fractures
Archer, Lori Anne; Hunt, Stephen; Squire, Daniel; Moores, Carl; Stone, Craig; O’Dea, Frank; Furey, Andrew
2016-01-01
Background Clavicle fractures are common and optimal treatment remains controversial. Recent literature suggests operative fixation of acute displaced mid-shaft clavicle fractures (DMCFs) shortened more than 2 cm improves outcomes. We aimed to identify correlation between plain film and computed tomography (CT) measurement of displacement and the inter- and intraobserver reliability of repeated radiographic measurements. Methods We obtained radiographs and CT scans of patients with acute DMCFs. Three orthopedic staff and 3 residents measured radiographic displacement at time zero and 2 weeks later. The CT measurements identified absolute shortening in 3 dimensions (by subtracting the length of the fractured from the intact clavicle). We then compared shortening measured on radiographs and shortening measured in 3 dimensions on CT. Interobserver and intraobserver reliability were calculated. Results We reviewed the fractures of 22 patients. Bland–Altman repeatability coefficient calculations indicated that radiograph and CT measurements of shortening could not be correlated owing to an unacceptable amount of measurement error (6 cm). Interobserver reliability for plain radiograph measurements was excellent (Cronbach α = 0.90). Likewise, intraobserver reliabilities for plain radiograph measurements as calculated with paired t tests indicated excellent correlation (p > 0.05 in all but 1 observer [p = 0.04]). Conclusion To establish shortening as an indication for DMCF fixation, reliable measurement tools are required. The low correlation between plain film and CT measurements we observed suggests further research is necessary to establish what imaging modality reliably predicts shortening. Our results indicate weak correlation between radiograph and CT measurement of acute DMCF shortening. PMID:27438054
Sia, Isaac; Carvajal, Pamela; Carnaby-Mann, Giselle D; Crary, Michael A
2012-06-01
Video fluoroscopy is commonly used in the study of swallowing kinematics. However, various procedures used in linear measurements obtained from video fluoroscopy may contribute to increased variability or measurement error. This study evaluated the influence of calibration referent and image rotation on measurement variability for hyoid and laryngeal displacement during swallowing. Inter- and intrarater reliabilities were also estimated for hyoid and laryngeal displacement measurements across conditions. The use of different calibration referents did not contribute significantly to variability in measures of hyoid and laryngeal displacement but image rotation affected horizontal measures for both structures. Inter- and intrarater reliabilities were high. Using the 95% confidence interval as the error index, measurement error was estimated to range from 2.48 to 3.06 mm. These results address procedural decisions for measuring hyoid and laryngeal displacement in video fluoroscopic swallowing studies.
Agreement between Two Independent Groups of Raters
ERIC Educational Resources Information Center
Vanbelle, Sophie; Albert, Adelin
2009-01-01
We propose a coefficient of agreement to assess the degree of concordance between two independent groups of raters classifying items on a nominal scale. This coefficient, defined on a population-based model, extends the classical Cohen's kappa coefficient for quantifying agreement between two raters. Weighted and intraclass versions of the…
Accuracy of Surgery Clerkship Performance Raters.
ERIC Educational Resources Information Center
Littlefield, John H.; And Others
1991-01-01
Interrater reliability in numerical ratings of clerkship performance (n=1,482 students) in five surgery programs was studied. Raters were classified as accurate or moderately or significantly stringent or lenient. Results indicate that increasing the proportion of accurate raters would substantially improve the precision of class rankings. (MSE)
Effects of Assigning Raters to Items
ERIC Educational Resources Information Center
Sykes, Robert C.; Ito, Kyoko; Wang, Zhen
2008-01-01
Student responses to a large number of constructed response items in three Math and three Reading tests were scored on two occasions using three ways of assigning raters: single reader scoring, a different reader for each response (item-specific), and three readers each scoring a rater item block (RIB) containing approximately one-third of a…
Rater Variables Associated with ITER Ratings
ERIC Educational Resources Information Center
Paget, Michael; Wu, Caren; McIlwrick, Joann; Woloschuk, Wayne; Wright, Bruce; McLaughlin, Kevin
2013-01-01
Advocates of holistic assessment consider the ITER a more authentic way to assess performance. But this assessment format is subjective and, therefore, susceptible to rater bias. Here our objective was to study the association between rater variables and ITER ratings. In this observational study our participants were clerks at the University of…
Anderson, K.K.
1994-05-01
Measurement error modeling is a statistical approach to the estimation of unknown model parameters which takes into account the measurement errors in all of the data. Approaches which ignore the measurement errors in so-called independent variables may yield inferior estimates of unknown model parameters. At the same time, experiment-wide variables (such as physical constants) are often treated as known without error, when in fact they were produced from prior experiments. Realistic assessments of the associated uncertainties in the experiment-wide variables can be utilized to improve the estimation of unknown model parameters. A maximum likelihood approach to incorporate measurements of experiment-wide variables and their associated uncertainties is presented here. An iterative algorithm is presented which yields estimates of unknown model parameters and their estimated covariance matrix. Further, the algorithm can be used to assess the sensitivity of the estimates and their estimated covariance matrix to the given experiment-wide variables and their associated uncertainties.
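The report's maximum-likelihood algorithm is not reproduced here, but the basic errors-in-variables idea it builds on can be illustrated with a total-least-squares line fit, which, unlike ordinary least squares, accounts for measurement error in the "independent" variable as well. A generic sketch under that assumption (`orthogonal_fit` is an illustrative helper, not the report's estimator):

```python
import numpy as np

def orthogonal_fit(x, y):
    """Errors-in-variables straight-line fit (total least squares).

    Minimizes orthogonal distances to the line, treating x and y as
    equally noisy; the line's normal vector is the right singular
    vector for the smallest singular value of the centered data.
    """
    X = np.column_stack([x - x.mean(), y - y.mean()])
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    a, b = vt[-1]                       # normal vector (a, b) of the line
    slope = -a / b
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

slope, intercept = orthogonal_fit(np.array([0.0, 1, 2, 3]),
                                  np.array([1.0, 3, 5, 7]))
```

Ignoring the error in x (plain OLS) biases the slope toward zero; errors-in-variables estimators avoid that attenuation, which is the inferiority the abstract refers to.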
Fell, Matthew; Meirte, Jill; Anthonissen, Mieke; Maertens, Koen; Pleat, Jonathon; Moortgat, Peter
2016-03-01
Objective scar assessment tools were designed to help identify problematic scars and direct clinical management. Their use has been restricted by their measurement of a single scar property and the bulky size of equipment. The Scarbase Duo® was designed to assess both trans-epidermal water loss (TEWL) and colour of a burn scar whilst being compact and easy to use. Twenty patients with a burn scar were recruited and measurements taken using the Scarbase Duo® by two observers. The Scarbase Duo® measures TEWL via an open-chamber system and undertakes colorimetry via narrow-band spectrophotometry, producing values for relative erythema and melanin pigmentation. Validity was assessed by comparing the Scarbase Duo® against the Dermalab® and the Minolta Chromameter® respectively for TEWL and colorimetry measurements. The intra-class correlation coefficient (ICC) was used to assess reliability with standard error of measurement (SEM) used to assess reproducibility of measurements. The Pearson correlation coefficient (r) was used to assess the convergent validity. The Scarbase Duo® TEWL mode had excellent reliability when used on scars for both intra- (ICC=0.95) and inter-rater (ICC=0.96) measurements with moderate SEM values. The erythema component of the colorimetry mode showed good reliability for use on scars for both intra- (ICC=0.81) and inter-rater (ICC=0.83) measurements with low SEM values. Pigmentation values showed excellent reliability on scar tissue for both intra- (ICC=0.97) and inter-rater (ICC=0.97) measurements with moderate SEM values. The Scarbase Duo® TEWL function had excellent correlation with the Dermalab® (r=0.93) whilst the colorimetry erythema value had moderate correlation with the Minolta Chromameter (r=0.72). The Scarbase Duo® is a reliable and objective scar assessment tool, which is specifically designed for burn scars. However, for clinical use, standardised measurement conditions are recommended.
Correlates of Halo Error in Teacher Evaluation.
ERIC Educational Resources Information Center
Moritsch, Brian G.; Suter, W. Newton
1988-01-01
An analysis of 300 undergraduate psychology student ratings of teachers was undertaken to assess the magnitude of halo error and a variety of rater, ratee, and course characteristics. The raters' halo errors were significantly related to student effort in the course, previous experience with the instructor, and class level. (TJH)
Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements
NASA Technical Reports Server (NTRS)
Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.
2014-01-01
This presentation discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 2x4 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from CubeSats, unmanned air vehicles (UAV), and commercial aircraft.
Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements
NASA Technical Reports Server (NTRS)
Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.
2014-01-01
This paper discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 4x2 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from cubesats, unmanned air vehicles (UAV), and commercial aircraft.
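EVM itself is a simple statistic of the received constellation: the RMS error vector relative to the RMS ideal symbol power. A hedged sketch with synthetic QPSK symbols, not the measured link data (`evm_percent` is an illustrative helper):

```python
import numpy as np

def evm_percent(received, ideal):
    """RMS error vector magnitude as a percentage of RMS ideal power."""
    err_power = np.mean(np.abs(received - ideal) ** 2)
    ref_power = np.mean(np.abs(ideal) ** 2)
    return 100.0 * np.sqrt(err_power / ref_power)

# Unit-power QPSK symbols plus complex Gaussian noise of std 0.05 per axis,
# giving an expected EVM of sqrt(2 * 0.05**2) = 7.07%.
rng = np.random.default_rng(0)
ideal = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j] * 250) / np.sqrt(2)
noise = 0.05 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
evm = evm_percent(ideal + noise, ideal)
```

Lower EVM means the received symbols sit closer to their ideal constellation points, so EVM serves as a compact link-quality figure alongside bit error rate.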
Sampling errors in the measurement of rain and hail parameters
NASA Technical Reports Server (NTRS)
Gertzman, H. S.; Atlas, D.
1977-01-01
Attention is given to a general derivation of the fractional standard deviation (FSD) of any integrated property X such that X(D) = cD^n. This work extends that of Joss and Waldvogel (1969). The equation is applicable to measuring integrated properties of cloud, rain, or hail populations (such as water content, precipitation rate, kinetic energy, or radar reflectivity) that are subject to statistical sampling errors arising from the Poisson-distributed fluctuations of the number of particles sampled in each size interval; the variance of the integral parameter is the weighted sum of the per-interval variances, each in proportion to its contribution to the parameter being measured. Universal curves are presented for the exponential size distribution, permitting FSD estimation for any parameter from n = 0 to n = 6. The equations and curves also permit corrections for finite upper limits in the size spectrum and a realistic fall-speed law.
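The Poisson-sampling result summarized above can be sketched numerically: for counts N_i with mean mu_i in each size bin, X = c * sum(D_i^n * N_i) has variance c^2 * sum(D_i^(2n) * mu_i), so the FSD is sqrt(sum(mu_i * D_i^(2n))) / sum(mu_i * D_i^n), in which c cancels. The sketch below (all parameter values are illustrative assumptions, not taken from the paper) evaluates this for an exponential size distribution:

```python
import numpy as np

def fsd_exponential(n, lam=1.0, n0=100.0, d_max=20.0, bins=2000):
    """FSD of the n-th moment of an exponential size spectrum
    N(D) = n0 * exp(-lam * D), truncated at D = d_max."""
    edges = np.linspace(0.0, d_max, bins + 1)
    d = 0.5 * (edges[:-1] + edges[1:])            # bin mid-diameters
    mu = n0 * np.exp(-lam * d) * np.diff(edges)   # expected count per bin
    mean_x = np.sum(mu * d**n)                    # E[X] up to the factor c
    var_x = np.sum(mu * d**(2 * n))               # Poisson: Var(N_i) = mu_i
    return np.sqrt(var_x) / mean_x                # c cancels in the ratio

for n in range(7):   # n = 0 (number) up to n = 6 (radar reflectivity)
    print(n, round(float(fsd_exponential(n)), 3))
```

Consistent with the universal curves described above, the FSD grows rapidly with the moment order n, because high moments are dominated by the rare largest particles.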
Error correction for Moiré based creep measurement system
NASA Astrophysics Data System (ADS)
Liao, Yi; Harding, Kevin G.; Nieters, Edward J.; Tait, Robert W.; Hasz, Wayne C.; Piche, Nicole
2014-05-01
Due to the high temperatures and stresses present in the high-pressure section of a gas turbine, the airfoils experience creep, or radial stretching. Manufacturers are now putting in place condition-based maintenance programs in which the condition of individual components is assessed to determine their remaining lives. To accurately track this creep effect and predict its impact on part life, the ability to accurately assess creep has become an important engineering challenge. One approach to measuring creep uses moiré imaging: with pad-print technology, a grating pattern can be printed directly on a turbine bucket and compared against a reference pattern built into the creep measurement system to create a moiré interference pattern. The authors assembled a creep measurement prototype for this application. By measuring the frequency change of the moiré fringes, it is then possible to determine the local creep distribution. However, since the sensitivity requirement for the creep measurement is very stringent (0.1 micron), the measurement result can easily be offset by optical system aberrations, tilts, and magnification errors. In this paper, a mechanical specimen subjected to a tensile test inducing plastic deformation up to 4% in the gage section was used to evaluate the system. The results show some offset compared with the readings from a strain gage and an extensometer. By using a new grating pattern containing two subset patterns, it was possible to correct these offset errors.
Introducing a new definition of a near fall: intra-rater and inter-rater reliability.
Maidan, I; Freedman, T; Tzemah, R; Giladi, N; Mirelman, A; Hausdorff, J M
2014-01-01
Near falls (NFs) are more frequent than falls and may occur before falls, potentially predicting fall risk. As such, identification of a NF is important. We aimed to assess the intra- and inter-rater reliability of the traditional definition of a NF and to demonstrate the potential utility of a new definition. To this end, 10 older adults, 10 idiopathic elderly fallers, and 10 patients with Parkinson's disease (PD) walked in an obstacle course while wearing a safety harness. All walks were videotaped. Forty-nine video segments were extracted to create 2 clips, each 8.48 min long. Four raters scored each event using the traditional definition and, two weeks later, using the new definition. A fifth rater used only the new definition. Intra-rater reliability was determined using Kappa (K) statistics and inter-rater reliability was determined using ICC. Using the traditional definition, three raters had poor intra-rater reliability (K<0.054, p>0.137) and one rater had moderate intra-rater reliability (K=0.624, p<0.001). With the traditional definition, inter-rater reliability between the four raters was moderate (ICC=0.667, p<0.001). In contrast, the new NF definition showed high intra-rater (K>0.601, p<0.001) and excellent inter-rater reliability (ICC=0.815, p<0.001). A priori, it is easy to distinguish falls from usual walking and NFs, but it is more challenging to distinguish NFs from obstacle negotiation and usual walking; therefore, a more precise definition of a NF is required. The results of the present study suggest that the proposed new definition increases intra- and inter-rater reliability, a critical step toward using NFs to quantify fall risk.
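Cohen's kappa, the chance-corrected agreement statistic used here for intra-rater reliability, can be sketched as follows (the ratings below are invented for illustration, not the study's data):

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two categorical rating vectors of equal length."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    po = np.mean(r1 == r2)                        # observed agreement
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c)  # agreement expected by chance
             for c in cats)
    return (po - pe) / (1.0 - pe)

# The same 10 events scored twice by one rater
# ("NF" = near fall, "O" = ordinary obstacle negotiation):
pass1 = ["NF", "NF", "O", "O", "NF", "O", "O", "NF", "O", "O"]
pass2 = ["NF", "NF", "O", "O", "O", "O", "O", "NF", "O", "O"]
print(round(float(cohens_kappa(pass1, pass2)), 3))  # → 0.783
```

Here 9 of 10 scores agree (po = 0.9), but chance alone yields pe = 0.54, so kappa is 0.783 rather than 0.9; this chance correction is why kappa is preferred over raw percent agreement for intra-rater reliability.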
Regional distribution of measurement error in diffusion tensor imaging.
Marenco, Stefano; Rawlings, Robert; Rohde, Gustavo K; Barnett, Alan S; Honea, Robyn A; Pierpaoli, Carlo; Weinberger, Daniel R
2006-06-30
The characterization of measurement error is critical in assessing the significance of diffusion tensor imaging (DTI) findings in longitudinal and cohort studies of psychiatric disorders. We studied 20 healthy volunteers, each one scanned twice (average interval between scans of 51 +/- 46.8 days) with a single shot echo planar DTI technique. Intersession variability for fractional anisotropy (FA) and Trace (D) was represented as absolute variation (standard deviation within subjects: SDw), percent coefficient of variation (CV) and intra-class correlation coefficient (ICC). The values from the two sessions were compared for statistical significance with repeated measures analysis of variance or a non-parametric equivalent of a paired t-test. The results showed good reproducibility for both FA and Trace (CVs below 10% and ICCs at or above 0.70 in most regions of interest) and evidence of systematic global changes in Trace between scans. The regional distribution of reproducibility described here has implications for the interpretation of regional findings and for rigorous pre-processing. The regional distribution of reproducibility measures was different for SDw, CV and ICC. Each one of these measures reveals complementary information that needs to be taken into consideration when performing statistical operations on groups of DT images.
Cleffken, Berry; van Breukelen, Gerard; van Mameren, Henk; Brink, Peter; Olde Damink, Steven
2007-01-01
Increasingly, goniometry of elbow motion is used for quantification of research results, but reliability is typically expressed in parameters that are not suitable for comparing results. We modified Bland and Altman's method, resulting in the smallest detectable differences (SDDs). Two raters measured elbow excursions in 42 individuals (144 ratings per test person) with an electronic digital inclinometer in a classical test-retest crossover study design. The SDDs were 0 ± 4.2° for active extension and 0 ± 8.2° for active flexion, both without upper arm fixation; 0 ± 6.3° for active extension, 0 ± 5.7° for active flexion, and 0 ± 7.4° for passive flexion, all with upper arm fixation; 0 ± 10.1° for active flexion with upper arm retroflexion; and 0 ± 8.5° and 0 ± 10.8° for active and passive range of motion, respectively. Differences smaller than these SDDs found in clinical or research settings are attributable to measurement error and do not indicate improvement.
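The SDD idea can be illustrated with simulated test-retest data. This is a generic sketch using SDD = 1.96 × SD of the paired differences, not the paper's exact modification of the Bland-Altman method, and all numbers are assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
true_rom = rng.normal(140.0, 5.0, size=42)         # true flexion angles (deg)
test = true_rom + rng.normal(0.0, 2.0, size=42)    # first rating, 2 deg error SD
retest = true_rom + rng.normal(0.0, 2.0, size=42)  # repeat rating, same error SD

diff = test - retest
sdd = 1.96 * np.std(diff, ddof=1)                  # smallest detectable difference
print(f"mean difference {np.mean(diff):+.2f} deg, SDD {sdd:.1f} deg")
```

With a 2° per-rating error the SDD comes out near 1.96 × √2 × 2 ≈ 5.5°: an observed change smaller than that is indistinguishable from measurement error, which is exactly the clinical caution stated in the abstract.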
Inter- and intra-rater reliability of the GAITRite system among individuals with sub-acute stroke.
Wong, Jennifer S; Jasani, Hardika; Poon, Vivien; Inness, Elizabeth L; McIlroy, William E; Mansfield, Avril
2014-01-01
Technology-based assessment tools with semi-automated processing, such as pressure-sensitive mats used for gait assessment, may be considered objective; it may therefore be assumed that rater reliability is not a concern. However, user input is often required, so rater reliability must be determined. The purpose of this study was to assess the inter- and intra-rater reliability of spatial and temporal characteristics of gait in stroke patients using the GAITRite system. Forty-six individuals with stroke attending in-patient rehabilitation walked across the pressure-sensitive mat 2-4 times at preferred walking speeds, with or without a gait aid. Five raters independently processed the gait data. Three raters re-processed the data after a delay of at least one month. The intraclass correlation coefficients (ICC) and 95% confidence intervals of the ICC were determined for velocity, step time, step length, and step width. Inter-rater reliability for velocity, step time, and step length was high (ICC>0.90). Intra-rater reliability was generally greater than inter-rater reliability (from 0.81 to >0.99 for inter-rater versus 0.77 to >0.99 for intra-rater reliability). Overall, this study suggests that GAITRite is a reliable assessment tool; however, subjectivity remains in processing the data, and for no patient was there perfect agreement between raters. Additional logic checking within the processing software or standardization of training could help to reduce potential errors in processing.
Horizon sensor errors calculated by computer models compared with errors measured in orbit
NASA Technical Reports Server (NTRS)
Ward, K. A.; Hogan, R.; Andary, J.
1982-01-01
Using a computer program to model the earth's horizon and to duplicate the signal processing procedure employed by the ESA (Earth Sensor Assembly), errors due to radiance variation have been computed for a particular time of the year. Errors actually occurring in flight at the same time of year are inferred from integrated rate gyro data for a satellite of the TIROS series of NASA weather satellites (NOAA-A). The predicted performance is compared with actual flight history.
Error separation technique for measuring aspheric surface based on dual probes
NASA Astrophysics Data System (ADS)
Wei, Zhong-wei; Jing, Hong-wei; Kuang, Long; Wu, Shi-bin
2013-09-01
In this paper, we present an error separation method based on dual probes for the swing arm profilometer (SAP) to calibrate rotary table errors. The two probes and the rotation axis of the swing arm lie in one plane, and the scanning tracks cross each other as both probes scan the mirror from edge to edge. Since the surface heights should ideally be identical at these scanning crossings, the crossing-height information can be used to calibrate the rotary table errors. However, this information also contains the swing-arm air-bearing errors and the probes' measurement errors, which seriously affect the correction accuracy of the rotary table errors. Because the air-bearing errors and probe measurement errors are randomly distributed, we use a least-squares method to remove them. We present the geometry of the dual-probe swing arm profilometer system and the profiling pattern made by both probes, analyze the influence of the probe separation on the measurement results, and describe the algorithm for stitching the scans together into a surface. The difference between the surface heights at the crossings of adjacent scans is used to find a transformation that describes the rotary table errors and then to correct for them. To prove that the dual-probe error separation method can successfully calibrate the rotary table errors, we establish an SAP error model and simulate the method's effect on calibrating the rotary table errors.
Inter- and intra-rater agreement of static posture analysis using a mobile application
Boland, David M.; Neufeld, Eric V.; Ruddell, Jack; Dolezal, Brett A.; Cooper, Christopher B.
2016-01-01
[Purpose] To determine the intra- and inter-rater agreement of a mobile application, PostureScreen Mobile® (PSM), that assesses static standing posture. [Subjects and Methods] Three examiners with different levels of experience of assessing posture, one licensed physical therapist and two untrained undergraduate students, performed repeated postural assessments of 10 subjects, fully clothed or minimally clothed, using PSM on two nonconsecutive days. Anterior and right lateral images were captured and seventeen landmarks were identified on them. Intraclass correlation coefficients (ICCs) were calculated for each of 13 postural measures to evaluate inter-rater agreement on the first visit (fully or minimally clothed), as well as intra-rater agreement between the first and second visits (minimally clothed). [Results] Eleven postural measures were ultimately analyzed for inter- and intra-rater agreement. Inter-rater agreement was almost perfect (ICC≥0.81) for four measures and substantial (0.60
Exploring Measurement Error with Cookies: A Real and Virtual Approach via Interactive Excel
ERIC Educational Resources Information Center
Sinex, Scott A; Gage, Barbara A.; Beck, Peggy J.
2007-01-01
A simple, guided-inquiry investigation using stacked sandwich cookies is employed to develop a simple linear mathematical model and to explore measurement error by incorporating errors as part of the investigation. Both random and systematic errors are presented. The model and errors are then investigated further by engaging with an interactive…
Multiple reflections in a photoelastic modulator: errors in polarization measurement
NASA Astrophysics Data System (ADS)
Gemeiner, P.; Yang, D.; Canit, J. C.
1996-09-01
The use of a coherent light source (laser) can lead to significant errors when measurements of optical activity, magneto-optical Kerr rotation, dichroism, or ellipsometric parameters are made with a photoelastic modulator. In particular, interference occurs between beams arising from multiple reflections in the modulator. These interferences give rise to parasitic effects that depend on the characteristics of the modulator on the one hand and on the wavelength of the light on the other. A variation in temperature modifies these artefacts. They have been observed experimentally, and their amplitude is in good agreement with theoretical predictions based on a calculation of the interferences. The amplitude of an artefact may reach one degree of angle for optical activity and five thousandths for a dichroism measurement. We have shown experimentally that these effects can be cancelled by inclining the modulator with respect to the axis of the light beam or by using a new modulator with a trapezoidal section.
Is the Parkinson Anxiety Scale comparable across raters?
Forjaz, Maria João; Ayala, Alba; Martinez-Martin, Pablo; Dujardin, Kathy; Pontone, Gregory M; Starkstein, Sergio E; Weintraub, Daniel; Leentjens, Albert F G
2015-04-01
The Parkinson Anxiety Scale is a new scale developed to measure anxiety severity specifically in Parkinson's disease. It consists of three dimensions: persistent anxiety, episodic anxiety, and avoidance behavior. This study aimed to assess the measurement properties of the scale while controlling for the rater (self- vs. clinician-rated) effect. The Parkinson Anxiety Scale was administered to a cross-sectional multicenter international sample of 362 Parkinson's disease patients. Both patients and clinicians rated the patient's anxiety independently. A many-facet Rasch model design was applied to estimate and remove the rater effect. The following measurement properties were assessed: fit to the Rasch model, unidimensionality, reliability, differential item functioning, item local independency, interrater reliability (self or clinician), and scale targeting. In addition, test-retest stability, construct validity, precision, and diagnostic properties of the Parkinson Anxiety Scale were also analyzed. A good fit to the Rasch model was obtained for Parkinson Anxiety Scale dimensions A and B, after the removal of one item and rescoring of the response scale for certain items, whereas dimension C showed marginal fit. Self versus clinician rating differences were of small magnitude, with patients reporting higher anxiety levels than clinicians. The linear measure for Parkinson Anxiety Scale dimensions A and B showed good convergent construct validity with other anxiety measures and good diagnostic properties. Parkinson Anxiety Scale modified dimensions A and B provide valid and reliable measures of anxiety in Parkinson's disease that are comparable across raters. Further studies are needed with dimension C.
Quantifying the sources of error in measurements of urine activity
Mozley, P.D.; Kim, H.J.; McElgin, W.
1994-05-01
Accurate scintigraphic measurements of radioactivity in the bladder and voided urine specimens can be limited by scatter, attenuation, and variations in the volume of urine that a given dose is distributed in. The purpose of this study was to quantify some of the errors that these problems can introduce. Transmission scans and 41 conjugate images of the bladder were sequentially acquired on a dual-headed camera over 24 hours in 6 subjects after the intravenous administration of 100-150 MBq (2.7-3.6 mCi) of a novel I-123 labeled benzamide. Renal excretion fractions were calculated by measuring the counts in conjugate images of 41 sequentially voided urine samples. A correction for scatter was estimated by comparing the count rates in images that were acquired with the photopeak centered on 159 keV and images that were made simultaneously with the photopeak centered on 126 keV. The decay- and attenuation-corrected geometric mean activities were compared to images of the net dose injected. Checks of the results were performed by measuring the total volume of each voided urine specimen and determining the activity in a 20 ml aliquot of it with a dose calibrator. Modeling verified the experimental results, which showed that 34% of the counts were attenuated when the bladder had been expanded to a volume of 300 ml. Corrections for attenuation that were based solely on the transmission scans were limited by the volume of non-radioactive urine in the bladder before the activity was administered. The attenuation of activity in images of the voided urine samples was dependent on the geometry of the specimen container. The images of urine in standard, 300 ml laboratory specimen cups had 39 ± 5% fewer counts than images of the same samples laid out in 3 liter bedpans. Scatter through the carbon fiber table substantially increased the number of counts in the images, by an average of 14%.
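The conjugate-view geometric-mean principle underlying these bladder measurements can be sketched as follows (the attenuation coefficient and geometry are illustrative assumptions, not values from the study): for a source at depth d in a body of total thickness T, the geometric mean of anterior and posterior count rates depends on T but not on d, which is why a transmission scan suffices for the attenuation correction.

```python
import math

mu = 0.15    # assumed linear attenuation coefficient (cm^-1)
T = 20.0     # body thickness along the camera axis (cm)
I0 = 1000.0  # unattenuated count rate (counts/s)

for d in (2.0, 10.0, 18.0):                  # source depth (cm)
    i_ant = I0 * math.exp(-mu * d)           # anterior head sees depth d
    i_post = I0 * math.exp(-mu * (T - d))    # posterior head sees depth T - d
    gm = math.sqrt(i_ant * i_post)           # = I0 * exp(-mu * T / 2), depth-free
    corrected = gm * math.exp(mu * T / 2.0)  # attenuation-corrected activity
    print(f"d={d:4.1f} cm  geometric mean={gm:6.1f}  corrected={corrected:7.1f}")
```

The corrected value is identical at every depth, recovering I0 exactly in this idealized model; in practice, as the abstract notes, scatter, container geometry, and pre-existing non-radioactive urine still limit the correction.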
On the reliability and standard errors of measurement of contrast measures from the D-KEFS.
Crawford, John R; Sutherland, David; Garthwaite, Paul H
2008-11-01
A formula for the reliability of difference scores was used to estimate the reliability of Delis-Kaplan Executive Function System (D-KEFS; Delis et al., 2001) contrast measures from the reliabilities and correlations of their components. In turn these reliabilities were used to calculate standard errors of measurement. The majority of contrast measures had low reliabilities: of the 51 reliability coefficients calculated in the present study, none exceeded 0.7 and hence all failed to meet any of the criteria for acceptable reliability proposed by various experts in psychological measurement. The mean reliability of the contrast scores was 0.27, the median reliability was 0.30. The standard errors of measurement were large and, in many cases, equaled or were only marginally smaller than the contrast scores' standard deviations. The results suggest that, at present, D-KEFS contrast measures should not be used in neuropsychological decision making.
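The classical formula for the reliability of a difference between two standardized scores, which underlies this kind of analysis, can be sketched as follows (the component reliabilities and correlation are illustrative, not actual D-KEFS values):

```python
import math

def diff_score_reliability(r_xx, r_yy, r_xy):
    """Reliability of the difference between two standardized scores with
    reliabilities r_xx, r_yy and intercorrelation r_xy."""
    return (0.5 * (r_xx + r_yy) - r_xy) / (1.0 - r_xy)

def sem(sd, reliability):
    """Standard error of measurement for a score with the given SD."""
    return sd * math.sqrt(1.0 - reliability)

# Two moderately reliable components that correlate 0.55 with each other:
r = diff_score_reliability(r_xx=0.80, r_yy=0.75, r_xy=0.55)
print(round(r, 3), round(sem(3.0, r), 2))  # → 0.5 2.12
```

Note how two components with acceptable reliabilities yield a contrast score far less reliable than either one, with a correspondingly large SEM; this is the mechanism behind the low contrast-score reliabilities the study reports.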
Implications of Three Causal Models for the Measurement of Halo Error.
ERIC Educational Resources Information Center
Fisicaro, Sebastiano A.; Lance, Charles E.
1990-01-01
Three conceptual definitions of halo error are reviewed in the context of causal models of halo error. A corrected correlational measurement of halo error is derived, and the traditional and corrected measures are compared empirically for a 1986 study of 52 undergraduate students' ratings of a lecturer's performance. (SLD)
Properties of a Proposed Approximation to the Standard Error of Measurement.
ERIC Educational Resources Information Center
Nitko, Anthony J.
An approximation formula for the standard error of measurement was recently proposed by Garvin. The properties of this approximation to the standard error of measurement are described in this paper and illustrated with hypothetical data. It is concluded that the approximation is a systematic overestimate of the standard error of measurement…
Du, Zhengchun; Wu, Zhaoyong; Yang, Jianguo
2016-01-01
The use of three-dimensional (3D) data in the industrial measurement field is becoming increasingly popular because of the rapid development of laser scanning techniques based on the time-of-flight principle. However, the accuracy and uncertainty of these types of measurement methods are seldom investigated. In this study, a mathematical uncertainty evaluation model for the diameter measurement of standard cylindroid components has been proposed and applied to a 3D laser radar measurement system (LRMS). First, a single-point error ellipsoid analysis for the LRMS was established. An error ellipsoid model and algorithm for diameter measurement of cylindroid components was then proposed based on the single-point error ellipsoid. Finally, four experiments were conducted using the LRMS to measure the diameter of a standard cylinder in the laboratory. The experimental results of the uncertainty evaluation consistently matched well with the predictions. The proposed uncertainty evaluation model for cylindrical diameters can provide a reliable method for actual measurements and support further accuracy improvement of the LRMS. PMID:27213385
A heteroscedastic measurement error model for method comparison data with replicate measurements.
Nawarathna, Lakshika S; Choudhary, Pankaj K
2015-03-30
Measurement error models offer a flexible framework for modeling data collected in studies comparing methods of quantitative measurement. These models generally make two simplifying assumptions: (i) the measurements are homoscedastic, and (ii) the unobservable true values of the methods are linearly related. One or both of these assumptions may be violated in practice. In particular, error variabilities of the methods may depend on the magnitude of measurement, or the true values may be nonlinearly related. Data with these features call for a heteroscedastic measurement error model that allows nonlinear relationships in the true values. We present such a model for the case when the measurements are replicated, discuss its fitting, and explain how to evaluate similarity of measurement methods and agreement between them, which are two common goals of data analysis, under this model. Model fitting involves dealing with lack of a closed form for the likelihood function. We consider estimation methods that approximate either the likelihood or the model to yield approximate maximum likelihood estimates. The fitting methods are evaluated in a simulation study. The proposed methodology is used to analyze a cholesterol dataset.
Tilt error in cryospheric surface radiation measurements at high latitudes: a model study
NASA Astrophysics Data System (ADS)
Bogren, W. S.; Burkhart, J. F.; Kylling, A.
2015-08-01
We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in-situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response foreoptic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can respectively introduce up to 2.6, 7.7, and 12.8 % error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo.
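The dominant direct-beam term can be checked with a one-line model (a simplifying assumption: direct irradiance only, with the sensor tilted toward the sun in the solar azimuth plane, so it slightly overstates the totals quoted above, which include the diffuse component):

```python
import math

def tilt_error_percent(zenith_deg, tilt_deg):
    """Percent error in measured direct irradiance for a sensor tilted
    toward the sun: measured/true = cos(theta_z - tilt) / cos(theta_z)."""
    tz, tt = math.radians(zenith_deg), math.radians(tilt_deg)
    return 100.0 * (math.cos(tz - tt) / math.cos(tz) - 1.0)

for tilt in (1.0, 3.0, 5.0):
    print(f"tilt {tilt}°: {tilt_error_percent(60.0, tilt):+.1f}% (direct beam only)")
```

At a 60° solar zenith angle the direct-beam errors for 1°, 3°, and 5° tilts land close to the 2.6%, 7.7%, and 12.8% totals given above, confirming that the direct component dominates the tilt error.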
Cole, David A; Preacher, Kristopher J
2014-06-01
Despite clear evidence that manifest variable path analysis requires highly reliable measures, path analyses with fallible measures are commonplace even in premier journals. Using fallible measures in path analysis can cause several serious problems: (a) As measurement error pervades a given data set, many path coefficients may be either over- or underestimated. (b) Extensive measurement error diminishes power and can prevent invalid models from being rejected. (c) Even a little measurement error can cause valid models to appear invalid. (d) Differential measurement error in various parts of a model can change the substantive conclusions that derive from path analysis. (e) All of these problems become increasingly serious and intractable as models become more complex. Methods to prevent and correct these problems are reviewed. The conclusion is that researchers should use more reliable measures (or correct for measurement error in the measures they do use), obtain multiple measures for use in latent variable modeling, and test simpler models containing fewer variables.
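Point (a) can be demonstrated with a short simulation of classical attenuation: a predictor measured with reliability r has its estimated slope shrunk by roughly the factor r (all values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)                  # true predictor, variance 1
y = 0.5 * x + rng.normal(size=n)        # true slope = 0.5
reliability = 0.6                       # Var(true) / Var(observed)
# Add error so that Var(x) / Var(x_obs) equals the chosen reliability:
x_obs = x + rng.normal(scale=np.sqrt((1 - reliability) / reliability), size=n)

slope_true = np.polyfit(x, y, 1)[0]     # polyfit returns [slope, intercept]
slope_obs = np.polyfit(x_obs, y, 1)[0]
print(f"slope with perfect measure: {slope_true:.3f}")
print(f"slope with reliability 0.6: {slope_obs:.3f}  (~ 0.5 * 0.6 = 0.3)")
```

The observed slope shrinks to about 0.3, a 40% underestimate from measurement error alone; in multi-variable path models the distortion can go in either direction, which is the article's point (a).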
The impact of response measurement error on the analysis of designed experiments
Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee
2016-11-01
This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.
Linden, Ariel
2015-01-01
The patient activation measure (PAM) is an increasingly popular instrument used as the basis for interventions to improve patient engagement and as an outcome measure to assess intervention effect. However, a PAM score may be calculated when there are missing responses, which could lead to substantial measurement error. In this paper, measurement error is systematically estimated across the full possible range of missing items (one to twelve), using simulation in which populated items were randomly replaced with missing data for each of 1,138 complete surveys obtained in a randomized controlled trial. The PAM score was then calculated, followed by comparisons of the simulated mean, minimum, and maximum PAM scores to the true PAM score in order to assess the absolute percentage error (APE) for each comparison. With only one missing item, the average APE was 2.5% when comparing the true PAM score to the simulated minimum score and 4.3% when comparing to the simulated maximum score. APEs increased with additional missing items, such that surveys with 12 missing items had average APEs of 29.7% (minimum) and 44.4% (maximum). Several suggestions and alternative approaches are offered that could be pursued to improve measurement accuracy when responses are missing.
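The paper's simulation logic can be sketched on a generic 13-item, 1-4 response scale. Note that the real PAM is scored via a Rasch-based conversion table, so the simple rescaled-mean scoring below is purely an assumption for illustration, as are the data:

```python
import numpy as np

rng = np.random.default_rng(42)

def score(items):
    """Score a survey from its answered (non-NaN) items, rescaled to 0-100."""
    answered = items[~np.isnan(items)]
    return (answered.mean() - 1.0) / 3.0 * 100.0

surveys = rng.integers(1, 5, size=(1000, 13)).astype(float)  # 13 items, 1-4
results = {}
for n_missing in (1, 6, 12):
    apes = []
    for row in surveys:
        true = score(row)
        if true == 0.0:            # avoid dividing by a zero true score
            continue
        sim = row.copy()
        sim[rng.choice(13, size=n_missing, replace=False)] = np.nan  # blank items
        apes.append(abs(score(sim) - true) / true * 100.0)
    results[n_missing] = float(np.mean(apes))
    print(f"{n_missing:2d} missing items: mean APE {results[n_missing]:5.1f}%")
```

Even with this crude scoring rule, the qualitative finding reproduces: the mean APE grows steadily as more items are blanked, because the score rests on ever fewer observed responses.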
ERIC Educational Resources Information Center
Shear, Benjamin R.; Zumbo, Bruno D.
2013-01-01
Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
Measurement accuracy of articulated arm CMMs with circular grating eccentricity errors
NASA Astrophysics Data System (ADS)
Zheng, Dateng; Yin, Sanfeng; Luo, Zhiyang; Zhang, Jing; Zhou, Taiping
2016-11-01
A model of the six circular grating eccentricity errors attempts to improve the measurement accuracy of an articulated arm coordinate measuring machine (AACMM) without increasing the corresponding hardware cost. We analyzed the AACMM's circular grating eccentricity and obtained the model parameters for the six joints' circular grating eccentricity errors by conducting eccentricity error experiments. We completed the calibration of the measurement models using home-made standard bar components. Our results show that the measurement errors from the AACMM's measurement model without and with circular grating eccentricity errors are 0.0834 mm and 0.0462 mm, respectively; measurement accuracy increased by about 44.6% when the circular grating eccentricity errors were corrected. This study is significant because it promotes wider application of AACMMs in both theory and practice.
Sonderegger, Derek L; Wang, Haonan; Huang, Yao; Clements, William H
2009-10-01
The effect that measurement error of predictor variables has on regression inference is well known in the statistical literature. However, the influence of measurement error on the ability to quantify relationships between chemical stressors and biological responses has received little attention in ecotoxicology. We present a common data-collection scenario and demonstrate that the relationship between explanatory and response variables is consistently underestimated when measurement error is ignored. A straightforward extension of the regression calibration method is to use a nonparametric method to smooth the predictor variable with respect to another covariate (e.g., time) and using the smoothed predictor to estimate the response variable. We conducted a simulation study to compare the effectiveness of the proposed method to the naive analysis that ignores measurement error. We conclude that the method satisfactorily addresses the problem when measurement error is moderate to large, and does not result in a noticeable loss of power in the case where measurement error is absent.
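The proposed approach can be sketched with a moving-average smoother standing in for the paper's nonparametric method (the data, noise levels, and smoother are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)                        # sampling times
true_x = np.sin(t) + 0.5 * t                           # true stressor trend
x_obs = true_x + rng.normal(scale=0.8, size=t.size)    # heavy measurement error
y = 2.0 * true_x + rng.normal(scale=0.5, size=t.size)  # true slope = 2.0

def moving_average(v, window=15):
    """Simple nonparametric smoother of the predictor over time."""
    kernel = np.ones(window) / window
    return np.convolve(v, kernel, mode="same")

sm = moving_average(x_obs)
h = 7  # trim the edges, where the "same"-mode window is incomplete
naive_slope = np.polyfit(x_obs, y, 1)[0]               # attenuated toward zero
calib_slope = np.polyfit(sm[h:-h], y[h:-h], 1)[0]
print(f"naive slope: {naive_slope:.2f}, calibrated slope: {calib_slope:.2f}")
```

With heavy predictor noise the naive slope falls well below the true value of 2.0, while regressing on the smoothed predictor largely recovers it; this is the underestimation-and-correction pattern the simulation study describes.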
Ali, Zulfiqar; Yashchuk, Valeriy V.
2011-05-11
Systematic error and instrumental drift are the major limiting factors of sub-microradian slope metrology with state-of-the-art x-ray optics. Significant suppression of the errors can be achieved by using an optimal measurement strategy suggested in [Rev. Sci. Instrum. 80, 115101 (2009)]. With this series of LSBL Notes, we report on development of an automated, kinematic, rotational system that provides fully controlled flipping, tilting, and shifting of a surface under test. The system is integrated into the Advanced Light Source long trace profiler, LTP-II, allowing for complete realization of the advantages of the optimal measurement strategy method. We provide details of the system’s design, operational control and data acquisition. The high performance of the system is demonstrated via the results of high precision measurements with a spherical test mirror.
Computational Fluid Dynamics Analysis on Radiation Error of Surface Air Temperature Measurement
NASA Astrophysics Data System (ADS)
Yang, Jie; Liu, Qing-Quan; Ding, Ren-Hui
2017-01-01
Due to solar radiation effect, current air temperature sensors inside a naturally ventilated radiation shield may produce a measurement error that is 0.8 K or higher. To improve air temperature observation accuracy and correct historical temperature of weather stations, a radiation error correction method is proposed. The correction method is based on a computational fluid dynamics (CFD) method and a genetic algorithm (GA) method. The CFD method is implemented to obtain the radiation error of the naturally ventilated radiation shield under various environmental conditions. Then, a radiation error correction equation is obtained by fitting the CFD results using the GA method. To verify the performance of the correction equation, the naturally ventilated radiation shield and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated temperature measurement platform serves as an air temperature reference. The mean radiation error given by the intercomparison experiments is 0.23 K, and the mean radiation error given by the correction equation is 0.2 K. This radiation error correction method allows the radiation error to be reduced by approximately 87 %. The mean absolute error and the root mean square error between the radiation errors given by the correction equation and the radiation errors given by the experiments are 0.036 K and 0.045 K, respectively.
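The fitting step can be illustrated with synthetic data. The functional form of the correction equation, the variable names, and the use of a grid search in place of the paper's genetic algorithm are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for CFD results: radiation error e (K) as a function
# of solar irradiance S (W/m^2) and wind speed u (m/s). The form
# e = a*S/(1 + c*u) is an illustrative assumption, not the paper's equation.
S = rng.uniform(100, 1000, 200)
u = rng.uniform(0.5, 6.0, 200)
true_a, true_c = 4e-4, 0.8
e = true_a * S / (1 + true_c * u) + rng.normal(0, 0.01, 200)

def sse(params):
    """Sum of squared residuals of the candidate correction equation."""
    a, c = params
    return np.sum((e - a * S / (1 + c * u)) ** 2)

# Deterministic grid search standing in for the paper's genetic algorithm.
a_grid = np.linspace(1e-5, 1e-3, 80)
c_grid = np.linspace(0.1, 2.0, 80)
best = min(((a, c) for a in a_grid for c in c_grid), key=sse)
```

A genetic algorithm would explore the same error surface by selection and mutation rather than exhaustive enumeration; either way, the fitted equation then predicts the radiation error from routinely observed irradiance and wind speed.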
Errors in scatterometer-radiometer wind measurement due to rain
NASA Technical Reports Server (NTRS)
Moore, R. K.; Chaudhry, A. H.; Birrer, I. J.
1983-01-01
The behavior of radiometer corrections for the scatterometer is investigated by simulating simple situations using footprint sizes comparable with those used in the SEASAT-1 experiment and also actual footprints and rain rates from a hurricane observed by the SEASAT-1 system. The effects on correction due to attenuation and wind speed gradients are examined independently and jointly. It is shown that the error in the wind-speed estimate can be as large as 200% at higher wind speeds. The worst error occurs when the scatterometer footprint overlaps two or more radiometer footprints and the attenuation in the scatterometer footprint differs greatly from those in parts of the radiometer footprints. This problem could be overcome by using a true radiometer-scatterometer system having identical coincident footprints comparable in size with typical rain cells.
Proxies and Other External Raters: Methodological Considerations
Snow, A Lynn; Cook, Karon F; Lin, Pay-Shin; Morgan, Robert O; Magaziner, Jay
2005-01-01
Objective The purpose of this paper is to introduce researchers to the measurement and subsequent analysis considerations involved when using externally rated data. We will define and describe two categories of externally rated data, recommend methodological approaches for analyzing and interpreting data in these two categories, and explore factors affecting agreement between self-rated and externally rated reports. We conclude with a discussion of needs for future research. Data Sources/Study Setting Data sources for this paper are previous published studies and reviews comparing self-rated with externally rated data. Study Design/Data Collection/Extraction Methods This is a psychometric conceptual paper. Principal Findings We define two types of externally rated data: proxy data and other-rated data. Proxy data refer to those collected from someone who speaks for a patient who cannot, will not, or is unavailable to speak for him or herself, whereas we use the term other-rater data to refer to situations in which the researcher collects ratings from a person other than the patient to gain multiple perspectives on the assessed construct. These two types of data differ in the way the measurement model is defined, the definition of the gold standard against which the measurements are validated, the analysis strategies appropriately used, and how the analyses are interpreted. There are many factors affecting the discrepancies between self- and external ratings, including characteristics of the patient, the proxy, and of the rated construct. Several psychological theories can be helpful in predicting such discrepancies. Conclusions Externally rated data have an important place in health services research, but use of such data requires careful consideration of the nature of the data and how it will be analyzed and interpreted. PMID:16179002
Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario
2016-01-01
Coordinate measuring machines (CMM) are main instruments of measurement in laboratories and in industrial quality control. A compensation error model has been formulated (Part I). It integrates error and uncertainty in the feature measurement model. Experimental implementation for the verification of this model is carried out based on the direct testing on a moving bridge CMM. The regression results by axis are quantified and compared to CMM indication with respect to the assigned values of the measurand. Next, testing of selected measurements of length, flatness, dihedral angle, and roundness features are accomplished. The measurement of calibrated gauge blocks for length or angle, flatness verification of the CMM granite table and roundness of a precision glass hemisphere are presented under a setup of repeatability conditions. The results are analysed and compared with alternative methods of estimation. The overall performance of the model is endorsed through experimental verification, as well as the practical use and the model capability to contribute in the improvement of current standard CMM measuring capabilities. PMID:27754441
Henry, Sharon M.; Van Dillen, Linda R.; Trombley, Andrea R.; Dee, Justine M.; Bunn, Janice Y.
2013-01-01
Observational cross-sectional study. To examine the inter-rater reliability of novice raters using the Movement System Impairment (MSI) classification approach and to explore the patterns of disagreement in classification errors. The inter-rater reliability of the individual test items used in the MSI approach is moderate to good; however, the reliability of the classification algorithm has been tested only preliminarily. Using previously recorded patient data (n = 21), 13 novice raters classified patients according to the MSI schema. Overall percent agreement, the kappa statistic, and the agreement/disagreement among pair-wise comparisons of classification assignments were examined. There was 87.4% agreement among the pairs of classification judgments, with a kappa coefficient of 0.81 (95% CI: 0.79, 0.83). Raters were most likely to agree on the classification of Flexion (100%) and least likely to agree on the classification of Rotation (84%). The MSI classification algorithm can be learned by novice users, and with training their inter-rater reliability in applying the algorithm for classification judgments is good and similar to that reported in other studies. However, some degree of error persists in the classification decision-making associated with the MSI system, in particular for the Rotation category. PMID:22796388
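Agreement statistics of the kind reported here can be computed directly. A minimal sketch of percent agreement and Cohen's kappa for one pair of raters follows; the category labels and ratings are made up for illustration.

```python
# Percent agreement and Cohen's kappa for a pair of raters assigning
# subjects to MSI-style categories (all data below are illustrative).
def cohens_kappa(r1, r2, categories):
    n = len(r1)
    # Observed agreement: fraction of subjects rated identically.
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    # Expected chance agreement from each rater's marginal proportions.
    p_exp = sum((r1.count(c) / n) * (r2.count(c) / n) for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)

cats = ["Flexion", "Extension", "Rotation"]
rater1 = ["Flexion", "Rotation", "Flexion", "Extension", "Rotation", "Flexion"]
rater2 = ["Flexion", "Rotation", "Flexion", "Extension", "Flexion", "Flexion"]
kappa = cohens_kappa(rater1, rater2, cats)   # 5/6 observed agreement -> kappa = 5/7
```

Kappa discounts the agreement expected by chance, which is why it runs below raw percent agreement whenever the raters' marginal distributions overlap.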
Error analysis in the measurement of average power with application to switching controllers
NASA Technical Reports Server (NTRS)
Maisel, J. E.
1980-01-01
Power measurement errors due to the bandwidth of a power meter and the sampling of the input voltage and current of a power meter were investigated assuming sinusoidal excitation and periodic signals generated by a model of a simple chopper system. Errors incurred in measuring power using a microcomputer with limited data storage were also considered. The behavior of the power measurement error due to the frequency responses of first order transfer functions between the input sinusoidal voltage, input sinusoidal current, and the signal multiplier was studied. Results indicate that this power measurement error can be minimized if the frequency responses of the first order transfer functions are identical. The power error analysis was extended to include the power measurement error for a model of a simple chopper system with a power source and an ideal shunt motor acting as an electrical load for the chopper. The behavior of the power measurement error was determined as a function of the chopper's duty cycle and back EMF of the shunt motor. Results indicate that the error is large when the duty cycle or back EMF is small. Theoretical and experimental results indicate that the power measurement error due to sampling of sinusoidal voltages and currents becomes excessively large when the number of observation periods approaches one-half the size of the microcomputer data memory allocated to the storage of either the input sinusoidal voltage or current.
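The report's central conclusion, that the error is minimized when the two first-order channels are identical, can be sketched numerically. The corner frequencies, amplitudes, and load angle below are arbitrary illustrations, not values from the report.

```python
import numpy as np

f, fs, n = 60.0, 10_000, 10_000             # signal frequency, sample rate, samples
t = np.arange(n) / fs                       # exactly 60 full cycles
V, I, theta = 10.0, 2.0, np.pi / 6          # amplitudes and load angle
true_p = 0.5 * V * I * np.cos(theta)        # ideal average power

def channel(fc):
    """Gain and phase of a first-order low-pass channel at frequency f."""
    h = 1.0 / (1.0 + 1j * f / fc)
    return abs(h), np.angle(h)

# Mismatched channels: voltage and current are filtered differently, so a
# spurious phase difference enters the measured power.
gv, pv = channel(1000.0)
gi, pi_ = channel(300.0)
v = gv * V * np.cos(2 * np.pi * f * t + pv)
i = gi * I * np.cos(2 * np.pi * f * t - theta + pi_)
mismatched_p = np.mean(v * i)

# Identical channels: the phase shifts cancel, leaving only a gain error.
gm, pm = channel(300.0)
v_m = gm * V * np.cos(2 * np.pi * f * t + pm)
i_m = gm * I * np.cos(2 * np.pi * f * t - theta + pm)
matched_p = np.mean(v_m * i_m)
```

Averaging over an integer number of cycles makes the sampled mean exact, so the residual discrepancy is entirely the channel mismatch, not the sampling.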
Compensation method for the alignment angle error of a gear axis in profile deviation measurement
NASA Astrophysics Data System (ADS)
Fang, Suping; Liu, Yongsheng; Wang, Huiyi; Taguchi, Tetsuya; Takeda, Ryuhei
2013-05-01
In the precision measurement of involute helical gears, the alignment angle error of a gear axis, which was caused by the assembly error of a gear measuring machine, will affect the measurement accuracy of profile deviation. A model of the involute helical gear is established under the condition that the alignment angle error of the gear axis exists. Based on the measurement theory of profile deviation, without changing the initial measurement method and data process of the gear measuring machine, a compensation method is proposed for the alignment angle error of the gear axis that is included in profile deviation measurement results. Using this method, the alignment angle error of the gear axis can be compensated for precisely. Some experiments that compare the residual alignment angle error of a gear axis after compensation for the initial alignment angle error were performed to verify the accuracy and feasibility of this method. Experimental results show that the residual alignment angle error of a gear axis included in the profile deviation measurement results is decreased by more than 85% after compensation, and this compensation method significantly improves the measurement accuracy of the profile deviation of involute helical gear.
Analysis of measured data of human body based on error correcting frequency
NASA Astrophysics Data System (ADS)
Jin, Aiyan; Peipei, Gao; Shang, Xiaomei
2014-04-01
Anthropometry is the measurement of the surface of the human body, and the measured data are the basis for analysis and study of the human body, for establishing and revising garment sizes, and for the design and operation of online clothing stores. In this paper, several groups of measured data are collected, and their errors are analyzed by examining error frequency and by applying the analysis-of-variance method of mathematical statistics. The paper assesses the accuracy of the measured data and the difficulty of measuring particular parts of the body, investigates the causes of data errors, and summarizes the key points for minimizing them. By analyzing measured data on the basis of error frequency, the paper provides reference material for the development of the garment industry.
Direct Behavior Rating: Considerations for Rater Accuracy
ERIC Educational Resources Information Center
Harrison, Sayward E.; Riley-Tillman, T. Chris; Chafouleas, Sandra M.
2014-01-01
Direct behavior rating (DBR) offers users a flexible, feasible method for the collection of behavioral data. Previous research has supported the validity of using DBR to rate three target behaviors: academic engagement, disruptive behavior, and compliance. However, the effect of the base rate of behavior on rater accuracy has not been established.…
Measure short separation for space debris based on radar angle error measurement information
NASA Astrophysics Data System (ADS)
Zhang, Yao; Wang, Qiao; Zhou, Lai-jian; Zhang, Zhuo; Li, Xiao-long
2016-11-01
With increasingly frequent human activity in space, the number of dead satellites and pieces of space debris has grown dramatically, posing greater risks to operational spacecraft. Current equipment for measuring the separation between space targets suffers from problems such as high development cost and restrictive conditions of use. To address this, we use the radar angle error measurement information for multiple space targets, combined with the geometric relationship between the targets and the radar station, to build a horizontal-distance decoding model. By increasing the signal quantization bit depth, improving timing synchronization, and processing outliers, the measurement precision is improved to the point of satisfying the requirements of multi-target short-separation measurement, and the efficiency of the method is analyzed. A validation test demonstrates the feasibility and effectiveness of the proposed methods.
Veronesi, Giovanni; Ferrario, Marco M; Chambless, Lloyd E
2013-12-01
In this article we focus on comparing measurement error correction methods for rate-of-change exposure variables in survival analysis, when longitudinal data are observed prior to the follow-up time. Motivational examples include the analysis of the association between changes in cardiovascular risk factors and subsequent onset of coronary events. We derive a measurement error model for the rate of change, estimated through subject-specific linear regression, assuming an additive measurement error model for the time-specific measurements. The rate of change is then included as a time-invariant variable in a Cox proportional hazards model, adjusting for the first time-specific measurement (baseline) and an error-free covariate. In a simulation study, we compared bias, standard deviation and mean squared error (MSE) for the regression calibration (RC) and the simulation-extrapolation (SIMEX) estimators. Our findings indicate that when the amount of measurement error is substantial, RC should be the preferred method, since it has smaller MSE for estimating the coefficients of the rate of change and of the variable measured without error. However, when the amount of measurement error is small, the choice of the method should take into account the event rate in the population and the effect size to be estimated. An application to an observational study, as well as examples of published studies where our model could have been applied, are also provided.
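The rate-of-change error model can be sketched with a simulation. A linear outcome stands in for the Cox model here to keep the example dependency-free, and all numeric settings (visit schedule, variances, effect size) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_subj = 500
times = np.arange(4.0)                        # four longitudinal visits

# Subject-specific baselines and true rates of change of a risk factor,
# observed with additive measurement error at each visit.
b0 = rng.normal(120, 10, n_subj)
rate = rng.normal(1.0, 2.0, n_subj)
obs = b0[:, None] + rate[:, None] * times + rng.normal(0, 2.0, (n_subj, 4))

# Subject-specific least-squares slopes (the estimated rates of change).
t_c = times - times.mean()
rate_hat = (obs * t_c).sum(axis=1) / (t_c ** 2).sum()
err_var = 2.0 ** 2 / (t_c ** 2).sum()         # variance of the slope error

# Outcome driven by the true rate; the naive fit on rate_hat is attenuated,
# and regression calibration rescales it by the reliability ratio.
y = 0.5 * rate + rng.normal(0, 0.2, n_subj)
naive = np.polyfit(rate_hat, y, 1)[0]
rc = naive * rate_hat.var() / (rate_hat.var() - err_var)
```

In the paper's setting the estimated rate enters a Cox proportional hazards model rather than a linear fit; SIMEX would instead refit the model at artificially inflated error levels and extrapolate back to zero error.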
Error analysis in the measurement of average power with application to switching controllers
NASA Technical Reports Server (NTRS)
Maisel, J. E.
1979-01-01
The behavior of the power measurement error due to the frequency responses of first order transfer functions between the input sinusoidal voltage, input sinusoidal current and the signal multiplier was studied. It was concluded that this measurement error can be minimized if the frequency responses of the first order transfer functions are identical.
ERIC Educational Resources Information Center
Zapata-Rivera, Diego; Zwick, Rebecca; Vezzu, Margaret
2016-01-01
The goal of this study was to explore the effectiveness of a short web-based tutorial in helping teachers to better understand the portrayal of measurement error in test score reports. The short video tutorial included both verbal and graphical representations of measurement error. Results showed a significant difference in comprehension scores…
ERIC Educational Resources Information Center
Kim, ChangHwan; Tamborini, Christopher R.
2012-01-01
Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…
Comparing Graphical and Verbal Representations of Measurement Error in Test Score Reports
ERIC Educational Resources Information Center
Zwick, Rebecca; Zapata-Rivera, Diego; Hegarty, Mary
2014-01-01
Research has shown that many educators do not understand the terminology or displays used in test score reports and that measurement error is a particularly challenging concept. We investigated graphical and verbal methods of representing measurement error associated with individual student scores. We created four alternative score reports, each…
Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.
Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F
2001-01-01
When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.
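The way replicate measurements identify the error variance and remove attenuation can be sketched as follows. The linear outcome and all numbers are illustrative stand-ins for the Cox analysis of time to recurrence.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400

# True covariate (e.g., log-transformed total REM counts) and two replicates.
x = rng.normal(0, 1, n)
w1 = x + rng.normal(0, 0.7, n)
w2 = x + rng.normal(0, 0.7, n)

# Replicates identify the measurement-error variance without external data.
err_var = np.var(w1 - w2, ddof=1) / 2         # per-replicate error variance
wbar = (w1 + w2) / 2                          # averaging halves the error variance

# Regression calibration: shrink wbar toward its mean by the reliability ratio.
lam = (np.var(wbar, ddof=1) - err_var / 2) / np.var(wbar, ddof=1)
x_calib = wbar.mean() + lam * (wbar - wbar.mean())

y = 1.0 + 0.8 * x + rng.normal(0, 0.3, n)
naive = np.polyfit(wbar, y, 1)[0]             # attenuated toward zero
rc = np.polyfit(x_calib, y, 1)[0]             # approximately unbiased
```

Using the replicate mean both shrinks the error variance and tightens the estimate of it, which mirrors the abstract's observation that replicates reduce the standard error while preserving the bias correction.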
Detecting bit-flip errors in a logical qubit using stabilizer measurements
Ristè, D.; Poletto, S.; Huang, M.-Z.; Bruno, A.; Vesterinen, V.; Saira, O.-P.; DiCarlo, L.
2015-01-01
Quantum data are susceptible to decoherence induced by the environment and to errors in the hardware processing it. A future fault-tolerant quantum computer will use quantum error correction to actively protect against both. In the smallest error correction codes, the information in one logical qubit is encoded in a two-dimensional subspace of a larger Hilbert space of multiple physical qubits. For each code, a set of non-demolition multi-qubit measurements, termed stabilizers, can discretize and signal physical qubit errors without collapsing the encoded information. Here using a five-qubit superconducting processor, we realize the two parity measurements comprising the stabilizers of the three-qubit repetition code protecting one logical qubit from physical bit-flip errors. While increased physical qubit coherence times and shorter quantum error correction blocks are required to actively safeguard the quantum information, this demonstration is a critical step towards larger codes based on multiple parity measurements. PMID:25923318
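The logic of the two parity checks can be sketched classically: for the three-bit repetition code, the pair of stabilizer outcomes (the syndrome) uniquely identifies any single bit flip without reading out the encoded bit itself. This toy model deliberately ignores all quantum aspects (superposition, measurement back-action).

```python
# Syndrome = outcomes of the two parity checks (analogues of Z1Z2 and Z2Z3).
def syndrome(bits):
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

# Each syndrome implicates exactly one physical bit (None: no error seen).
LOOKUP = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct(bits):
    flip = LOOKUP[syndrome(bits)]
    out = list(bits)
    if flip is not None:
        out[flip] ^= 1
    return tuple(out)

# Any single bit flip on either logical codeword is detected and undone.
for logical in (0, 1):
    code = (logical,) * 3
    assert correct(code) == code
    for pos in range(3):
        noisy = list(code)
        noisy[pos] ^= 1
        assert correct(tuple(noisy)) == code
```

The crucial feature, shared with the quantum experiment, is that the syndrome depends only on parities, never on the encoded value, so correction does not collapse the stored information.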
Schöberl, Iris; Kortekaas, Kim; Schöberl, Franz F; Kotrschal, Kurt
2015-12-01
Dog heart rate (HR) is characterized by a respiratory sinus arrhythmia, and therefore makes an automatic algorithm for error correction of HR measurements hard to apply. Here, we present a new method of error correction for HR data collected with the Polar system, including (1) visual inspection of the data, (2) a standardized way to decide with the aid of an algorithm whether or not a value is an outlier (i.e., "error"), and (3) the subsequent removal of this error from the data set. We applied our new error correction method to the HR data of 24 dogs and compared the uncorrected and corrected data, as well as the algorithm-supported visual error correction (AVEC) with the Polar error correction. The results showed that fewer values were identified as errors after AVEC than after the Polar error correction (p < .001). After AVEC, the HR standard deviation and variability (HRV; i.e., RMSSD, pNN50, and SDNN) were significantly greater than after correction by the Polar tool (all p < .001). Furthermore, the HR data strings with deleted values seemed to be closer to the original data than were those with inserted means. We concluded that our method of error correction is more suitable for dog HR and HR variability than is the customized Polar error correction, especially because AVEC decreases the likelihood of Type I errors, preserves the natural variability in HR, and does not lead to a time shift in the data.
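A minimal version of such an algorithm-supported correction, deciding per value whether it is an outlier relative to its neighbours and deleting it rather than inserting a mean, might look like this. The 30% threshold and 11-beat window are illustrative assumptions, not the authors' published parameters.

```python
import numpy as np

def remove_hr_outliers(rr_ms, threshold=0.3):
    """Drop R-R intervals deviating from the local median by more than
    `threshold` (as a fraction of that median). Deleting values, rather
    than inserting means, follows the abstract's preferred strategy."""
    rr = np.asarray(rr_ms, dtype=float)
    keep = []
    for idx, val in enumerate(rr):
        window = rr[max(0, idx - 5):idx + 6]   # local neighbourhood
        med = np.median(window)
        if abs(val - med) <= threshold * med:
            keep.append(val)
    return np.array(keep)

rr = [800, 810, 790, 1600, 805, 795, 400, 802]   # two embedded artifacts
clean = remove_hr_outliers(rr)                    # 1600 and 400 removed
```

Deleting rather than imputing preserves the natural beat-to-beat variability, which is exactly the property the authors argue matters for HRV measures such as RMSSD and SDNN.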
Significance of gauge line error in orifice measurement
Bowen, J.W.
1995-12-01
Pulsation-induced gauge line amplification can cause errors in the recorded differential signal used to calculate flow. Its presence may be detected by using dual transmitters (one connected at the orifice taps, the other at the end of the gauge lines) and comparing the relative peak-to-peak amplitudes. Its effect on the recorded differential may be determined by averaging both signals with a PC-based data acquisition and analysis system. Remedial action is recommended in all cases where amplification is detected. Use of close-connect, full-opening manifolds is suggested to decouple the gauge lines' resonant frequency from that of the excitation, by positioning the recording device as close to the origin of the process signal as possible.
Automated Essay Scoring With e-rater[R] V.2
ERIC Educational Resources Information Center
Attali, Yigal; Burstein, Jill
2006-01-01
E-rater[R] has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that is different from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
NASA Astrophysics Data System (ADS)
Konings, A. G.; Gruber, A.; Mccoll, K. A.; Alemohammad, S. H.; Entekhabi, D.
2015-12-01
Validating large-scale estimates of geophysical variables by comparing them to in situ measurements neglects the fact that these in situ measurements are not generally representative of the larger area. That is, in situ measurements contain some `representativeness error'. They also have their own sensor errors. The naïve approach of characterizing the errors of a remote sensing or modeling dataset by comparison to in situ measurements thus leads to error estimates that are spuriously inflated by the representativeness and other errors in the in situ measurements. Nevertheless, this naïve approach is still very common in the literature. In this work, we introduce an alternative estimator of the large-scale dataset error that explicitly takes into account the fact that the in situ measurements have some unknown error. The performance of the two estimators is then compared in the context of soil moisture datasets under different conditions for the true soil moisture climatology and dataset biases. The new estimator is shown to lead to a more accurate characterization of the dataset errors under the most common conditions. If a third dataset is available, the principles of the triple collocation method can be used to determine the errors of both the large-scale estimates and in situ measurements. However, triple collocation requires that the errors in all datasets are uncorrelated with each other and with the truth. We show that even when the assumptions of triple collocation are violated, a triple collocation-based validation approach may still be more accurate than a naïve comparison to in situ measurements that neglects representativeness errors.
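The triple collocation estimator referred to above can be written down compactly. Under its assumptions (zero-mean errors, mutually uncorrelated and uncorrelated with the truth), the covariances of three datasets identify each one's error variance; the soil-moisture numbers below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Synthetic truth and three datasets with independent errors
# (e.g., satellite retrieval, land-surface model, in situ probe).
truth = rng.normal(0.25, 0.05, n)
sat = truth + rng.normal(0, 0.03, n)
model = truth + rng.normal(0, 0.02, n)
insitu = truth + rng.normal(0, 0.04, n)

def tc_error_std(x, y, z):
    """Triple-collocation estimate of x's error standard deviation:
    Var(err_x) = Cov(x,x) - Cov(x,y) * Cov(x,z) / Cov(y,z)."""
    c = np.cov(np.vstack([x, y, z]))
    return np.sqrt(c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2])

sat_err = tc_error_std(sat, model, insitu)     # recovers ~0.03
```

Permuting the arguments yields the error levels of the other two datasets, which is how the errors of both the large-scale product and the in situ sensor can be characterized simultaneously, rather than loading the representativeness error onto the large-scale dataset.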
Compensation method for the alignment angle error in pitch deviation measurement
NASA Astrophysics Data System (ADS)
Liu, Yongsheng; Fang, Suping; Wang, Huiyi; Taguchi, Tetsuya; Takeda, Ryohei
2016-05-01
When measuring the tooth flank of an involute helical gear by gear measuring center (GMC), the alignment angle error of a gear axis, which was caused by the assembly error and manufacturing error of the GMC, will affect the measurement accuracy of pitch deviation of the gear tooth flank. Based on the model of the involute helical gear and the tooth flank measurement theory, a method is proposed to compensate the alignment angle error that is included in the measurement results of pitch deviation, without changing the initial measurement method of the GMC. Simulation experiments are done to verify the compensation method and the results show that after compensation, the alignment angle error of the gear axis included in measurement results of pitch deviation declines significantly, more than 90% of the alignment angle errors are compensated, and the residual alignment angle errors in pitch deviation measurement results are less than 0.1 μm. It shows that the proposed method can improve the measurement accuracy of the GMC when measuring the pitch deviation of involute helical gear.
Wang, Ming; Flanders, W Dana; Bostick, Roberd M; Long, Qi
2012-12-20
Measurement error is common in epidemiological and biomedical studies. When biomarkers are measured in batches or groups, measurement error is potentially correlated within each batch or group. In regression analysis, most existing methods are not applicable in the presence of batch-specific measurement error in predictors. We propose a robust conditional likelihood approach to account for batch-specific error in predictors when batch effect is additive and the predominant source of error, which requires no assumptions on the distribution of measurement error. Although a regression model with batch as a categorical covariable yields the same parameter estimates as the proposed conditional likelihood approach for linear regression, this result does not hold in general for all generalized linear models, in particular, logistic regression. Our simulation studies show that the conditional likelihood approach achieves better finite sample performance than the regression calibration approach or a naive approach without adjustment for measurement error. In the case of logistic regression, our proposed approach is shown to also outperform the regression approach with batch as a categorical covariate. In addition, we also examine a 'hybrid' approach combining the conditional likelihood method and the regression calibration method, which is shown in simulations to achieve good performance in the presence of both batch-specific and measurement-specific errors. We illustrate our method by using data from a colorectal adenoma study.
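For the linear-regression case, the equivalence the authors note (conditional likelihood giving the same estimates as a regression with batch dummies) amounts to within-batch centering, which can be sketched as follows; the batch sizes and variances are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n_batches, per_batch = 40, 10

# True biomarker x; measurements within a batch share an additive error b_j.
x = rng.normal(0, 1, (n_batches, per_batch))
b = rng.normal(0, 1.5, (n_batches, 1))            # batch-specific errors
w = x + b                                          # observed biomarker
y = 2.0 + 1.0 * x + rng.normal(0, 0.3, x.shape)   # linear outcome

# Naive: pool everything and ignore the batch structure (attenuated slope).
naive = np.polyfit(w.ravel(), y.ravel(), 1)[0]

# Conditioning on batch, equivalent for linear models to a batch
# dummy-variable regression: center w and y within each batch.
w_c = w - w.mean(axis=1, keepdims=True)
y_c = y - y.mean(axis=1, keepdims=True)
within = (w_c * y_c).sum() / (w_c ** 2).sum()
```

For logistic regression the dummy-variable and conditional-likelihood fits diverge, which is where the conditional likelihood formulation earns its keep in the paper.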
Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar
Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas
2008-06-24
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
Zhu, Minhao; Wei, Haoyun; Wu, Xuejian; Li, Yan
2014-08-01
Periodic error is the major problem that limits the accuracy of heterodyne interferometry. A traceable system for periodic error measurement is developed based on a nonlinearity free Fabry-Perot (F-P) interferometer. The displacement accuracy of the F-P interferometer is 0.49 pm at 80 ms averaging time, with the measurement results referenced to an optical frequency comb. Experimental comparison between the F-P interferometer and a commercial heterodyne interferometer is carried out and it shows that the first harmonic periodic error dominates in the commercial heterodyne interferometer with an error amplitude of 4.64 nm.
Steinsvåg, Kjersti; Bråtveit, Magne; Moen, Bente E; Kromhout, Hans
2007-01-01
Objectives To evaluate the reliability of an expert team assessing exposure to carcinogens in the offshore petroleum industry and to study how the information provided influenced the agreement among raters. Methods Eight experts individually assessed the likelihood of exposure for combinations of 17 carcinogens, 27 job categories and four time periods (1970–1979, 1980–1989, 1990–1999 and 2000–2005). Each rater assessed 1836 combinations based on summary documents on carcinogenic agents, which included descriptions of sources of exposure and products, descriptions of work processes carried out within the different job categories, and monitoring data. Inter‐rater agreement was calculated using Cohen's kappa index and single and average score intraclass correlation coefficients (ICC) (ICC(2,1) and ICC(2,8), respectively). Differences in inter‐rater agreement for time periods, raters, International Agency for Research on Cancer groups and the amount of information provided were consequently studied. Results Overall, 18% of the combinations were denoted as possible exposure, and 14% scored probable exposure. Stratified by the 17 carcinogenic agents, the probable exposure prevalence ranged from 3.8% for refractory ceramic fibres to 30% for crude oil. Overall mean kappa was 0.42 (ICC(2,1) = 0.62 and ICC(2,8) = 0.93). Providing limited quantitative measurement data was associated with less agreement than for equally well described carcinogens without sampling data. Conclusion The overall κ and single‐score ICC indicate that the raters agree on exposure estimates well above the chance level. The levels of inter‐rater agreement were higher than in other comparable studies. The average score ICC indicates reliable mean estimates and implies that sufficient raters were involved. The raters seemed to have enough documentation on which to base their estimates, but provision of limited monitoring data leads to more incongruence among raters. Having real
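The single-score and average-score ICCs used above follow from a two-way random-effects ANOVA decomposition. A minimal sketch, checked against the classic Shrout and Fleiss example data:

```python
import numpy as np

def icc2(ratings):
    """ICC(2,1) and ICC(2,k) from a subjects-by-raters matrix, via the
    standard two-way random-effects mean squares (Shrout & Fleiss)."""
    x = np.asarray(ratings, float)
    n, k = x.shape
    grand = x.mean()
    ms_r = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
    ms_c = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # raters
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0) + grand
    ms_e = (resid ** 2).sum() / ((n - 1) * (k - 1))
    icc_single = (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
    icc_avg = (ms_r - ms_e) / (ms_r + (ms_c - ms_e) / n)
    return icc_single, icc_avg

# Classic Shrout & Fleiss (1979) example: 6 subjects rated by 4 raters.
data = [[9, 2, 5, 8], [6, 1, 3, 2], [8, 4, 6, 8],
        [7, 1, 2, 6], [10, 5, 6, 9], [6, 2, 4, 7]]
single, average = icc2(data)   # approximately 0.29 and 0.62
```

As in the abstract, the average-score ICC sits well above the single-score ICC: averaging over k raters suppresses rater disagreement, so it quantifies the reliability of the panel's mean estimate rather than of any one rater.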
Analysis of the possible measurement errors for the PM10 concentration measurement at Gosan, Korea
NASA Astrophysics Data System (ADS)
Shin, S.; Kim, Y.; Jung, C.
2010-12-01
The reliability of the measurement of ambient trace species is an important issue, especially in a background area such as Gosan in Jeju Island, Korea. In a previous episodic study in Gosan (NIER, 2006), it was found that the PM10 concentration measured by the β-ray absorption method (BAM) was higher than that measured by the gravimetric method (GMM), and the correlation between them was low. Based on previous studies (Chang et al., 2001; Katsuyuki et al., 2008), two probable reasons for the discrepancy are identified: (1) negative measurement error from the evaporation of volatile ambient species, such as nitrate, chloride, and ammonium, at the filter in GMM, and (2) positive error from the absorption of water vapor during measurement in BAM. There was no heater at the inlet of BAM in Gosan during the sampling period. In this study, we have analyzed the negative and positive errors quantitatively by using the gas/particle equilibrium model SCAPE (Simulating Composition of Atmospheric Particles at Equilibrium) for the data between May 2001 and June 2008, together with the aerosol and gaseous composition data. We have estimated the degree of evaporation at the filter in GMM by comparing the volatile ionic species concentration calculated by SCAPE at thermodynamic equilibrium under the meteorological conditions of the sampling period with the mass concentration measured by ion chromatography. Also, based on the aerosol water content calculated by SCAPE, we have estimated quantitatively the effect of ambient humidity during measurement in BAM. Subsequently, this study examines whether the discrepancy can be explained by other factors by applying multiple regression analyses. References Chang, C. T., Tsai, C. J., Lee, C. T., Chang, S. Y., Cheng, M. T., Chein, H. M., 2001, Differences in PM10 concentrations measured by β-gauge monitor and hi-vol sampler, Atmospheric Environment, 35, 5741-5748. Katsuyuki, T. K., Hiroaki, M. R., and Kazuhiko, S. K., 2008, Examination of discrepancies between beta
Quantifying Error in Survey Measures of School and Classroom Environments
ERIC Educational Resources Information Center
Schweig, Jonathan David
2014-01-01
Developing indicators that reflect important aspects of school and classroom environments has become central in a nationwide effort to develop comprehensive programs that measure teacher quality and effectiveness. Formulating teacher evaluation policy necessitates accurate and reliable methods for measuring these environmental variables. This…
Quantization Error Reduction in the Measurement of Fourier Intensity for Phase Retrieval
NASA Astrophysics Data System (ADS)
Yang, Shiyuan; Takajo, Hiroaki
2004-08-01
The quantization error in the measurement of Fourier intensity for phase retrieval is discussed and a multispectra method is proposed to reduce this error. The Fourier modulus used for phase retrieval is usually obtained by measuring Fourier intensity with a digital device. Therefore, quantization error in the measurement of Fourier intensity leads to an error in the reconstructed object when iterative Fourier transform algorithms are used. The multispectra method uses several Fourier intensity distributions for a number of measurement ranges to generate a Fourier intensity distribution with a low quantization error. Simulations show that the multispectra method is effective in retrieving objects with real or complex distributions when the iterative hybrid input-output algorithm (HIO) is used.
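A minimal sketch of the idea behind the multispectra method, assuming a simple uniform-ADC model and made-up intensity data: several quantized records of the same intensity distribution are taken at different full-scale ranges, and for each sample the finest non-saturated range is kept, lowering the effective quantization error.

```python
import numpy as np

def quantize(x, full_scale, bits=8):
    """Simulate an ADC: round to the nearest level, clipped to full scale."""
    levels = 2 ** bits
    step = full_scale / levels
    return np.clip(np.round(x / step), 0, levels - 1) * step

# Hypothetical Fourier-intensity samples spanning two decades.
rng = np.random.default_rng(0)
intensity = 10.0 ** rng.uniform(-2, 0, size=1000)  # in [0.01, 1)

# Single-range measurement: one full-scale setting for all samples.
single = quantize(intensity, full_scale=1.0)

# "Multispectra" sketch: re-record at several full-scale settings and,
# per sample, keep the finest range that does not saturate.
multi = np.full_like(intensity, np.nan)
for fs in (1.0, 0.1):  # coarse to fine; finer ranges overwrite where valid
    unsaturated = intensity <= fs
    multi[unsaturated] = quantize(intensity[unsaturated], full_scale=fs)

rms_single = np.sqrt(np.mean((single - intensity) ** 2))
rms_multi = np.sqrt(np.mean((multi - intensity) ** 2))
```

Samples above 0.1 are identical in both records; samples below 0.1 get a ten times finer quantization step, so the combined record's RMS quantization error is strictly smaller.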
Measurement error analysis of Brillouin lidar system using F-P etalon and ICCD
NASA Astrophysics Data System (ADS)
Yao, Yuan; Niu, Qunjie; Liang, Kun
2016-09-01
Brillouin lidar systems using a Fabry-Pérot (F-P) etalon and an intensified charge-coupled device (ICCD) are capable of real-time remote measurement of seawater properties such as temperature. The measurement accuracy is determined by two key parameters, the Brillouin frequency shift and the Brillouin linewidth. Three major errors are discussed: laser frequency instability, the calibration error of the F-P etalon, and random shot noise. Theoretical analysis combined with simulation results showed that the laser and F-P etalon introduce about 4 MHz of error into both the Brillouin shift and the linewidth, and that random noise brings more error to the linewidth than to the frequency shift. A comprehensive comparative analysis of the overall errors under various conditions showed that a colder ocean (10 °C) is more accurately measured with the Brillouin linewidth, while a warmer ocean (30 °C) is better measured with the Brillouin shift.
Thomas, Laine; Stefanski, Leonard A.; Davidian, Marie
2013-01-01
In clinical studies, covariates are often measured with error due to biological fluctuations, device error, and other sources. Summary statistics and regression models based on mismeasured data will differ from the corresponding analysis based on the “true” covariate. Statistical analysis can be adjusted for measurement error; however, the various methods exhibit a tradeoff between convenience and performance. Moment Adjusted Imputation (MAI) is a method for measurement error in a scalar latent variable that is easy to implement and performs well in a variety of settings. In practice, multiple covariates may be similarly influenced by biological fluctuations, inducing correlated multivariate measurement error. The extension of MAI to the setting of multivariate latent variables involves unique challenges. Alternative strategies are described, including a computationally feasible option that is shown to perform well. PMID:24072947
A semiparametric copula method for Cox models with covariate measurement error.
Kim, Sehee; Li, Yi; Spiegelman, Donna
2016-01-01
We consider the measurement error problem in the Cox model, where the underlying association between the true exposure and its surrogate is unknown but can be estimated from a validation study. Under this framework, one can accommodate general distributional structures for the error-prone covariates, not restricted to a linear additive measurement error model or Gaussian measurement error. The proposed copula-based approach enables us to fit flexible measurement error models and is applicable with an internal or external validation study. Large-sample properties are derived, and finite-sample properties are investigated through extensive simulation studies. The methods are applied to a study of physical activity in relation to breast cancer mortality in the Nurses' Health Study.
Shen, Chung-Wei; Chen, Yi-Hau
2015-10-01
Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a candidate marginal model, accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. In contrast, naive procedures that ignore such complexities in the data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations.
Mints, M.Ya.; Chinkov, V.N.
1995-09-01
Rational algorithms for measuring the harmonic coefficient in microprocessor instruments for measuring nonlinear distortions, based on digital processing of the codes of the instantaneous values of the signal under investigation, are described, and the errors of such instruments are derived.
The effect of proficiency level on measurement error of range of motion
Akizuki, Kazunori; Yamaguchi, Kazuto; Morita, Yoshiyuki; Ohashi, Yukari
2016-01-01
[Purpose] The aims of this study were to evaluate the type and extent of error in the measurement of range of motion and to evaluate the effect of evaluators’ proficiency level on measurement error. [Subjects and Methods] The participants were 45 university students, in different years of their physical therapy education, and 21 physical therapists, with up to three years of clinical experience in a general hospital. Range of motion of right knee flexion was measured using a universal goniometer. An electrogoniometer attached to the right knee and hidden from the view of the participants was used as the criterion to evaluate error in measurement with the universal goniometer. The type and magnitude of error were evaluated using the Bland-Altman method. [Results] Measurements with the universal goniometer were not influenced by systematic bias. The extent of random error in measurement decreased as the level of proficiency and clinical experience increased. [Conclusion] Measurements of range of motion obtained using a universal goniometer are influenced by random errors, with the extent of error depending on proficiency. Therefore, increasing the amount of practice would be an effective strategy for improving the accuracy of range of motion measurements. PMID:27799712
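The Bland-Altman analysis used in the study above has a compact form: the bias is the mean difference between instruments and the 95% limits of agreement are bias ± 1.96 SD of the differences. A minimal sketch with hypothetical knee-flexion readings (illustrative numbers, not the study's data):

```python
import numpy as np

def bland_altman(measured, criterion):
    """Bland-Altman bias and 95% limits of agreement (normal-theory form)."""
    diff = np.asarray(measured, float) - np.asarray(criterion, float)
    bias = diff.mean()        # systematic error between the two instruments
    sd = diff.std(ddof=1)     # spread of the random error
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical knee-flexion readings (degrees): universal goniometer vs.
# an electrogoniometer used as the criterion, mirroring the study design.
gonio   = [132, 128, 135, 140, 129, 137, 131, 134]
electro = [130, 129, 133, 141, 127, 138, 130, 135]
bias, (lo, hi) = bland_altman(gonio, electro)
```

A bias close to zero with narrow limits of agreement corresponds to the study's finding of no systematic error and small random error.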
Tilt error in cryospheric surface radiation measurements at high latitudes: a model study
NASA Astrophysics Data System (ADS)
Bogren, Wiley Steven; Faulkner Burkhart, John; Kylling, Arve
2016-03-01
We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can introduce up to 2.7, 8.1, and 13.5% error, respectively, into the measured irradiance, and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo. Simulations including a cloud layer demonstrate decreasing tilt error with increasing cloud optical depth.
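For the direct component alone, the tilt error has a simple closed form when the sensor is tilted in the solar azimuth plane. The sketch below uses only that worst-case geometry (no diffuse contribution, no spectral integration), so its numbers come out slightly larger than the spectrally integrated values quoted in the abstract:

```python
import math

def direct_tilt_error(sza_deg, tilt_deg):
    """Worst-case relative error in direct irradiance for a sensor tilted
    toward the sun in the solar azimuth plane: cos(sza - tilt)/cos(sza) - 1."""
    return (math.cos(math.radians(sza_deg - tilt_deg))
            / math.cos(math.radians(sza_deg)) - 1.0)

# At a 60 degree solar zenith angle, a few degrees of tilt already matter:
for tilt in (1, 3, 5):
    print(f"tilt {tilt} deg -> {100 * direct_tilt_error(60, tilt):.1f}% error")
```

The errors grow roughly linearly with tilt angle and with tan(solar zenith angle), which is why the effect is so pronounced at high latitudes.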
ERIC Educational Resources Information Center
Wang, Zhen; Yao, Lihua
2013-01-01
The current study used simulated data to investigate the properties of a newly proposed method (Yao's rater model) for modeling rater severity and its distribution under different conditions. Our study examined the effects of rater severity, distributions of rater severity, the difference between item response theory (IRT) models with rater effect…
Quantitative evaluation of statistical errors in small-angle X-ray scattering measurements
Sedlak, Steffen M.; Bruetzel, Linda K.; Lipfert, Jan
2017-01-01
A new model is proposed for the measurement errors incurred in typical small-angle X-ray scattering (SAXS) experiments, which takes into account the setup geometry and physics of the measurement process. The model accurately captures the experimentally determined errors from a large range of synchrotron and in-house anode-based measurements. Its most general formulation gives for the variance of the buffer-subtracted SAXS intensity σ²(q) = [I(q) + const.]/(kq), where I(q) is the scattering intensity as a function of the momentum transfer q; k and const. are fitting parameters that are characteristic of the experimental setup. The model gives a concrete procedure for calculating realistic measurement errors for simulated SAXS profiles. In addition, the results provide guidelines for optimizing SAXS measurements, which are in line with established procedures for SAXS experiments, and enable a quantitative evaluation of measurement errors. PMID:28381982
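The stated variance model can be applied directly to attach realistic errors to a simulated profile. In this sketch, the values of k and const. and the intensity profile are made-up stand-ins for setup-specific quantities:

```python
import numpy as np

def saxs_sigma(q, I, k=1.0e4, const=50.0):
    """Error model from the abstract: sigma^2(q) = (I(q) + const) / (k*q).
    k and const are setup-specific fit parameters (values here are made up)."""
    return np.sqrt((I + const) / (k * q))

q = np.linspace(0.01, 0.5, 100)          # momentum transfer (1/Angstrom)
I = 1.0e3 * np.exp(-(q * 30) ** 2 / 3)   # toy Guinier-like intensity profile
sigma = saxs_sigma(q, I)

# Simulated noisy profile with realistic, q-dependent errors:
rng = np.random.default_rng(1)
I_noisy = I + rng.normal(0.0, sigma)
```

Note the two regimes the model captures: at low q the intensity term dominates (counting-statistics-like errors), while at high q the constant term sets an error floor.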
Errors and uncertainties in the measurement of ultrasonic wave attenuation and phase velocity.
Kalashnikov, Alexander N; Challis, Richard E
2005-10-01
This paper presents an analysis of the error generation mechanisms that affect the accuracy of measurements of ultrasonic wave attenuation coefficient and phase velocity as functions of frequency. In the first stage of the analysis we show that electronic system noise, expressed in the frequency domain, maps into errors in the attenuation and the phase velocity spectra in a highly nonlinear way; the condition for minimum error is when the total measured attenuation is around 1 Neper. The maximum measurable total attenuation has a practical limit of around 6 Nepers and the minimum measurable value is around 0.1 Neper. In the second part of the paper we consider electronic noise as the primary source of measurement error; errors in attenuation result from additive noise whereas errors in phase velocity result from both additive noise and system timing jitter. Quantization noise can be neglected if the amplitude of the additive noise is comparable with the quantization step, and coherent averaging is employed. Experimental results are presented which confirm the relationship between electronic noise and measurement errors. The analytical technique is applicable to the design of ultrasonic spectrometers, formal assessment of the accuracy of ultrasonic measurements, and the optimization of signal processing procedures to achieve a specified accuracy.
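The roughly 1-Neper optimum can be motivated by a simple propagation argument: for additive amplitude noise δA, the error in the total attenuation τ = ln(A0/A) is δA/A, so the relative error scales as exp(τ)/τ, which is minimized at τ = 1. A numerical check of that claim (an illustration, not the paper's full analysis):

```python
import numpy as np

# Relative error in total attenuation tau = ln(A0/A) from additive noise:
# A = A0*exp(-tau), delta_tau = delta_A / A, hence
# delta_tau / tau = (delta_A / A0) * exp(tau) / tau -- minimized at tau = 1.
tau = np.linspace(0.05, 6, 1000)
rel_err = np.exp(tau) / tau   # up to the constant noise factor delta_A / A0
tau_opt = tau[np.argmin(rel_err)]
```

The same curve also shows why the practical measurement window is bounded: the relative error blows up both for very small total attenuation (tiny signal change) and for very large attenuation (signal buried in noise).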
Modal Correction Method For Dynamically Induced Errors In Wind-Tunnel Model Attitude Measurements
NASA Technical Reports Server (NTRS)
Buehrle, R. D.; Young, C. P., Jr.
1995-01-01
This paper describes a method for correcting the dynamically induced bias errors in wind tunnel model attitude measurements using measured modal properties of the model system. At NASA Langley Research Center, the predominant instrumentation used to measure model attitude is a servo-accelerometer device that senses the model attitude with respect to the local vertical. Under smooth wind tunnel operating conditions, this inertial device can measure the model attitude with an accuracy of 0.01 degree. During wind tunnel tests when the model is responding at high dynamic amplitudes, the inertial device also senses the centrifugal acceleration associated with model vibration. This centrifugal acceleration results in a bias error in the model attitude measurement. A study of the response of a cantilevered model system to a simulated dynamic environment shows significant bias error in the model attitude measurement can occur and is vibration mode and amplitude dependent. For each vibration mode contributing to the bias error, the error is estimated from the measured modal properties and tangential accelerations at the model attitude device. Linear superposition is used to combine the bias estimates for individual modes to determine the overall bias error as a function of time. The modal correction model predicts the bias error to a high degree of accuracy for the vibration modes characterized in the simulated dynamic environment.
Tosteson, Tor D; Buzas, Jeffrey S; Demidenko, Eugene; Karagas, Margaret
2003-04-15
Covariate measurement error is often a feature of scientific data used for regression modelling. The consequences of such errors include a loss of power of tests of significance for the regression parameters corresponding to the true covariates. Power and sample size calculations that ignore covariate measurement error tend to overestimate power and underestimate the actual sample size required to achieve a desired power. In this paper we derive a novel measurement error corrected power function for generalized linear models using a generalized score test based on quasi-likelihood methods. Our power function is flexible in that it is adaptable to designs with a discrete or continuous scalar covariate (exposure) that can be measured with or without error, allows for additional confounding variables and applies to a broad class of generalized regression and measurement error models. A program is described that provides sample size or power for a continuous exposure with a normal measurement error model and a single normal confounder variable in logistic regression. We demonstrate the improved properties of our power calculations with simulations and numerical studies. An example is given from an ongoing study of cancer and exposure to arsenic as measured by toenail concentrations and tap water samples.
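The qualitative point, that ignoring covariate error overstates power and understates the required sample size, can be illustrated with the classical attenuation factor. This is a simpler back-of-the-envelope relation than the paper's quasi-likelihood score test, stated here only as a rough guide:

```python
# Classical additive measurement error W = X + U attenuates a regression
# slope by the reliability ratio lam = var_x / (var_x + var_u), so the
# sample size needed to detect the same effect grows roughly by 1/lam^2.
# (A rough classical-attenuation illustration, not the paper's method.)
def inflation_factor(var_x, var_u):
    lam = var_x / (var_x + var_u)
    return 1.0 / lam ** 2

# Error variance equal to the true-covariate variance halves the
# reliability (lam = 0.5) and roughly quadruples the required sample size:
print(inflation_factor(1.0, 1.0))  # -> 4.0
```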
Ambient Temperature Changes and the Impact to Time Measurement Error
NASA Astrophysics Data System (ADS)
Ogrizovic, V.; Gucevic, J.; Delcev, S.
2012-12-01
Measurements in geodetic astronomy are mainly performed outdoors at night, when the temperature often decreases very quickly. Time-keeping during a measuring session is provided by collecting UTC time ticks from a GPS receiver and transferring them to a laptop computer. An interrupt handler routine processes the received UTC impulses in real time and calculates the clock parameters. The characteristics of the computer's quartz clock are influenced by temperature changes in the environment. We exposed the laptop to different environmental temperature conditions and calculated the clock parameters for each environmental model. The results show that a laptop used for time-keeping in outdoor measurements should be kept in a stable temperature environment, at temperatures near 20 °C.
Mean-square error due to gradiometer field measuring devices.
Hatsell, C P
1991-06-01
Gradiometers use spatial common mode magnetic field rejection to reduce interference from distant sources. They also introduce distortion that can be severe, rendering experimental data difficult to interpret. Attempts to recover the measured magnetic field from the gradiometer output will be plagued by the nonexistence of a spatial function for deconvolution (except for first-order gradiometers), and by the high-pass nature of the spatial transform that emphasizes high spatial frequency noise. Goals of a design for a facility for measuring biomagnetic fields should be an effective shielded room and a field detector employing a first-order gradiometer.
Measurement, Sampling, and Equating Errors in Large-Scale Assessments
ERIC Educational Resources Information Center
Wu, Margaret
2010-01-01
In large-scale assessments, such as state-wide testing programs, national sample-based assessments, and international comparative studies, there are many steps involved in the measurement and reporting of student achievement. There are always sources of inaccuracies in each of the steps. It is of interest to identify the source and magnitude of…
Defining uncertainty and error in planktic foraminiferal oxygen isotope measurements
NASA Astrophysics Data System (ADS)
Fraass, A. J.; Lowery, C. M.
2017-02-01
Foraminifera are the backbone of paleoceanography. Planktic foraminifera are one of the leading tools for reconstructing water column structure. However, there are unconstrained variables when dealing with uncertainty in the reproducibility of oxygen isotope measurements. This study presents the first results from a simple model of foraminiferal calcification (Foraminiferal Isotope Reproducibility Model; FIRM), designed to estimate uncertainty in oxygen isotope measurements. FIRM uses parameters including location, depth habitat, season, number of individuals included in measurement, diagenesis, misidentification, size variation, and vital effects to produce synthetic isotope data in a manner reflecting natural processes. Reproducibility is then tested using Monte Carlo simulations. Importantly, this is not an attempt to fully model the entire complicated process of foraminiferal calcification; instead, we are trying to include only enough parameters to estimate the uncertainty in foraminiferal δ18O records. Two well-constrained empirical data sets are simulated successfully, demonstrating the validity of our model. The results from a series of experiments with the model show that reproducibility is not only largely controlled by the number of individuals in each measurement but also strongly a function of local oceanography if the number of individuals is held constant. Parameters like diagenesis or misidentification have an impact on both the precision and the accuracy of the data. FIRM is a tool to estimate isotopic uncertainty values and to explore the impact of myriad factors on the fidelity of paleoceanographic records, particularly for the Holocene.
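The core of such a reproducibility estimate can be sketched as a Monte Carlo over the number of individuals per measurement. Here `sigma_natural` is a made-up stand-in for the combined inter-individual variability (seasonality, depth habitat, vital effects) that FIRM parameterizes in detail:

```python
import numpy as np

def d18o_reproducibility(n_individuals, sigma_natural=0.3,
                         n_trials=5000, seed=2):
    """Monte Carlo sketch: each measurement averages n individual shells
    whose d18O varies naturally (all sources lumped into sigma_natural,
    in per mil). Returns the SD of repeated simulated measurements."""
    rng = np.random.default_rng(seed)
    samples = rng.normal(0.0, sigma_natural, size=(n_trials, n_individuals))
    return samples.mean(axis=1).std()

# Reproducibility improves roughly as 1/sqrt(n_individuals):
for n in (1, 5, 20):
    print(n, round(d18o_reproducibility(n), 3))
```

This reproduces the model's headline result in miniature: the number of individuals per measurement is the dominant control on reproducibility, and the local spread (here `sigma_natural`, in reality set by the local oceanography) scales the whole curve.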
Position error correction in absolute surface measurement based on a multi-angle averaging method
NASA Astrophysics Data System (ADS)
Wang, Weibo; Wu, Biwei; Liu, Pengfei; Liu, Jian; Tan, Jiubin
2017-04-01
We present a method for position error correction in absolute surface measurement based on multi-angle averaging. Differences between shear rotation measurements at overlapping areas can be used to estimate the unknown relative position errors of the measurements. The model and the solution of the estimation algorithm are discussed in detail. The estimation algorithm adopts a least-squares technique to eliminate azimuthal errors caused by rotation inaccuracy. The cost functions can be minimized to determine the true values of the unknown Zernike polynomial coefficients and rotation angle. Experimental results demonstrate the validity of the proposed method.
Error analysis in stereo vision for location measurement of 3D point
NASA Astrophysics Data System (ADS)
Li, Yunting; Zhang, Jun; Tian, Jinwen
2015-12-01
Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that computes the uncertainty region of the point location by intersecting the two pixels' fields of view, which may produce loose bounds. Moreover, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method of estimating the location error that takes most sources of error into account. We consolidate and simplify all the input errors into five parameters by a rotation transformation. We then use the fast midpoint method to derive the mathematical relationships between the target point and these parameters, obtaining the expectation and covariance matrix of the 3D point location, which constitute the uncertainty region of the point location. Afterwards, we trace the propagation of the primitive input errors through the stereo system and the whole analysis process, from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our method.
Assessment of systematic measurement errors for acoustic travel-time tomography of the atmosphere.
Vecherin, Sergey N; Ostashev, Vladimir E; Wilson, D Keith
2013-09-01
Two algorithms are described for assessing systematic errors in acoustic travel-time tomography of the atmosphere, the goal of which is to reconstruct the temperature and wind velocity fields given the transducers' locations and the measured travel times of sound propagating between each speaker-microphone pair. The first algorithm aims at assessing the errors simultaneously with the mean field reconstruction. The second algorithm uses the results of the first algorithm to identify the ray paths corrupted by the systematic errors and then estimates these errors more accurately. Numerical simulations show that the first algorithm can improve the reconstruction when relatively small systematic errors are present in all paths. The second algorithm significantly improves the reconstruction when systematic errors are present in a few, but not all, ray paths. The developed algorithms were applied to experimental data obtained at the Boulder Atmospheric Observatory.
Jung, In-Gui; Yu, Il-Young; Kim, Soo-Yong; Lee, Dong-Kyu; Oh, Jae-Seop
2015-06-01
[Purpose] This study investigated the reliability of ankle dorsiflexion passive range of motion (DF-PROM) measurements obtained using a goniometer and Biodex dynamometer in stroke patients. [Subjects] Fifteen stroke patients participated in this study. [Methods] Ankle DF-PROM was assessed using a goniometer and Biodex dynamometer. Ankle DF-PROM was measured during two sessions with 7 days between tests. Intraclass correlation coefficient, standard error of measurement, and minimal detectable change values were used to assess the reliability of measurements obtained using both instruments. [Results] The intra-rater reliability for ankle DF-PROM using the goniometer was moderate and good for the two raters, while using the Biodex dynamometer, it was good for both raters. Inter-rater reliability using the goniometer was moderate; using the Biodex, it was good. [Conclusion] Both intra- and inter-reliability measurements of ankle DF-PROM were higher using a Biodex dynamometer than with a goniometer.
Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error
NASA Astrophysics Data System (ADS)
Miller, Austin
In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.
Tilt Error in Cryospheric Surface Radiation Measurements at High Latitudes: A Model Study
NASA Astrophysics Data System (ADS)
Bogren, W.; Kylling, A.; Burkhart, J. F.
2015-12-01
We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in-situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response foreoptic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 nm to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can introduce up to 2.6, 7.7, and 12.8% error, respectively, into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo.
Estimation and Propagation of Errors in Ice Sheet Bed Elevation Measurements
NASA Astrophysics Data System (ADS)
Johnson, J. V.; Brinkerhoff, D.; Nowicki, S.; Plummer, J.; Sack, K.
2012-12-01
This work is presented in two parts. In the first, we use a numerical inversion technique to determine a "mass conserving bed" (MCB) and estimate errors in interpolation of the bed elevation. The MCB inversion technique adjusts the bed elevation to assure that the mass flux determined from surface velocity measurements does not violate conservation. Cross validation of the MCB technique is done using a subset of available flight lines. The unused flight lines provide data to compare to, quantifying the errors produced by MCB and other interpolation methods. MCB errors are found to be similar to those produced with more conventional interpolation schemes, such as kriging. However, MCB interpolation is consistent with the physics that govern ice sheet models. In the second part, a numerical model of glacial ice is used to propagate errors in bed elevation to the kinematic surface boundary condition. Initially, a control run is completed to establish the surface velocity produced by the model. The control surface velocity is subsequently used as a target for data inversions performed on perturbed versions of the control bed. The perturbation of the bed represents the magnitude of error in bed measurement. Through the inversion for traction, errors in bed measurement are propagated forward to investigate errors in the evolution of the free surface. Our primary conclusion relates the magnitude of errors in the surface evolution to errors in the bed. By linking free surface errors back to the errors in bed interpolation found in the first part, we can suggest an optimal spacing of the radar flight lines used in bed acquisition.
Inter-Rater Reliability and Intra-Rater Reliability of Assessing the 2-Minute Push-Up Test.
Fielitz, Lynn; Coelho, Jeffrey; Horne, Thomas; Brechue, William
2016-02-01
The purpose of this study was to assess the inter-rater and intra-rater reliability of the 2-minute, 90° push-up test as utilized in the Army Physical Fitness Test. Analysis of rater assessment reliability included both total score agreement and agreement across individual push-up repetitions. The study utilized 8 raters who assessed 15 different videotaped push-up performances over 4 iterations separated by a minimum of 1 week. The 15 push-up participants were videotaped during the semiannual Army Physical Fitness Test. Each rater viewed the 15 push-up performances in random order and verbally responded with a "yes" or "no" to each push-up repetition. The data were analyzed using the Pearson product-moment correlation as well as kappa, modified kappa, and the intra-class correlation coefficient (3,1). An attribute agreement analysis was conducted to determine the percent of inter-rater and intra-rater agreement across individual push-ups. The results indicated that raters varied a great deal in assessing push-ups. Over the 4 trials of 15 participants, the overall scores assigned by the raters varied between 3.0 and 35.7 push-ups. Post hoc comparisons found a significant increase in the grand mean of push-ups from trials 1-3 to trial 4 (p < 0.05), and a significant difference among raters over the 4 trials (p < 0.05). Pearson coefficients for inter-rater reliability were between 0.10 and 0.97; intra-rater coefficients were between 0.48 and 0.99. Intra-rater agreement for individual push-up repetitions ranged from 41.8% to 84.8%. The results indicated that the raters failed to assess the same push-up repetition with the same score (below 70% agreement), and agreement between raters was similarly poor (29%). Interestingly, as previously mentioned, scores on trial 4 increased significantly, which might have been caused by rater drift or that the raters did not maintain
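The repetition-level agreement statistics used in the study above (percent agreement and kappa) are straightforward to compute. A minimal sketch with hypothetical yes/no calls by two raters, not the study's data:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical calls on the same items:
    chance-corrected agreement (observed - expected) / (1 - expected)."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    p1, p2 = Counter(r1), Counter(r2)
    expected = sum(p1[c] * p2[c] for c in set(r1) | set(r2)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical yes/no calls by two raters on ten push-up repetitions:
rater_a = ["y", "y", "n", "y", "y", "n", "y", "y", "y", "n"]
rater_b = ["y", "y", "n", "y", "n", "n", "y", "y", "y", "y"]
kappa = cohens_kappa(rater_a, rater_b)
```

Here raw agreement is 80%, but kappa is noticeably lower (about 0.52) because much of that agreement is expected by chance alone, which is why chance-corrected statistics matter for mostly-"yes" data like push-up counts.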
Measurement error associated with surveys of fish abundance in Lake Michigan
Krause, Ann E.; Hayes, Daniel B.; Bence, James R.; Madenjian, Charles P.; Stedman, Ralph M.
2002-01-01
In fisheries, imprecise measurements in catch data from surveys add uncertainty to the results of fishery stock assessments. The USGS Great Lakes Science Center (GLSC) began surveying the fall fish community of Lake Michigan with bottom trawls in 1962. Measurement error was evaluated at the level of individual tows for nine fish species collected in this survey by applying a measurement-error regression model to replicated trawl data. The estimates of measurement-error variance ranged from 0.37 (deepwater sculpin, Myoxocephalus thompsoni) to 1.23 (alewife, Alosa pseudoharengus) on a logarithmic scale, corresponding to coefficients of variation of 66% to 156%. The estimates appeared to increase with the range of temperature occupied by the fish species. This association may be a result of the variability in the fall thermal structure of the lake. The estimates may also be influenced by other factors, such as pelagic behavior and schooling. Measurement error might be reduced by surveying the fish community during other seasons and/or by using additional technologies, such as acoustics. Measurement-error estimates should be considered when interpreting results of assessments that use abundance information from USGS-GLSC surveys of Lake Michigan, and could be used if the survey design were altered. This study is the first to report estimates of measurement-error variance associated with this survey.
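The quoted coefficients of variation follow from the standard lognormal relation between log-scale variance and CV; a quick check that reproduces the abstract's range:

```python
import math

def cv_from_log_variance(sigma2):
    """CV of a lognormally distributed quantity with log-scale variance sigma2."""
    return math.sqrt(math.exp(sigma2) - 1.0)

# The abstract's log-scale variances map onto its quoted CV range:
# 0.37 -> CV of about 0.67 (~66%), 1.23 -> CV of about 1.56 (~156%).
cv_low = cv_from_log_variance(0.37)
cv_high = cv_from_log_variance(1.23)
```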
Error Measurements in an Acousto-Optic Tunable Filter Fiber Bragg Grating Sensor System
1994-05-01
Acousto-Optic Tunable Filter-Fiber Bragg Grating (AOTF-FBG) system. This analysis was targeted at investigating the measurement error in the AOTF-FBG system… Keywords: fiber Bragg grating, wavelength division multiplexing, acousto-optic tunable filter.
Small Inertial Measurement Units - Sources of Error and Limitations on Accuracy
NASA Technical Reports Server (NTRS)
Hoenk, M. E.
1994-01-01
Limits on the precision of small accelerometers for inertial measurement units are enumerated and discussed. Scaling laws and errors which affect the precision are discussed in terms of tradeoffs between size, sensitivity, and cost.
Violation of Heisenberg's error-disturbance uncertainty relation in neutron-spin measurements
NASA Astrophysics Data System (ADS)
Sulyok, Georg; Sponar, Stephan; Erhart, Jacqueline; Badurek, Gerald; Ozawa, Masanao; Hasegawa, Yuji
2013-08-01
In its original formulation, Heisenberg's uncertainty principle dealt with the relationship between the error of a quantum measurement and the disturbance thereby induced on the measured object. Meanwhile, Heisenberg's heuristic arguments have turned out to be correct only for special cases. An alternative, universally valid relation was derived by Ozawa in 2003. Here, we demonstrate that Ozawa's predictions hold for projective neutron-spin measurements. The experimental inaccessibility of error and disturbance claimed elsewhere has been overcome using a tomographic method. By a systematic variation of experimental parameters over the entire configuration space, the physical behavior of error and disturbance for projective spin-1/2 measurements is illustrated comprehensively. The violation of Heisenberg's original relation, as well as the validity of Ozawa's relation, become manifest. In addition, our results show that the widespread assumption of a reciprocal relation between error and disturbance is not valid in general.
A comparison between traditional and measurement-error growth models for weakfish Cynoscion regalis
Jiao, Yan
2016-01-01
Inferring growth for aquatic species is dependent upon accurate descriptions of age-length relationships, which may be degraded by measurement error in observed ages. Ageing error arises from biased and/or imprecise age determinations as a consequence of misinterpretation by readers or inability of ageing structures to accurately reflect true age. A Bayesian errors-in-variables (EIV) approach (i.e., measurement-error modeling) can account for ageing uncertainty during nonlinear growth curve estimation by allowing observed ages to be parametrically modeled as random deviates. Information on the latent age composition then comes from the specified prior distribution, which represents the true age structure of the sampled fish population. In this study, weakfish growth was modeled by means of traditional and measurement-error von Bertalanffy growth curves using otolith- or scale-estimated ages. Age determinations were assumed to be log-normally distributed, thereby incorporating multiplicative error with respect to ageing uncertainty. The prior distribution for true age was assumed to be uniformly distributed between ±4 of the observed age (yr) for each individual. Measurement-error growth models described weakfish that reached larger sizes but at slower rates, with median length-at-age being overestimated by traditional growth curves for the observed age range. In addition, measurement-error models produced slightly narrower credible intervals for parameters of the von Bertalanffy growth function, which may be an artifact of the specified prior distributions. Subjectivity is always apparent in the ageing of fishes and it is recommended that measurement-error growth models be used in conjunction with otolith-estimated ages to accurately capture the age-length relationship that is subsequently used in fisheries stock assessment and management. PMID:27688963
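The von Bertalanffy growth function at the core of both the traditional and measurement-error models can be sketched as follows; the parameter values below are illustrative, not the study's estimates:

```python
import numpy as np

def von_bertalanffy(age, l_inf, k, t0):
    """Expected length at age: L(a) = L_inf * (1 - exp(-K * (a - t0)))."""
    return l_inf * (1.0 - np.exp(-k * (age - t0)))

ages = np.arange(1, 13)
# two hypothetical parameter sets (not the study's estimates): the
# "measurement-error" curve reaches a larger size at a slower rate,
# mirroring the qualitative pattern described in the abstract
trad = von_bertalanffy(ages, l_inf=700.0, k=0.35, t0=-0.5)
eiv = von_bertalanffy(ages, l_inf=800.0, k=0.25, t0=-0.5)
print(trad[-1], eiv[-1])
```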
Image pre-filtering for measurement error reduction in digital image correlation
NASA Astrophysics Data System (ADS)
Zhou, Yihao; Sun, Chen; Song, Yuntao; Chen, Jubing
2015-02-01
In digital image correlation, the sub-pixel intensity interpolation causes a systematic error in the measured displacements. The error increases toward high-frequency component of the speckle pattern. In practice, a captured image is usually corrupted by additive white noise. The noise introduces additional energy in the high frequencies and therefore raises the systematic error. Meanwhile, the noise also elevates the random error which increases with the noise power. In order to reduce the systematic error and the random error of the measurements, we apply a pre-filtering to the images prior to the correlation so that the high-frequency contents are suppressed. Two spatial-domain filters (binomial and Gaussian) and two frequency-domain filters (Butterworth and Wiener) are tested on speckle images undergoing both simulated and real-world translations. By evaluating the errors of the various combinations of speckle patterns, interpolators, noise levels, and filter configurations, we come to the following conclusions. All the four filters are able to reduce the systematic error. Meanwhile, the random error can also be reduced if the signal power is mainly distributed around DC. For high-frequency speckle patterns, the low-pass filters (binomial, Gaussian and Butterworth) slightly increase the random error and Butterworth filter produces the lowest random error among them. By using Wiener filter with over-estimated noise power, the random error can be reduced but the resultant systematic error is higher than that of low-pass filters. In general, Butterworth filter is recommended for error reduction due to its flexibility of passband selection and maximal preservation of the allowed frequencies. Binomial filter enables efficient implementation and thus becomes a good option if computational cost is a critical issue. While used together with pre-filtering, B-spline interpolator produces lower systematic error than bicubic interpolator and similar level of the random
On the errors in measuring the particle density by the light absorption method
Ochkin, V. N.
2015-04-15
The accuracy of absorption measurements of the density of particles in a given quantum state as a function of the light absorption coefficient is analyzed. Errors caused by the finite accuracy in measuring the intensity of the light passing through a medium in the presence of different types of noise in the recorded signal are considered. Optimal values of the absorption coefficient and the factors capable of multiplying errors when deviating from these values are determined.
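One classic instance of this trade-off can be sketched numerically. With density N proportional to -ln(T) (Beer-Lambert) and constant additive noise on the measured transmittance T = I/I0, error propagation gives a relative density error proportional to 1/(T|ln T|), which is minimized near T = 1/e. This is a back-of-the-envelope sketch for one simple noise model; the paper treats several noise types:

```python
import numpy as np

# Density from Beer-Lambert: N ∝ -ln(T), with T = I/I0 the transmittance.
# Assuming constant additive noise sigma_T on T, error propagation gives
# sigma_N / N ∝ 1 / (T * |ln T|); scan for the best operating point.
T = np.linspace(0.01, 0.99, 981)
rel_err = 1.0 / (T * np.abs(np.log(T)))   # relative error, up to sigma_T
T_opt = T[np.argmin(rel_err)]
print(f"optimal transmittance = {T_opt:.3f}")  # near 1/e, optical depth near 1
```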
Phase-modulation method for AWG phase-error measurement in the frequency domain.
Takada, Kazumasa; Hirose, Tomohiro
2009-12-15
We report a phase-modulation method for measuring arrayed waveguide grating (AWG) phase error in the frequency domain. By combining the method with a digital sampling technique that we have already reported, we can measure the phase error within an accuracy of ±0.055 rad for the central 90% of waveguides in the array, even when no carrier frequencies are generated in the beat signal from the interferometer.
NASA Astrophysics Data System (ADS)
Houser, Donald R.; Oswald, Fred B.; Valco, Mark J.; Drago, Raymond J.; Lenski, Joseph W., Jr.
1994-06-01
Measured sound power data from eight different spur, single and double helical gear designs are compared with predictions of transmission error by the Load Distribution Program. The sound power data was taken from the recent Army-funded Advanced Rotorcraft Transmission project. Tests were conducted in the NASA gear noise rig. Results of both test data and transmission error predictions are made for each harmonic of mesh frequency at several operating conditions. In general, the transmission error predictions compare favorably with the measured noise levels.
Li, Tao; Yuan, Gannan; Li, Wang
2016-03-15
The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition. PMID:26999130
ERIC Educational Resources Information Center
Raczynski, Kevin R.; Cohen, Allan S.; Engelhard, George, Jr.; Lu, Zhenqiu
2015-01-01
There is a large body of research on the effectiveness of rater training methods in the industrial and organizational psychology literature. Less has been reported in the measurement literature on large-scale writing assessments. This study compared the effectiveness of two widely used rater training methods--self-paced and collaborative…
Functional and Structural Methods with Mixed Measurement Error and Misclassification in Covariates.
Yi, Grace Y; Ma, Yanyuan; Spiegelman, Donna; Carroll, Raymond J
2015-06-01
Covariate measurement imprecision or errors arise frequently in many areas. It is well known that ignoring such errors can substantially degrade the quality of inference or even yield erroneous results. Although in practice both covariates subject to measurement error and covariates subject to misclassification can occur, research attention in the literature has mainly focused on addressing either one of these problems separately. To fill this gap, we develop estimation and inference methods that accommodate both characteristics simultaneously. Specifically, we consider measurement error and misclassification in generalized linear models under the scenario that an external validation study is available, and systematically develop a number of effective functional and structural methods. Our methods can be applied to different situations to meet various objectives.
Biggs, Adam T
2017-03-28
Visual search studies are common in cognitive psychology, and the results generally focus upon accuracy, response times, or both. Most research has focused upon search scenarios where no more than one target is present on any single trial. However, if multiple targets can be present on a single trial, an additional source of error is introduced, because a found target can interfere with subsequent search performance. These errors have been studied thoroughly in radiology for decades, although their emphasis in cognitive psychology studies is more recent. One particular issue with multiple-target search is that these subsequent search errors (i.e., specific errors which occur following a found target) are measured differently by different studies. There is currently no guidance as to which measurement method is best or what impact different measurement methods could have upon various results and conclusions. The current investigation addresses these issues in two ways. First, the existing literature is reviewed to clarify the scenarios in which subsequent search errors can be observed. Second, several different measurement methods are applied to several existing datasets to contrast and compare how each method would have affected the results and conclusions of those studies. The evidence is then used to provide guidelines for measuring multiple-target search errors in future studies.
Error analysis of cine phase contrast MRI velocity measurements used for strain calculation.
Jensen, Elisabeth R; Morrow, Duane A; Felmlee, Joel P; Odegard, Gregory M; Kaufman, Kenton R
2015-01-02
Cine Phase Contrast (CPC) MRI offers unique insight into localized skeletal muscle behavior by providing the ability to quantify muscle strain distribution during cyclic motion. Muscle strain is obtained by temporally integrating and spatially differentiating CPC-encoded velocity. The aim of this study was to quantify CPC measurement accuracy and precision and to describe error propagation into displacement and strain. Using an MRI-compatible jig to move a B-gel phantom within a 1.5 T MRI bore, CPC-encoded velocities were collected. The three orthogonal encoding gradients (through-plane, frequency, and phase) were evaluated independently in post-processing. Two types of systematic error were corrected: eddy current-induced bias and calibration-type error. Measurement accuracy and precision were quantified before and after removal of systematic error. Through-plane- and frequency-encoded data accuracy was within 0.4 mm/s after removal of systematic error, a 70% improvement over the raw data. Corrected phase-encoded data accuracy was within 1.3 mm/s. Measured random error was between 1 and 1.4 mm/s, which followed the theoretical prediction. Propagation of random measurement error into displacement and strain was found to depend on the number of tracked time segments, time segment duration, mesh size, and dimensional order. To verify this, theoretical predictions were compared to experimentally calculated displacement and strain error. For the parameters tested, experimental and theoretical results aligned well. Random strain error approximately halved with a two-fold mesh size increase, as predicted. Displacement and strain accuracy were within 2.6 mm and 3.3%, respectively. These results can be used to predict the accuracy and precision of displacement and strain in user-specific applications.
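The dependence of displacement error on the number of integrated time segments can be illustrated with a toy simulation: integrating zero-mean velocity noise yields a displacement error that grows as sigma_v * dt * sqrt(n), the random-walk behavior underlying such error-propagation predictions. Numbers below are illustrative, not the study's:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_v, dt = 1.2, 0.02       # velocity noise (mm/s) and step (s); illustrative
n_trials, n_steps = 5000, 50

# integrate pure velocity noise: the displacement error is a random walk
noise = rng.normal(0.0, sigma_v, size=(n_trials, n_steps))
disp = np.cumsum(noise * dt, axis=1)

emp = disp[:, -1].std()                     # empirical displacement error
theory = sigma_v * dt * np.sqrt(n_steps)    # random-walk prediction
print(emp, theory)
```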
Statistical and systematic errors in redshift-space distortion measurements from large surveys
NASA Astrophysics Data System (ADS)
Bianchi, D.; Guzzo, L.; Branchini, E.; Majerotto, E.; de la Torre, S.; Marulli, F.; Moscardini, L.; Angulo, R. E.
2012-12-01
We investigate the impact of statistical and systematic errors on measurements of linear redshift-space distortions (RSD) in future cosmological surveys by analysing large catalogues of dark matter haloes from the baryonic acoustic oscillation simulations at the Institute for Computational Cosmology. These allow us to estimate the dependence of errors on typical survey properties, such as volume, galaxy density and mass (i.e. bias factor) of the adopted tracer. We find that measures of the specific growth rate β = f/b using the Hamilton/Kaiser harmonic expansion of the redshift-space correlation function ξ(rp, π) on scales larger than 3 h⁻¹ Mpc are typically underestimated by up to 10 per cent for galaxy-sized haloes. This is significantly larger than the corresponding statistical errors, which amount to a few per cent, indicating the importance of non-linear improvements to the Kaiser model for obtaining accurate measurements of the growth rate. The systematic error shows a diminishing trend with increasing bias value (i.e. mass) of the haloes considered. We compare the amplitude and trends of statistical errors as a function of survey parameters to predictions obtained with the Fisher information matrix technique, which is what is usually adopted to produce RSD forecasts, based on the Feldman-Kaiser-Peacock prescription for the errors on the power spectrum. We show that this produces parameter errors fairly similar to the standard deviations from the halo catalogues, provided it is applied to strictly linear scales in Fourier space (k < 0.2 h Mpc⁻¹). Finally, we combine our measurements to define and calibrate an accurate scaling formula for the relative error on β as a function of the same parameters, which closely matches the simulation results in all explored regimes. This provides a handy and plausibly more realistic alternative to the Fisher matrix approach for quickly and accurately predicting statistical errors on RSD expected from future surveys.
The estimation error covariance matrix for the ideal state reconstructor with measurement noise
NASA Technical Reports Server (NTRS)
Polites, Michael E.
1988-01-01
A general expression is derived for the state estimation error covariance matrix for the Ideal State Reconstructor when the input measurements are corrupted by measurement noise. An example is presented which shows that the more measurements used in estimating the state at a given time, the better the estimator.
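The qualitative conclusion, that more measurements yield a better estimate, is the familiar variance/N behavior of averaged noisy measurements; a toy simulation (not the paper's Ideal State Reconstructor) makes this concrete:

```python
import numpy as np

rng = np.random.default_rng(1)
true_state, sigma = 5.0, 0.5

for n in (1, 4, 16, 64):
    # estimate the state by averaging n noisy measurements, over many trials
    meas = true_state + rng.normal(0.0, sigma, size=(20000, n))
    est_var = meas.mean(axis=1).var()
    print(f"n = {n:2d}: estimator variance = {est_var:.4f}")  # ~ sigma**2 / n
```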
Experimental Test of Error-Disturbance Uncertainty Relations by Weak Measurement
NASA Astrophysics Data System (ADS)
Kaneda, Fumihiro; Baek, So-Young; Ozawa, Masanao; Edamatsu, Keiichi
2014-01-01
We experimentally test the error-disturbance uncertainty relation (EDR) in generalized, strength-variable measurement of a single photon polarization qubit, making use of weak measurement that keeps the initial signal state practically unchanged. We demonstrate that the Heisenberg EDR is violated, yet the Ozawa and Branciard EDRs are valid throughout the range of our measurement strength.
de Araujo, T L; Arcuri, E A; Martins, E
1998-04-01
According to the International Council of Nurses, the measurement of blood pressure is the procedure performed most often by nurses throughout the world. The aim of this study is to analyse polemical aspects of the instruments used in blood pressure measurement. Based on an analysis of the literature and the American Heart Association recommendations, the main sources of error when measuring blood pressure are discussed.
How reproducibly can human ear ossicles be measured? A study of inter-observer error.
Flohr, Stefan; Leckelt, Jasmin; Kierdorf, Uwe; Kierdorf, Horst
2010-12-01
Ear ossicles have thus far received little attention in biological anthropology. For the use of these bones as a source of biological information, it is important to know how reproducibly they can be measured. We determined inter-observer errors for measurements recorded by two observers on mallei (N = 119) and incudes (N = 124) obtained from human skeletons recovered from an early medieval cemetery in southern Germany. Measurements were taken on-screen on images of the bones obtained with a digital microscope. In the case of separately acquired images, mean inter-observer error ranged between 0.50 and 9.59% (average: 2.63%) for malleus measurements and between 0.67 and 7.11% (average: 2.01%) for incus measurements. Coefficients of reliability ranged between 0.72 and 0.99 for the malleus measurements and between 0.61 and 0.98 for those of the incus. Except for one incus measurement, readings performed by the two observers on the same set of photographs produced lower inter-observer errors and higher coefficients of reliability than the method involving separate acquisition of images by the observers. Across all linear measurements, absolute inter-observer error was independent of the mean size of the measured variable for both bones. So far, studies on human ear ossicles have largely neglected the issue of measurement error and its potential implication for the interpretation of the data. Knowledge of measurement error is of special importance if results obtained by different researchers are combined into a single database. It is, therefore, suggested that the reproducibility of measurements should be addressed in all future studies of ear ossicles.
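The two statistics reported, mean percentage inter-observer error and the coefficient of reliability, can be sketched as below. The formulations are common conventions in anthropometry and may differ in detail from the authors'; the data are hypothetical:

```python
import numpy as np

def mean_percent_error(x1, x2):
    """Mean absolute inter-observer difference, as % of the pair mean."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    return float(np.mean(np.abs(x1 - x2) / ((x1 + x2) / 2.0)) * 100.0)

def coefficient_of_reliability(x1, x2):
    """1 - (error variance / total variance); one common formulation."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    err_var = np.var(x1 - x2, ddof=1) / 2.0   # per-observer error variance
    tot_var = np.var(np.concatenate([x1, x2]), ddof=1)
    return float(1.0 - err_var / tot_var)

# hypothetical malleus lengths (mm) recorded by two observers
obs1 = [7.9, 8.1, 8.4, 7.6, 8.0, 8.3]
obs2 = [8.0, 8.0, 8.5, 7.5, 8.2, 8.3]
print(mean_percent_error(obs1, obs2))
print(coefficient_of_reliability(obs1, obs2))
```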
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-01
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative measurement errors on DEM construction and on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
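The defining feature of a multiplicative error model, error proportional to the true value, can be sketched with a quick simulation (illustrative, not the paper's LS adjustments): absolute residuals grow with the signal, while log-scale residuals are roughly homoscedastic, which is why such data should not be treated as additive-error data:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(10.0, 100.0, 2000)    # true values (e.g. baseline lengths)
rel_sigma = 0.05                      # 5% proportional error

# multiplicative model: y = x * (1 + eps), eps ~ N(0, rel_sigma^2)
y = x * (1.0 + rng.normal(0.0, rel_sigma, x.size))

abs_resid = np.abs(y - x)             # spread grows with the true value
log_resid = np.log(y) - np.log(x)     # roughly constant spread
print(np.corrcoef(abs_resid, x)[0, 1], log_resid.std())
```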
A new accuracy measure based on bounded relative error for time series forecasting
Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M.
2017-01-01
Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred. PMID:28339480
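As we read the definition in this abstract's underlying paper, the measure bounds each relative error against a benchmark forecast's error and then unscales the mean; a sketch under that reading:

```python
import numpy as np

def umbrae(actual, forecast, benchmark):
    """Unscaled Mean Bounded Relative Absolute Error, as we read the
    definition: BRAE_t = |e_t| / (|e_t| + |e*_t|), MBRAE = mean(BRAE),
    UMBRAE = MBRAE / (1 - MBRAE), where e* is the benchmark's error.
    Edge cases (both errors zero at some t) are not handled here."""
    actual = np.asarray(actual, float)
    e = np.abs(actual - np.asarray(forecast, float))
    e_star = np.abs(actual - np.asarray(benchmark, float))
    mbrae = np.mean(e / (e + e_star))
    return float(mbrae / (1.0 - mbrae))

# UMBRAE < 1: better than the benchmark on average; > 1: worse
```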
Nystrom, E.A.; Oberg, K.A.; Rehmann, C.R.; ,
2002-01-01
Acoustic Doppler current profilers (ADCPs) provide a promising method for measuring surface-water turbulence because they can provide data from a large spatial range in a relatively short time with relative ease. Some potential sources of errors in turbulence measurements made with ADCPs include inaccuracy of Doppler-shift measurements, poor temporal and spatial measurement resolution, and inaccuracy of multi-dimensional velocities resolved from one-dimensional velocities measured at separate locations. Results from laboratory measurements of mean velocity and turbulence statistics made with two pulse-coherent ADCPs in 0.87 meters of water are used to illustrate several of inherent sources of error in ADCP turbulence measurements. Results show that processing algorithms and beam configurations have important effects on turbulence measurements. ADCPs can provide reasonable estimates of many turbulence parameters; however, the accuracy of turbulence measurements made with commercially available ADCPs is often poor in comparison to standard measurement techniques.
Wahlin, B.; Wahl, T.; Gonzalez-Castro, J. A.; Fulford, J.; Robeson, M.
2005-01-01
As part of their long range goals for disseminating information on measurement techniques, instrumentation, and experimentation in the field of hydraulics, the Technical Committee on Hydraulic Measurements and Experimentation formed the Task Committee on Experimental Uncertainty and Measurement Errors in Hydraulic Engineering in January 2003. The overall mission of this Task Committee is to provide information and guidance on the current practices used for describing and quantifying measurement errors and experimental uncertainty in hydraulic engineering and experimental hydraulics. The final goal of the Task Committee on Experimental Uncertainty and Measurement Errors in Hydraulic Engineering is to produce a report on the subject that will cover: (1) sources of error in hydraulic measurements, (2) types of experimental uncertainty, (3) procedures for quantifying error and uncertainty, and (4) special practical applications that range from uncertainty analysis for planning an experiment to estimating uncertainty in flow monitoring at gaging sites and hydraulic structures. Currently, the Task Committee has adopted the first order variance estimation method outlined by Coleman and Steele as the basic methodology to follow when assessing the uncertainty in hydraulic measurements. In addition, the Task Committee has begun to develop its report on uncertainty in hydraulic engineering. This paper is intended as an update on the Task Committee's overall progress. Copyright ASCE 2005.
Phase error analysis and compensation considering ambient light for phase measuring profilometry
NASA Astrophysics Data System (ADS)
Zhou, Ping; Liu, Xinran; He, Yi; Zhu, Tongjing
2014-04-01
The accuracy of a phase measuring profilometry (PMP) system based on the phase-shifting method is inevitably susceptible to the gamma non-linearity of the projector-camera pair and to uncertain ambient light. Although many gamma models and phase error compensation methods have been developed, the effect of ambient light has remained unclear. In this paper, we perform theoretical analysis and experiments on phase error compensation that account for both gamma non-linearity and uncertain ambient light. First, a mathematical phase error model is proposed to illustrate in detail how the phase error arises. We propose that the phase error is related not only to the gamma non-linearity of the projector-camera pair, but also to the ratio of intensity modulation to average intensity in the fringe patterns captured by the camera, which is affected by the ambient light. Subsequently, an accurate phase error compensation algorithm is proposed based on the mathematical model, where the relationship between phase error and ambient light is made explicit. Experimental results with a four-step phase-shifting PMP system show that the proposed algorithm can alleviate the phase error effectively even when the ambient light is taken into account.
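For reference, the ideal (noise- and gamma-free) four-step phase-shifting recovery that such error models perturb can be sketched as follows; with perfect sinusoidal fringes the wrapped phase is recovered exactly, and gamma non-linearity or ambient-light changes would distort it:

```python
import numpy as np

# Ideal four-step phase shifting: fringe intensities
#   I_k = A + B * cos(phi + k * pi/2),  k = 0..3
# yield the wrapped phase via  phi = atan2(I3 - I1, I0 - I2).
phi_true = np.linspace(-np.pi + 0.01, np.pi - 0.01, 200)
A, B = 128.0, 100.0
I = [A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi = np.arctan2(I[3] - I[1], I[0] - I[2])
print(np.abs(phi - phi_true).max())  # ~ 0 for ideal fringes
```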
Correcting for Measurement Error in Time-Varying Covariates in Marginal Structural Models.
Kyle, Ryan P; Moodie, Erica E M; Klein, Marina B; Abrahamowicz, Michał
2016-08-01
Unbiased estimation of causal parameters from marginal structural models (MSMs) requires a fundamental assumption of no unmeasured confounding. Unfortunately, the time-varying covariates used to obtain inverse probability weights are often error-prone. Although substantial measurement error in important confounders is known to undermine control of confounders in conventional unweighted regression models, this issue has received comparatively limited attention in the MSM literature. Here we propose a novel application of the simulation-extrapolation (SIMEX) procedure to address measurement error in time-varying covariates, and we compare 2 approaches. The direct approach to SIMEX-based correction targets outcome model parameters, while the indirect approach corrects the weights estimated using the exposure model. We assess the performance of the proposed methods in simulations under different clinically plausible assumptions. The simulations demonstrate that measurement errors in time-dependent covariates may induce substantial bias in MSM estimators of causal effects of time-varying exposures, and that both proposed SIMEX approaches yield practically unbiased estimates in scenarios featuring low-to-moderate degrees of error. We illustrate the proposed approach in a simple analysis of the relationship between sustained virological response and liver fibrosis progression among persons infected with hepatitis C virus, while accounting for measurement error in γ-glutamyltransferase, using data collected in the Canadian Co-infection Cohort Study from 2003 to 2014.
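The SIMEX idea itself (not the authors' MSM-specific variant) can be sketched in a few lines: deliberately add extra measurement error at increasing levels lambda, observe how the naive estimate degrades, and extrapolate the trend back to lambda = -1, i.e. zero measurement error. The quadratic extrapolant reduces, though does not fully remove, the attenuation bias:

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma_u = 2000, 0.5
x = rng.normal(0.0, 1.0, n)                # true covariate
w = x + rng.normal(0.0, sigma_u, n)        # error-prone measurement of x
y = 2.0 * x + rng.normal(0.0, 1.0, n)      # outcome; true slope is 2.0

def slope(w, y):
    return np.polyfit(w, y, 1)[0]

naive = slope(w, y)                        # attenuated toward zero

# SIMEX: add extra simulated error at levels lam, then extrapolate the
# attenuation trend back to lam = -1 (i.e. no measurement error)
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
mean_slopes = [
    np.mean([slope(w + rng.normal(0.0, np.sqrt(lam) * sigma_u, n), y)
             for _ in range(50)])
    for lam in lambdas
]
coef = np.polyfit(lambdas, mean_slopes, 2)  # quadratic extrapolant
simex_slope = np.polyval(coef, -1.0)
print(naive, simex_slope)                   # naive vs SIMEX-corrected
```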
Muralikrishnan, B; Blackburn, C; Sawyer, D; Phillips, S; Bridges, R
2010-01-01
We describe a method to estimate the scale errors in the horizontal angle encoder of a laser tracker. The method does not require expensive instrumentation such as a rotary stage, or even a calibrated artifact. An uncalibrated but stable length is realized between two targets mounted on stands at tracker height. The tracker measures the distance between these two targets from different azimuthal positions (say, in intervals of 20° over 360°). Each target is measured in both front face and back face. Low order harmonic scale errors can be estimated from these data and may then be used to correct the encoder's error map, improving the tracker's angle measurement accuracy. We demonstrate this for the second order harmonic. It is important to compensate for even order harmonics because their influence cannot be removed by averaging front face and back face measurements, whereas odd orders can. We tested six trackers from three different manufacturers, two of them newer models introduced at the time of writing. For older trackers from two manufacturers, the length errors in a 7.75 m horizontal length placed 7 m away from the tracker were of the order of ±65 μm before correcting the error map. They reduced to less than ±25 μm after correcting the error map for second order scale errors. Newer trackers from the same manufacturers did not show this error, nor did an older tracker from a third manufacturer.
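Estimating a low-order harmonic from errors sampled around the circle reduces to a small least-squares problem. The sketch below uses synthetic data with hypothetical amplitudes (not the authors' error map, and NumPy is assumed) to fit the second-order harmonic from measurements at 20° intervals.

```python
import numpy as np

# Synthetic example: length errors (um) measured at azimuth steps of 20 degrees,
# dominated by a second-order harmonic of the angle encoder (hypothetical amplitudes).
az = np.deg2rad(np.arange(0, 360, 20))
a2, b2 = 40.0, -25.0                      # second-harmonic coefficients, um
ripple = 2.0 * np.cos(7 * az)             # small unrelated higher-order ripple
err = a2 * np.cos(2 * az) + b2 * np.sin(2 * az) + ripple

# Least-squares estimate of the second-order harmonic:
A = np.column_stack([np.cos(2 * az), np.sin(2 * az)])
(a2_hat, b2_hat), *_ = np.linalg.lstsq(A, err, rcond=None)

residual = err - A @ np.array([a2_hat, b2_hat])
print(a2_hat, b2_hat, np.abs(residual).max())
```

Because the harmonics are orthogonal over equally spaced azimuths, the second-order coefficients are recovered exactly despite the unrelated ripple.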
Theoretical and Experimental Errors for In Situ Measurements of Plant Water Potential
Shackel, Kenneth A.
1984-01-01
Errors in psychrometrically determined values of leaf water potential caused by tissue resistance to water vapor exchange and by lack of thermal equilibrium were evaluated using commercial in situ psychrometers (Wescor Inc., Logan, UT) on leaves of Tradescantia virginiana (L.). Theoretical errors in the dewpoint method of operation for these sensors were demonstrated. After correction for these errors, in situ measurements of leaf water potential indicated substantial errors caused by tissue resistance to water vapor exchange (4 to 6% reduction in apparent water potential per second of cooling time used) resulting from humidity depletions in the psychrometer chamber during the Peltier condensation process. These errors were avoided by use of a modified procedure for dewpoint measurement. Large changes in apparent water potential were caused by leaf and psychrometer exposure to moderate levels of irradiance. These changes were correlated with relatively small shifts in psychrometer zero offsets (−0.6 to −1.0 megapascals per microvolt), indicating substantial errors caused by nonisothermal conditions between the leaf and the psychrometer. Explicit correction for these errors is not possible with the current psychrometer design. PMID:16663701
NASA Astrophysics Data System (ADS)
Chung, Ting-Yi; Huang, Szu-Jung; Fu, Huang-Wen; Chang, Ho-Ping; Chang, Cheng-Hsiang; Hwang, Ching-Shiang
2016-08-01
The effect of an APPLE II-type elliptically polarized undulator (EPU) on the beam dynamics was investigated using active and passive methods. To reduce the tune shift and improve the injection efficiency, dynamic multipole errors were compensated using L-shaped iron shims, which resulted in stable top-up operation at the minimum gap. The skew quadrupole error was compensated using a multipole corrector located downstream of the EPU to minimize betatron coupling, thereby enhancing the synchrotron radiation brightness. The investigation methods, a numerical simulation algorithm, a multipole error correction method, and the beam-based measurement results are discussed.
Error analysis of angular resolution for direct intercepting measurement laser warning equipment
NASA Astrophysics Data System (ADS)
Che, Jinxi; Zhang, Jinchun; Wang, Hongjun; Cheng, Bin
2016-11-01
Accurate warning of and reconnaissance on an incoming laser signal is a precondition for electro-optical jamming; the angular resolution error of laser warning equipment directly affects warning accuracy. In this paper, the working mechanism of direct intercepting measurement laser warning equipment is analyzed, followed by the structure of its detector array system and the causes of angular resolution error. The resolution errors of laser warning equipment with different detecting units were calculated at different distances. The conclusions provide a useful reference for the testing and evaluation of such equipment.
Error Reduction Methods for Integrated-path Differential-absorption Lidar Measurements
NASA Technical Reports Server (NTRS)
Chen, Jeffrey R.; Numata, Kenji; Wu, Stewart T.
2012-01-01
We report new modeling and error reduction methods for differential-absorption optical-depth (DAOD) measurements of atmospheric constituents using direct-detection integrated-path differential-absorption lidars. Errors from laser frequency noise are quantified in terms of the line center fluctuation and spectral line shape of the laser pulses, revealing relationships verified experimentally. A significant DAOD bias is removed by introducing a correction factor. Errors from surface height and reflectance variations can be reduced to tolerable levels by incorporating altimetry knowledge and "log after averaging", or by pointing the laser and receiver to a fixed surface spot during each wavelength cycle to shorten the time of "averaging before log".
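The quoted processing orders differ because the logarithm does not commute with averaging. The toy sketch below is a Jensen's-inequality illustration only, not the paper's error model: it contrasts averaging per-shot log-ratios with taking the log of the averaged ratio under multiplicative, speckle-like noise. The size and sign of the gap depend entirely on the assumed noise model, which is why the choice of processing order and the paper's correction factor matter.

```python
import math, random

random.seed(7)

tau_true = 0.5       # "true" differential absorption optical depth (toy value)
n = 20000

# Per-shot off/on pulse-energy ratios with multiplicative (speckle-like) noise.
ratios = [math.exp(tau_true) * random.lognormvariate(0.0, 0.3) for _ in range(n)]

# Estimate 1: average the per-shot log-ratios.
mean_of_logs = sum(math.log(r) for r in ratios) / n

# Estimate 2: average the ratios first, then take the log.
# Jensen's inequality makes the two differ under multiplicative noise.
log_of_mean = math.log(sum(ratios) / n)

print(mean_of_logs, log_of_mean)
```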
Error reduction by combining strapdown inertial measurement units in a baseball stitch
NASA Astrophysics Data System (ADS)
Tracy, Leah
A poor musical performance is rarely due to an inferior instrument. When a device is underperforming, the temptation is to find a better device or a new technology to achieve performance objectives; however, another solution may be to improve how existing technology is used through a better understanding of device characteristics, i.e., learning to play the instrument better. This thesis explores improving position and attitude estimates of inertial navigation systems (INS) through an understanding of inertial sensor errors, manipulation of inertial measurement units (IMUs) to reduce those errors, and multisensor fusion of multiple IMUs to reduce error in a GPS-denied environment.
Effect of patient positions on measurement errors of the knee-joint space on radiographs
NASA Astrophysics Data System (ADS)
Gilewska, Grazyna
2001-08-01
Osteoarthritis (OA) is one of the most important health problems today and one of the most frequent causes of pain and disability in middle-aged and older people. The radiograph is currently the most economical and widely available tool to evaluate changes in OA. Errors in the acquisition of knee-joint radiographs are the basic problem in their evaluation for clinical research. In this study, such radiographs were evaluated by measuring the knee-joint space on several radiographs performed at defined intervals, in an attempt to assess the errors introduced by the radiologist and by the patient; these errors result mainly from incorrect acquisition conditions or from patient positioning. With information about the size of these errors, it becomes possible to assess which elements have the greatest influence on the accuracy and repeatability of knee-joint space measurements, and consequently to minimize their sources.
Burr, T; Croft, S; Krieger, T; Martin, K; Norman, C; Walsh, S
2016-02-01
One example of top-down uncertainty quantification (UQ) involves comparing two or more measurements on each of multiple items. One example of bottom-up UQ expresses a measurement result as a function of one or more input variables that have associated errors, such as a measured count rate, which individually (or collectively) can be evaluated for impact on the uncertainty in the resulting measured value. In practice, it is often found that top-down UQ exhibits larger error variances than bottom-up UQ, because some error sources are present in the fielded assay methods used in top-down UQ that are not present (or not recognized) in the assay studies used in bottom-up UQ. One would like better consistency between the two approaches in order to claim understanding of the measurement process. The purpose of this paper is to refine bottom-up uncertainty estimation by using calibration information so that if there are no unknown error sources, the refined bottom-up uncertainty estimate will agree with the top-down uncertainty estimate to within a specified tolerance. Then, in practice, if the top-down uncertainty estimate is larger than the refined bottom-up uncertainty estimate by more than the specified tolerance, there must be omitted sources of error beyond those predicted from calibration uncertainty. The paper develops a refined bottom-up uncertainty approach for four cases of simple linear calibration: (1) inverse regression with negligible error in predictors, (2) inverse regression with non-negligible error in predictors, (3) classical regression followed by inversion with negligible error in predictors, and (4) classical regression followed by inversion with non-negligible errors in predictors. Our illustrations are of general interest, but are drawn from our experience with nuclear material assay by non-destructive assay. The main example we use is gamma spectroscopy that applies the enrichment meter principle. Previous papers that ignore error in predictors
Estimation of errors in diffraction data measured by CCD area detectors
Waterman, David; Evans, Gwyndaf
2010-01-01
Current methods for diffraction-spot integration from CCD area detectors typically underestimate the errors in the measured intensities. In an attempt to understand fully and identify correctly the sources of all contributions to these errors, a simulation of a CCD-based area-detector module has been produced to address the problem of correct handling of data from such detectors. Using this simulation, it has been shown how, and by how much, measurement errors are underestimated. A model of the detector statistics is presented and an adapted summation integration routine that takes this into account is shown to result in more realistic error estimates. In addition, the effect of correlations between pixels on two-dimensional profile fitting is demonstrated and the problems surrounding improvements to profile-fitting algorithms are discussed. In practice, this requires knowledge of the expected correlation between pixels in the image. PMID:27006649
Interobserver error involved in independent attempts to measure cusp base areas of Pan M1s.
Bailey, Shara E; Pilbrow, Varsha C; Wood, Bernard A
2004-10-01
Cusp base areas measured from digitized images increase the amount of detailed quantitative information one can collect from post-canine crown morphology. Although this method is gaining wide usage for taxonomic analyses of extant and extinct hominoids, the techniques for digitizing images and taking measurements differ between researchers. The aim of this study was to investigate interobserver error in order to help assess the reliability of cusp base area measurement within extant and extinct hominoid taxa. Two of the authors measured individual cusp base areas and total cusp base area of 23 maxillary first molars (M1) of Pan. From these, relative cusp base areas were calculated. No statistically significant interobserver differences were found for either absolute or relative cusp base areas. On average the hypocone and paracone showed the least interobserver error (< 1%) whereas the protocone and metacone showed the most (2.6-4.5%). We suggest that the larger measurement error in the metacone/protocone is due primarily to weakly defined fissure patterns and/or the presence of accessory occlusal features. Overall, levels of interobserver error are similar to those found for intraobserver error. The results of our study suggest that if certain prescribed standards are employed then cusp and crown base areas measured by different individuals can be pooled into a single database.
NASA Astrophysics Data System (ADS)
Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin
2017-03-01
We show how to calculate the secure final key rate in the four-intensity decoy-state measurement-device-independent quantum key distribution protocol with both source errors and statistical fluctuations with a certain failure probability. Our results rely only on the range of a few parameters in the source state. All imperfections in this protocol have been taken into consideration without assuming any specific error patterns of the source.
Determination of error measurement by means of the basic magnetization curve
NASA Astrophysics Data System (ADS)
Lankin, M. V.; Lankin, A. M.
2016-04-01
The article describes the implementation of a methodology for fault detection in electric cutting machines by means of the basic magnetization curve. The basic magnetization curve, as an integral operating characteristic, allows one to identify a fault type. In this process, determining the measurement error of the basic magnetization curve plays a major role, as inaccuracies in this characteristic can have a deleterious effect on the diagnosis.
Measurement of electromagnetic tracking error in a navigated breast surgery setup
NASA Astrophysics Data System (ADS)
Harish, Vinyas; Baksh, Aidan; Ungi, Tamas; Lasso, Andras; Baum, Zachary; Gauvin, Gabrielle; Engel, Jay; Rudan, John; Fichtinger, Gabor
2016-03-01
PURPOSE: The measurement of tracking error is crucial to ensure the safety and feasibility of electromagnetically tracked, image-guided procedures. Measurement should occur in a clinical environment because electromagnetic field distortion depends on positioning relative to the field generator and metal objects. However, we could not find an accessible and open-source system for calibration, error measurement, and visualization. We developed such a system and tested it in a navigated breast surgery setup. METHODS: A pointer tool was designed for concurrent electromagnetic and optical tracking. Software modules were developed for automatic calibration of the measurement system, real-time error visualization, and analysis. The system was taken to an operating room to test for field distortion in a navigated breast surgery setup. Positional and rotational electromagnetic tracking errors were then calculated using optical tracking as a ground truth. RESULTS: Our system is quick to set up and can be rapidly deployed. The process from calibration to visualization also only takes a few minutes. Field distortion was measured in the presence of various surgical equipment. Positional and rotational error in a clean field was approximately 0.90 mm and 0.31°. The presence of a surgical table, an electrosurgical cautery, and anesthesia machine increased the error by up to a few tenths of a millimeter and tenth of a degree. CONCLUSION: In a navigated breast surgery setup, measurement and visualization of tracking error defines a safe working area in the presence of surgical equipment. Our system is available as an extension for the open-source 3D Slicer platform.
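With optical tracking as ground truth, the two reported error metrics reduce to a point distance and a relative-rotation angle. A minimal sketch with toy readings (not the authors' 3D Slicer modules):

```python
import math

def positional_error(p_em, p_opt):
    """Euclidean distance between EM-tracked and optically tracked positions (mm)."""
    return math.dist(p_em, p_opt)

def rotational_error_deg(R_em, R_opt):
    """Angle of the relative rotation R_em * R_opt^T, in degrees."""
    tr = sum(R_em[i][j] * R_opt[i][j] for i in range(3) for j in range(3))
    c = max(-1.0, min(1.0, (tr - 1.0) / 2.0))
    return math.degrees(math.acos(c))

# Toy readings: EM position off by under a millimetre, rotation off by a
# small angle about z (values chosen to echo the reported error magnitudes).
p_opt = (100.0, 50.0, 20.0)
p_em = (100.5, 50.6, 20.3)
theta = math.radians(0.31)
R_opt = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
R_em = [[math.cos(theta), -math.sin(theta), 0.0],
        [math.sin(theta),  math.cos(theta), 0.0],
        [0.0, 0.0, 1.0]]

print(positional_error(p_em, p_opt), rotational_error_deg(R_em, R_opt))
```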
Improving Creativity Performance Assessment: A Rater Effect Examination with Many Facet Rasch Model
ERIC Educational Resources Information Center
Hung, Su-Pin; Chen, Po-Hsi; Chen, Hsueh-Chih
2012-01-01
Product assessment is widely applied in creative studies, typically as an important dependent measure. Within this context, this study had 2 purposes. First, the focus of this research was on methods for investigating possible rater effects, an issue that has not received a great deal of attention in past creativity studies. Second, the…
Development of AN Optical Measuring System for Geometric Errors of a Miniaturized Machine Tool
NASA Astrophysics Data System (ADS)
Kweon, Sung-Hwan; Liu, Yu; Lee, Jae-Ha; Kim, Young-Suk; Yang, Seung-Han
Recently, miniaturized machine tools (mMT) have become a promising micro/meso-mechanical manufacturing technique to overcome the material limitation and produce complex 3D meso-scale components with higher accuracy. To achieve sub-micron accuracy, geometric errors of a miniaturized machine tool should be identified and compensated. An optical multi-degree-of-freedom (DOF) measuring system, composed of one laser diode, two beam splitters and three position sensing detectors (PSDs), is proposed for simultaneous measurement of the horizontal straightness, vertical straightness, pitch, yaw and roll errors along a moving axis of the mMT. A homogeneous transformation matrix (HTM) is used to derive the relationship between the PSD readings and the geometric errors, and an error estimation algorithm is presented to calculate the geometric errors. Simulation is carried out to verify the estimation accuracy of this algorithm. In theory, the measurement resolution of the proposed system can reach 0.03 μm for translational errors and 0.06 arcsec for rotational errors.
Evaluation of TRMM Ground-Validation Radar-Rain Errors Using Rain Gauge Measurements
NASA Technical Reports Server (NTRS)
Wang, Jianxin; Wolff, David B.
2009-01-01
Ground-validation (GV) radar-rain products are often utilized for validation of the Tropical Rainfall Measuring Mission (TRMM) spaced-based rain estimates, and hence, quantitative evaluation of the GV radar-rain product error characteristics is vital. This study uses quality-controlled gauge data to compare with TRMM GV radar rain rates in an effort to provide such error characteristics. The results show that significant differences of concurrent radar-gauge rain rates exist at various time scales ranging from 5 min to 1 day, despite lower overall long-term bias. However, the differences between the radar area-averaged rain rates and gauge point rain rates cannot be explained as due to radar error only. The error variance separation method is adapted to partition the variance of radar-gauge differences into the gauge area-point error variance and radar rain estimation error variance. The results provide relatively reliable quantitative uncertainty evaluation of TRMM GV radar rain estimates at various times scales, and are helpful to better understand the differences between measured radar and gauge rain rates. It is envisaged that this study will contribute to better utilization of GV radar rain products to validate versatile spaced-based rain estimates from TRMM, as well as the proposed Global Precipitation Measurement, and other satellites.
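The error variance separation idea can be sketched with synthetic rain rates: if the radar estimation error and the gauge area-point error are independent, the radar error variance is the variance of the radar-gauge differences minus the area-point variance. A toy illustration with assumed noise levels, where the area-point variance is taken as known rather than estimated from gauge-pair correlations as in practice:

```python
import random, statistics

random.seed(3)
n = 50000

# Toy rain-rate model (mm/h): true area-average rain R; radar measures R with
# estimation error, while a point gauge sees R plus an area-point deviation.
true_area = [random.uniform(0.0, 20.0) for _ in range(n)]
radar = [r + random.gauss(0, 1.5) for r in true_area]   # radar estimation error
gauge = [r + random.gauss(0, 2.0) for r in true_area]   # area-point difference

diff_var = statistics.variance(r - g for r, g in zip(radar, gauge))

# Error variance separation: with independent errors,
#   Var(radar - gauge) = Var(radar error) + Var(area-point error).
area_point_var = 2.0 ** 2
radar_error_var = diff_var - area_point_var
print(radar_error_var)   # close to the simulated 1.5**2 = 2.25
```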
NASA Astrophysics Data System (ADS)
Hou, Zhendong; Wang, Zhaokui; Zhang, Yulin
2016-09-01
To meet the very demanding requirements of space gravity detection, the gravitational reference sensor (GRS), as the key payload, needs to provide the relative position of the proof mass with extraordinarily high precision and low disturbance. The position determination and error analysis for a GRS with a spherical proof mass are addressed. First, the concept of measuring the freely falling proof mass with optical shadow sensors is presented. Then, based on the optical signal model, the general formula for position determination is derived. Two types of measurement system are proposed, for which an analytical solution for the three-dimensional position can be attained. Third, under the assumption of Gaussian beams, error propagation models are given for the variation of spot size and optical power, the effect of beam divergence, the chattering of the beam center, and the deviation of the beam direction. Finally, numerical simulations taking into account the model uncertainty of beam divergence, the spherical edge and beam diffraction are carried out to validate the performance of the error propagation models. The results show that these models can be used to estimate the effect of each error source with an acceptable accuracy, better than 20%. Moreover, the simulation of three-dimensional position determination with one of the proposed measurement systems shows that the position error is comparable to the error of the output of each sensor.
Rater Accuracy and Training Group Effects in Expert- and Supervisor-Based Monitoring Systems
ERIC Educational Resources Information Center
Baird, Jo-Anne; Meadows, Michelle; Leckie, George; Caro, Daniel
2017-01-01
This study evaluated rater accuracy with rater-monitoring data from high stakes examinations in England. Rater accuracy was estimated with cross-classified multilevel modelling. The data included face-to-face training and monitoring of 567 raters in 110 teams, across 22 examinations, giving a total of 5500 data points. Two rater-monitoring systems…
Burnecki, Krzysztof; Kepten, Eldad; Garini, Yuval; Sikora, Grzegorz; Weron, Aleksander
2015-01-01
Accurately characterizing the anomalous diffusion of a tracer particle has become a central issue in biophysics. However, measurement errors make the characterization of single trajectories difficult; characterization is usually performed through the time-averaged mean square displacement (TAMSD). In this paper, we study a fractionally integrated moving average (FIMA) process as an appropriate model for anomalous diffusion data with measurement errors. We compare FIMA and traditional TAMSD estimators of the anomalous diffusion exponent. The ability of the FIMA framework to characterize dynamics in a wide range of anomalous exponents and noise levels is discussed through the simulation of a toy model (fractional Brownian motion disturbed by Gaussian white noise). Comparison to the TAMSD technique shows that FIMA estimation is superior in many scenarios. This is expected to enable new measurement regimes for single particle tracking (SPT) experiments even in the presence of high measurement errors. PMID:26065707
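For context, the TAMSD that FIMA is compared against is a simple time average over a single trajectory. The sketch below uses ordinary Brownian motion rather than fractional, for simplicity: additive measurement noise shifts every TAMSD value up by a constant offset, which is what distorts the apparent anomalous exponent at short lags.

```python
import random

def tamsd(x, lag):
    """Time-averaged mean square displacement of a 1-D trajectory at a given lag."""
    n = len(x)
    return sum((x[i + lag] - x[i]) ** 2 for i in range(n - lag)) / (n - lag)

random.seed(2)

# Toy trajectory: Brownian motion (diffusion coefficient D, dt = 1) observed
# with additive Gaussian measurement noise of standard deviation sigma_noise.
n, D, sigma_noise = 100000, 0.5, 1.0
pos, traj = 0.0, []
for _ in range(n):
    pos += random.gauss(0.0, (2 * D) ** 0.5)   # step variance 2*D
    traj.append(pos + random.gauss(0.0, sigma_noise))

# Expected TAMSD with noise: 2*D*lag + 2*sigma_noise**2
for lag in (1, 2, 4, 8):
    print(lag, tamsd(traj, lag))
```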
Reduction of positional errors in a four-point probe resistance measurement
NASA Astrophysics Data System (ADS)
Worledge, D. C.
2004-03-01
A method for reducing resistance errors due to inaccuracy in the positions of the probes in a collinear four-point probe resistance measurement of a thin film is presented. By using a linear combination of two measurements which differ by interchange of the I- and V- leads, positional errors can be eliminated to first order. Experimental data measured using microprobes show a substantial reduction in absolute error from 3.4% down to 0.01%-0.1%, and an improvement in precision by a factor of 2-4. The application of this technique to the current-in-plane tunneling method to measure electrical properties of unpatterned magnetic tunnel junction wafers is discussed.
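The cancellation can be checked numerically on an idealized infinite-sheet model. In the sketch below, the combination R_A − (3/4)·R_B eliminates first-order positional errors for one particular pair of lead configurations; this reproduces the spirit of the method, though not necessarily the paper's exact combination or geometry.

```python
import math, random

def resistance(xs, i_pair, v_pair, rs=1.0):
    """Four-point resistance on an infinite thin sheet with sheet resistance rs.
    Unit current enters/exits at probes i_pair; voltage is read across v_pair."""
    a, b = i_pair
    def v(p):
        return (rs / (2 * math.pi)) * math.log(abs(xs[p] - xs[b]) / abs(xs[p] - xs[a]))
    return v(v_pair[0]) - v(v_pair[1])

s = 1.0                  # nominal probe spacing (arbitrary units)
rs_true = 50.0
random.seed(5)

worst_naive, worst_combined = 0.0, 0.0
for _ in range(200):
    # probes nominally at 0, s, 2s, 3s with small random position errors
    xs = [i * s + random.uniform(-0.01, 0.01) for i in range(4)]
    r_a = resistance(xs, (0, 3), (1, 2), rs_true)   # I on outer, V on inner
    r_b = resistance(xs, (0, 2), (1, 3), rs_true)   # I and V leads interchanged

    # naive estimate from config A alone: R_A = (rs / pi) * ln 2
    rs_naive = r_a * math.pi / math.log(2)
    # combination R_A - 0.75*R_B = (rs / 2pi)(ln 4 - 0.75 ln 3) cancels
    # first-order positional errors in this idealized model
    rs_comb = (r_a - 0.75 * r_b) * 2 * math.pi / (math.log(4) - 0.75 * math.log(3))

    worst_naive = max(worst_naive, abs(rs_naive - rs_true) / rs_true)
    worst_combined = max(worst_combined, abs(rs_comb - rs_true) / rs_true)

print(worst_naive, worst_combined)
```

With 1% position errors, the single-configuration estimate is wrong at the percent level while the combined estimate is second-order small, echoing the reduction reported in the abstract.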
[Measurement Error Analysis and Calibration Technique of NTC - Based Body Temperature Sensor].
Deng, Chi; Hu, Wei; Diao, Shengxi; Lin, Fujiang; Qian, Dahong
2015-11-01
An NTC thermistor-based wearable body temperature sensor was designed. This paper describes the design principles and realization of the NTC-based body temperature sensor and analyzes its temperature measurement error sources in detail. An automatic measurement and calibration method for the ADC error is given. The results show that the measurement accuracy of the calibrated body temperature sensor is better than ±0.04 °C. The temperature sensor has the advantages of high accuracy, small size and low power consumption.
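A common way to model an NTC thermistor (assumed here; the abstract does not state the paper's conversion model or part values) is the Beta equation, which makes the mapping from an ADC resistance error to a temperature error explicit:

```python
import math

# Beta-model NTC thermistor: R(T) = R0 * exp(B * (1/T - 1/T0)), T in kelvin.
# Typical (assumed) part values: R0 = 10 kOhm at 25 degC, B = 3950 K.
R0, T0, B = 10_000.0, 298.15, 3950.0

def ntc_resistance(t_celsius):
    t = t_celsius + 273.15
    return R0 * math.exp(B * (1.0 / t - 1.0 / T0))

def ntc_temperature(r_ohms):
    """Invert the Beta model to recover temperature in degC from resistance."""
    t = 1.0 / (1.0 / T0 + math.log(r_ohms / R0) / B)
    return t - 273.15

# Round trip at body temperatures; a fixed +0.5% gain error in the resistance
# reading (e.g. from an uncalibrated ADC chain) maps into a quantifiable
# temperature error, which is what calibration removes.
for t in (35.0, 37.0, 40.0):
    r = ntc_resistance(t)
    t_err = ntc_temperature(r * 1.005) - t
    print(t, round(t_err, 3))
```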
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2013-01-01
A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively deteriorated and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.
NASA Astrophysics Data System (ADS)
Shedekar, Vinayak S.; King, Kevin W.; Fausey, Norman R.; Soboyejo, Alfred B. O.; Harmel, R. Daren; Brown, Larry C.
2016-09-01
Three different models of tipping bucket rain gauges (TBRs), viz. HS-TB3 (Hydrological Services Pty Ltd.), ISCO-674 (Isco, Inc.) and TR-525 (Texas Electronics, Inc.), were calibrated in the lab to quantify measurement errors across a range of rainfall intensities (5 mm·h−1 to 250 mm·h−1) and three different volumetric settings. Instantaneous and cumulative values of simulated rainfall were recorded at 1, 2, 5, 10 and 20-min intervals. All three TBR models showed a statistically significant deviation (α = 0.05) in measurements from actual rainfall depths, with increasing underestimation errors at greater rainfall intensities. Simple linear regression equations were developed for each TBR to correct the TBR readings based on measured intensities (R2 > 0.98). Additionally, two dynamic calibration techniques, viz. a quadratic model (R2 > 0.7) and a T vs. 1/Q model (R2 > 0.98), were tested and found to be useful in situations when the volumetric settings of TBRs are unknown. The correction models were successfully applied to correct field-collected rainfall data from the respective TBR models. The calibration parameters of the correction models were found to be highly sensitive to changes in the volumetric calibration of TBRs. Overall, the HS-TB3 model (with a better protected tipping bucket mechanism, and consistent measurement errors across a range of rainfall intensities) was found to be the most reliable and consistent for rainfall measurements, followed by the ISCO-674 (susceptible to clogging, with relatively smaller measurement errors across a range of rainfall intensities) and the TR-525 (highly susceptible to clogging and frequent changes in volumetric calibration, with highly intensity-dependent measurement errors). The study demonstrated that corrections based on dynamic and volumetric calibration can only help minimize, but not completely eliminate, the measurement errors. The findings from this study will be useful for correcting field data from TBRs; and may have major
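The simple linear regression correction described above amounts to fitting actual depth against recorded depth and applying the fitted line to field data. A sketch with hypothetical calibration pairs (not the paper's coefficients):

```python
# Hypothetical calibration pairs: (actual simulated rainfall, gauge reading),
# both in mm, with underestimation growing with intensity.
calib = [(5.0, 4.7), (25.0, 23.6), (50.0, 47.2),
         (100.0, 94.4), (150.0, 141.6), (250.0, 236.0)]

# Ordinary least squares fit: actual = slope * recorded + intercept
n = len(calib)
sx = sum(r for _, r in calib)
sy = sum(a for a, _ in calib)
sxx = sum(r * r for _, r in calib)
sxy = sum(a * r for a, r in calib)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def correct(recorded):
    """Correct a tipping-bucket reading using the calibration line."""
    return slope * recorded + intercept

print(slope, intercept, correct(94.4))
```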
Measurement of centering error for probe of swing arm profilometer using a spectral confocal sensor
NASA Astrophysics Data System (ADS)
Chen, Lin; Jing, Hongwei; Wei, Zhongwei; Cao, Xuedong
2015-02-01
A spectral confocal sensor was used to measure the centering error of the probe of a swing arm profilometer (SAP). The feasibility of this technique was demonstrated through simulation and experiment. The final measurement results were also analyzed to evaluate the advantages and disadvantages of this technique.
Systematic Errors in the Measurement of Emissivity Caused by Directional Effects
NASA Astrophysics Data System (ADS)
Kribus, Abraham; Vishnevetsky, Irna; Rotenberg, Eyal; Yakir, Dan
2003-04-01
Accurate knowledge of surface emissivity is essential for applications in remote sensing (remote temperature measurement), radiative transport, and modeling of environmental energy balances. Direct measurements of surface emissivity are difficult when there is considerable background radiation at the same wavelength as the emitted radiation. This occurs, for example, when objects at temperatures near room temperature are measured in a terrestrial environment by use of the infrared 8-14 μm band. This problem is usually treated by assumption of a perfectly diffuse surface or of diffuse background radiation. However, real surfaces and actual background radiation are not diffuse; therefore there will be a systematic measurement error. It is demonstrated that, in some cases, the deviations from a diffuse behavior lead to large errors in the measured emissivity. Past measurements made with simplifying assumptions should therefore be reevaluated and corrected. Recommendations are presented for improving experimental procedures in emissivity measurement.
Effects of cosine error in irradiance measurements from field ocean color radiometers.
Zibordi, Giuseppe; Bulgarelli, Barbara
2007-08-01
The cosine error of in situ seven-channel radiometers designed to measure the in-air downward irradiance for ocean color applications was investigated in the 412-683 nm spectral range with a sample of three instruments. The interchannel variability of cosine errors showed values generally lower than ±3% below 50 degrees incidence angle with extreme values of approximately 4-20% (absolute) at 50-80 degrees for the channels at 412 and 443 nm. The intrachannel variability, estimated from the standard deviation of the cosine errors of different sensors for each center wavelength, displayed values generally lower than 2% for incidence angles up to 50 degrees and occasionally increasing up to 6% at 80 degrees. Simulations of total downward irradiance measurements, accounting for average angular responses of the investigated radiometers, were made with an accurate radiative transfer code. The estimated errors showed a significant dependence on wavelength, sun zenith, and aerosol optical thickness. For a clear sky maritime atmosphere, these errors displayed spectrally varying values generally within ±3%, with extreme values of approximately 4-10% (absolute) at 40-80 degrees sun zenith for the channels at 412 and 443 nm. Schemes for minimizing the cosine errors have also been proposed and discussed.
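The cosine error itself is just the relative deviation of the measured angular response from the ideal cosine law. A small sketch with a hypothetical channel response (values invented for illustration):

```python
import math

def cosine_error(measured_response, incidence_deg):
    """Relative deviation of an irradiance sensor's angular response from the
    ideal cosine law at a given incidence angle."""
    ideal = math.cos(math.radians(incidence_deg))
    return measured_response / ideal - 1.0

# Hypothetical angular response of one channel, normalized to 1 at normal
# incidence: slightly low at large angles, as is typical of flat diffusers.
response = {0: 1.000, 20: 0.938, 40: 0.760, 60: 0.480, 70: 0.325}

for angle, r in sorted(response.items()):
    if angle == 0:
        continue
    print(angle, f"{100 * cosine_error(r, angle):+.1f}%")
```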
Rater Training to Support High-Stakes Simulation-Based Assessments
Feldman, Moshe; Lazzara, Elizabeth H.; Vanderbilt, Allison A.; DiazGranados, Deborah
2013-01-01
Competency-based assessment and an emphasis on obtaining higher-level outcomes that reflect physicians’ ability to demonstrate their skills has created a need for more advanced assessment practices. Simulation-based assessments provide medical education planners with tools to better evaluate the 6 Accreditation Council for Graduate Medical Education (ACGME) and American Board of Medical Specialties (ABMS) core competencies by affording physicians opportunities to demonstrate their skills within a standardized and replicable testing environment, thus filling a gap in the current state of assessment for regulating the practice of medicine. Observational performance assessments derived from simulated clinical tasks and scenarios enable stronger inferences about the skill level a physician may possess, but also introduce the potential of rater errors into the assessment process. This article reviews the use of simulation-based assessments for certification, credentialing, initial licensure, and relicensing decisions and describes rater training strategies that may be used to reduce rater errors, increase rating accuracy, and enhance the validity of simulation-based observational performance assessments. PMID:23280532
NASA Astrophysics Data System (ADS)
Xu, Xiaohai; Su, Yong; Zhang, Qingchuan
2017-01-01
The measurement accuracy of the digital image correlation (DIC) method in local deformations, such as Portevin-Le Chatelier bands, deformations near a gap, and crack tips, is a major concern. The measured displacement and strain results are heavily affected by the calculation parameters (such as the subset size, the grid step, and the strain window size) due to under-matched shape functions (for displacement measurement) and surface fitting functions (for strain calculation). To evaluate the systematic errors in local deformations, theoretical estimations and approximations of displacement and strain systematic errors are deduced for the case of first-order shape functions and quadric surface fitting functions. The results show that: (1) the approximate displacement systematic errors are proportional to the second-order displacement gradients, with a ratio determined only by the subset size; (2) the approximate strain systematic errors are functions of the third-order displacement gradients, with coefficients dependent on the subset size, the grid step, and the strain window size. Simulated experiments verify the reliability of these estimates. In addition, a convenient way to approximately evaluate the displacement systematic errors, by comparing displacement results measured by the DIC method with different subset sizes, is proposed.
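The first result above (displacement bias proportional to the second-order displacement gradient, with a ratio set only by the subset size) can be illustrated with a toy one-dimensional sketch, not the authors' implementation: a first-order (linear) shape function is least-squares fitted to a quadratic displacement field u(x) = a*x^2 over a subset, and the bias at the subset centre grows with the subset half-width. The field, the coefficient a, and the subset sizes are all invented for illustration.

```python
import numpy as np

# Hypothetical illustration of under-matched shape functions: fitting a
# first-order (linear) shape function to a quadratic displacement field
# u(x) = a*x^2 over a symmetric subset window biases the displacement
# recovered at the subset centre.

def subset_bias(subset_half_width, a=1e-3):
    x = np.arange(-subset_half_width, subset_half_width + 1, dtype=float)
    u = a * x**2                      # true displacement inside the subset
    coef = np.polyfit(x, u, 1)        # first-order shape function fit
    u_fit_centre = np.polyval(coef, 0.0)
    return u_fit_centre - 0.0         # systematic error at the centre (true u(0) = 0)

for m in (5, 10, 20):                 # subset half-widths in pixels
    print(m, subset_bias(m))
```

For this symmetric window the bias works out to a*m(m+1)/3, so it is proportional to the second-order gradient (2a) and grows roughly quadratically with the subset size, consistent with the abstract's first result.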
Covariate measurement error correction methods in mediation analysis with failure time data.
Zhao, Shanshan; Prentice, Ross L
2014-12-01
Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk.
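A linear-outcome analogue of the regression calibration idea can be sketched as follows; the article's mean-variance and follow-up time versions for the Cox partial likelihood are more involved. The error variance, the effect size, and the assumption that the variance of the true mediator is known are all illustrative choices, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x = rng.normal(size=n)                    # "true" mediator
w = x + rng.normal(scale=0.8, size=n)     # observed mediator with error
y = 1.5 * x + rng.normal(size=n)          # outcome (linear stand-in for a Cox model)

# Naive fit: regressing y on the mismeasured w attenuates the slope
naive = np.polyfit(w, y, 1)[0]

# Regression calibration: replace w by E[x | w] = lambda * w, with
# lambda = var(x) / var(w); in practice lambda is estimated from
# replicate measurements, here var(x) = 1 is assumed known
lam = 1.0 / np.var(w)
x_hat = lam * w
calibrated = np.polyfit(x_hat, y, 1)[0]

print(naive, calibrated)
```

The naive slope is shrunk by the reliability ratio var(x)/var(w), while the calibrated fit recovers the true coefficient of 1.5 up to sampling error.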
Natarajan, Loki; Flatt, Shirley W; Sun, Xiaoying; Gamst, Anthony C; Major, Jacqueline M; Rock, Cheryl L; Al-Delaimy, Wael; Thomson, Cynthia A; Newman, Vicky A; Pierce, John P
2006-04-15
Vegetables and fruits are rich in carotenoids, a group of compounds thought to protect against cancer. Studies of diet-disease associations need valid and reliable instruments for measuring dietary intake. The authors present a measurement error model to estimate the validity (defined as correlation between self-reported intake and "true" intake), systematic error, and reliability of two self-report dietary assessment methods. Carotenoid exposure is measured by repeated 24-hour recalls, a food frequency questionnaire (FFQ), and a plasma marker. The model is applied to 1,013 participants assigned between 1995 and 2000 to the nonintervention arm of the Women's Healthy Eating and Living Study, a randomized trial assessing the impact of a low-fat, high-vegetable/fruit/fiber diet on preventing new breast cancer events. Diagnostics including graphs are used to assess the goodness of fit. The validity of the instruments was 0.44 for the 24-hour recalls and 0.39 for the FFQ. Systematic error accounted for over 22% and 50% of measurement error variance for the 24-hour recalls and FFQ, respectively. The use of either self-report method alone in diet-disease studies could lead to substantial bias and error. Multiple methods of dietary assessment may provide more accurate estimates of true dietary intake.
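The validity coefficients reported above can be estimated by triangulating the two self-reports with the plasma marker. A minimal "method of triads" sketch follows, assuming the three instruments have independent errors (which the systematic-error findings above show is only an approximation); the effect sizes are invented, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
t = rng.normal(size=n)                         # "true" carotenoid intake
q = 0.4 * t + rng.normal(scale=0.9, size=n)    # FFQ self-report
r = 0.5 * t + rng.normal(scale=0.9, size=n)    # 24-hour recall
m = 0.8 * t + rng.normal(scale=0.6, size=n)    # plasma marker

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Method-of-triads estimate of the FFQ validity corr(q, t),
# assuming independent errors across the three instruments
validity_q = np.sqrt(corr(q, r) * corr(q, m) / corr(r, m))
print(validity_q, corr(q, t))
```

In this simulation the triads estimate matches the directly computed validity corr(q, t), something that is impossible with real data because t is never observed; correlated (systematic) errors between the two self-reports would bias the estimate upward.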
Bergen, Silas; Sheppard, Lianne; Kaufman, Joel D; Szpiro, Adam A
2016-11-01
Air pollution epidemiology studies are trending towards a multi-pollutant approach. In these studies, exposures at subject locations are unobserved and must be predicted using observed exposures at misaligned monitoring locations. This induces measurement error, which can bias the estimated health effects and affect standard error estimates. We characterize this measurement error and develop an analytic bias correction when using penalized regression splines to predict exposure. Our simulations show bias from multi-pollutant measurement error can be severe, and in opposite directions or simultaneously positive or negative. Our analytic bias correction combined with a non-parametric bootstrap yields accurate coverage of 95% confidence intervals. We apply our methodology to analyze the association of systolic blood pressure with PM2.5 and NO2 in the NIEHS Sister Study. We find that NO2 confounds the association of systolic blood pressure with PM2.5 and vice versa. Elevated systolic blood pressure was significantly associated with increased PM2.5 and decreased NO2. Correcting for measurement error bias strengthened these associations and widened 95% confidence intervals.
Bhadra, Anindya; Carroll, Raymond J
2016-07-01
In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem and usually Gibbs sampling is not possible. This forces the practitioner to use a Metropolis-Hastings step which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show for the cases of truncated polynomial spline or B-spline models of degree equal to one, the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate up to 62 and 54 % increase in mean integrated squared error efficiency when compared to existing alternatives while using truncated polynomial splines and B-splines respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating the proposed method is a particularly valuable tool for challenging applications that present high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.
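The computational building block the article relies on, drawing from a mixture of double-truncated normals inside a Gibbs sweep, can be sketched with scipy's truncnorm. The mixture weights and component parameters below are invented for illustration; they are not the complete conditional derived in the paper.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(2)

def sample_truncnorm(mean, sd, lower, upper, size, rng):
    # scipy parameterizes the truncation bounds on the standard normal scale
    a, b = (lower - mean) / sd, (upper - mean) / sd
    return truncnorm.rvs(a, b, loc=mean, scale=sd, size=size, random_state=rng)

# Draw from a two-component mixture of double-truncated normals, the
# distributional form of the complete conditional described in the abstract
def sample_mixture(n, rng):
    comp = rng.random(n) < 0.3          # component indicator, weight 0.3 / 0.7
    out = np.empty(n)
    out[comp] = sample_truncnorm(-1.0, 0.5, -2.0, 0.0, comp.sum(), rng)
    out[~comp] = sample_truncnorm(1.0, 0.7, 0.0, 2.5, (~comp).sum(), rng)
    return out

draws = sample_mixture(10000, rng)
print(draws.min(), draws.max())
```

Because this conditional can be sampled exactly, each covariate update in the Gibbs sampler is a direct draw, avoiding the Metropolis-Hastings tuning problems the abstract describes.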
Study on position error of fiber positioning measurement system for LAMOST
NASA Astrophysics Data System (ADS)
Jin, Yi; Zhai, Chao; Xing, Xiaozheng; Teng, Yong; Hu, Hongzhuan
2006-06-01
An investigation of the measuring precision of the measurement system applied to the optical fiber positioning system for LAMOST is carried out. In the fiber positioning system, the geometrical coordinates of the fibers must be measured to verify the precision of fiber positioning, which is one of the most pivotal problems. The measurement system consists of an area CCD sensor, an image acquisition card, a lens, and a computer. Temperature, vibration, lens aberration, and the CCD itself can all cause measuring error. Because fiber positioning is a dynamic process in which the fibers rotate, additional error is introduced. The paper focuses on analyzing how the different states of the fibers influence measuring precision. The fibers are fixed together so that their relative positions stay steady while they rotate around a common point. The distances between fibers are measured under different experimental conditions, and the influence of the fibers' state is obtained from the change in these distances. The influence of different factors on position error is analyzed according to theory and experiment. Position error can be decreased by changing the lens aperture setting and polishing the fibers.
Correction of error in two-dimensional wear measurements of cemented hip arthroplasties.
The, Bertram; Mol, Linda; Diercks, Ron L; van Ooijen, Peter M A; Verdonschot, Nico
2006-01-01
The irregularity of individual wear patterns of total hip prostheses seen during patient followup may result partially from differences in radiographic projection of the components between radiographs. A method to adjust for this source of error would increase the value of individual wear curves. We developed and tested a method to correct for this source of error. The influence of patient position on validity of wear measurements was investigated with controlled manipulation of a cadaveric pelvis. Without correction, the error exceeded 0.2 mm if differences in cup projection were as small as 5 degrees. When using the described correction method, cup positioning differences could be greater than 20 degrees before introducing an error exceeding 0.2 mm. For followup of patients in clinical practice, we recommend using the correction method to enhance accuracy of the results.
Sideslip-induced static pressure errors in flight-test measurements
NASA Technical Reports Server (NTRS)
Parks, Edwin K.; Bach, Ralph E., Jr.; Tran, Duc
1990-01-01
During lateral flight-test maneuvers of a V/STOL research aircraft, large errors in static pressure were observed. An investigation of the data showed a strong correlation of the pressure record with variations in sideslip angle. The sensors for both measurements were located on a standard air-data nose boom. An algorithm based on potential flow over a cylinder that was developed to correct the pressure record for sideslip-induced errors is described. In order to properly apply the correction algorithm, it was necessary to estimate and correct the lag error in the pressure system. The method developed for estimating pressure lag is based on the coupling of sideslip activity into the static ports and can be used as a standard flight-test procedure. The estimation procedure is discussed and the corrected static-pressure record for a typical lateral maneuver is presented. It is shown that application of the correction algorithm effectively attenuates sideslip-induced errors.
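The potential-flow idea behind the correction algorithm can be sketched as follows: the surface pressure coefficient on a cylinder in cross-flow is Cp = 1 - 4 sin^2(theta), so a static port nominally at angle theta0 from the stagnation line effectively moves to theta0 - beta under sideslip beta, and the sensed static pressure acquires a sideslip-dependent error. The port location and dynamic pressure below are illustrative values, not the research aircraft's actual configuration.

```python
import numpy as np

# Sketch of the potential-flow model: pressure error at a static port on a
# cylindrical boom as a function of sideslip angle beta (degrees).
# theta0 is the nominal port angle from the stagnation line; q is the
# dynamic pressure in Pa. Both are assumed values for illustration.

def static_error(beta_deg, theta0_deg=90.0, q=500.0):
    th = np.radians(theta0_deg - beta_deg)          # effective port angle
    cp = 1.0 - 4.0 * np.sin(th) ** 2                # potential-flow Cp
    cp0 = 1.0 - 4.0 * np.sin(np.radians(theta0_deg)) ** 2
    return q * (cp - cp0)        # error relative to the zero-sideslip reading

for beta in (0.0, 5.0, 10.0):
    print(beta, static_error(beta))
```

The error vanishes at zero sideslip and grows with beta, which is the correlation between the static-pressure record and the sideslip angle that motivated the correction.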
Error correction with machine learning: one man's syndrome measurement is another man's treasure
NASA Astrophysics Data System (ADS)
Combes, Joshua; Briegel, Hans; Caves, Carlton; Cesare, Christopher; Ferrie, Christopher; Milburn, Gerard; Tiersch, Markus
2014-03-01
Syndrome measurements made in quantum error correction contain more information than is typically used. Using the data from syndrome measurements (which one has to make anyway), we show the following: (1) a channel can be dynamically estimated; (2) in some situations the information gathered from the estimation can be used to permanently correct away part of the channel; and (3) the data allow us to perform hypothesis testing to determine whether the errors are correlated or whether the error rate exceeds the ``expected worst case''. The unifying theme of these topics is making use of all of the information in the data collected from syndrome measurements with machine learning and control algorithms.
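Point (1), dynamically estimating a channel from syndrome data alone, can be sketched for the simplest possible case: a single round of the 3-qubit bit-flip repetition code with independent flips. This toy model and its parameters are mine, not the authors'.

```python
import numpy as np

rng = np.random.default_rng(3)
p_true = 0.05          # assumed per-qubit bit-flip probability
rounds = 200000

# Simulate single-round syndromes of the 3-qubit bit-flip repetition code:
# each qubit flips independently with probability p, and the two syndrome
# bits are the parities of neighbouring qubits
e = rng.random((rounds, 3)) < p_true
s1 = e[:, 0] ^ e[:, 1]
s2 = e[:, 1] ^ e[:, 2]

# Under this model each syndrome bit fires with probability 2p(1-p);
# inverting that relation estimates p from the syndrome record alone
rate = 0.5 * (s1.mean() + s2.mean())
p_hat = 0.5 * (1.0 - np.sqrt(1.0 - 2.0 * rate))
print(p_hat)
```

The channel parameter is recovered without ever measuring the data qubits directly, which is the sense in which the syndrome record is "treasure": the same statistics could also feed a hypothesis test for correlated errors, as in point (3).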
Differential correction technique for removing common errors in gas filter radiometer measurements
NASA Technical Reports Server (NTRS)
Wallio, H. A.; Chan, Caroline C.; Gormsen, Barbara B.; Reichle, Henry G., Jr.
1992-01-01
The Measurement of Air Pollution from Satellites (MAPS) gas filter radiometer experiment was designed to measure CO mixing ratios in the earth's atmosphere. MAPS also measures N2O to provide a reference channel for the atmospheric emitting temperature and to detect the presence of clouds. In this paper we formulate equations to correct the radiometric signals based on the spatial and temporal uniformity of the N2O mixing ratio in the atmosphere. Results of an error study demonstrate that these equations reduce the error in inferred CO mixing ratios. Subsequent application of the technique to the MAPS 1984 data set decreases the error in the frequency distribution of mixing ratios and increases the number of usable data points.
An Empirical Study for Impacts of Measurement Errors on EHR based Association Studies
Duan, Rui; Cao, Ming; Wu, Yonghui; Huang, Jing; Denny, Joshua C; Xu, Hua; Chen, Yong
2016-01-01
Over the last decade, Electronic Health Records (EHR) systems have been increasingly implemented at US hospitals. Despite their great potential, the complex and uneven nature of clinical documentation and data quality brings additional challenges for analyzing EHR data. A critical challenge is the information bias due to measurement errors in outcome and covariates. We conducted empirical studies to quantify the impacts of the information bias on association studies. Specifically, we designed our simulation studies based on the characteristics of the Electronic Medical Records and Genomics (eMERGE) Network. Through simulation studies, we quantified the loss of power due to misclassifications in case ascertainment and measurement errors in covariate status extraction, with respect to different levels of misclassification rates, disease prevalence, and covariate frequencies. These empirical findings can help investigators better understand the potential power loss due to misclassification and measurement errors under a variety of conditions in EHR based association studies. PMID:28269935
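A minimal version of such a power-loss simulation, with invented prevalence, exposure frequency, and odds ratio rather than the eMERGE-based settings of the study, might look like:

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(4)

def power(misclass_rate, n=500, n_sims=300, p_exposed=0.3,
          p_case_unexp=0.15, odds_ratio=2.0):
    # Simulate a case/control association and flip a fraction of case labels
    # to mimic EHR phenotyping error; return the rejection rate at alpha=0.05
    hits = 0
    odds_unexp = p_case_unexp / (1 - p_case_unexp)
    p_case_exp = odds_ratio * odds_unexp / (1 + odds_ratio * odds_unexp)
    for _ in range(n_sims):
        exposed = rng.random(n) < p_exposed
        case = rng.random(n) < np.where(exposed, p_case_exp, p_case_unexp)
        case = case ^ (rng.random(n) < misclass_rate)   # outcome misclassification
        table = np.array([[np.sum(case & exposed), np.sum(case & ~exposed)],
                          [np.sum(~case & exposed), np.sum(~case & ~exposed)]])
        if chi2_contingency(table)[1] < 0.05:
            hits += 1
    return hits / n_sims

p_clean = power(0.0)
p_noisy = power(0.2)
print(p_clean, p_noisy)
```

Even a 20% misclassification rate markedly dilutes the observed association and the study's power, the qualitative pattern the abstract quantifies across misclassification rates, prevalences, and covariate frequencies.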
Yi, G Y; Liu, W; Wu, Lang
2011-03-01
Longitudinal data arise frequently in medical studies, and it is common practice to analyze such data with generalized linear mixed models. Such models enable us to account for various types of heterogeneity, including between- and within-subject heterogeneity. Inferential procedures become dramatically more complicated when missing observations or measurement error arise. In the literature, there has been considerable interest in accommodating either incompleteness or covariate measurement error under random effects models. However, there is relatively little work concerning both features simultaneously. There is a need to fill this gap, as longitudinal data often have both characteristics. In this article, our objectives are to study the simultaneous impact of missingness and covariate measurement error on inferential procedures and to develop a valid method that is both computationally feasible and theoretically valid. Simulation studies are conducted to assess the performance of the proposed method, and a real example is analyzed with the proposed method.
Linear Increments with Non-monotone Missing Data and Measurement Error.
Seaman, Shaun R; Farewell, Daniel; White, Ian R
2016-12-01
Linear increments (LI) are used to analyse repeated outcome data with missing values. Previously, two LI methods have been proposed, one allowing non-monotone missingness but not independent measurement error and one allowing independent measurement error but only monotone missingness. In both, it was suggested that the expected increment could depend on current outcome. We show that LI can allow non-monotone missingness and either independent measurement error of unknown variance or dependence of expected increment on current outcome but not both. A popular alternative to LI is a multivariate normal model ignoring the missingness pattern. This gives consistent estimation when data are normally distributed and missing at random (MAR). We clarify the relation between MAR and the assumptions of LI and show that for continuous outcomes multivariate normal estimators are also consistent under (non-MAR and non-normal) assumptions not much stronger than those of LI. Moreover, when missingness is non-monotone, they are typically more efficient.
Measurement error models in chemical mass balance analysis of air quality data
NASA Astrophysics Data System (ADS)
Christensen, William F.; Gunst, Richard F.
The chemical mass balance (CMB) equations have been used to apportion observed pollutant concentrations to their various pollution sources. Typical analyses incorporate estimated pollution source profiles, estimated source profile error variances, and error variances associated with the ambient measurement process. Often the CMB model is fit to the data using an iteratively re-weighted least-squares algorithm to obtain the effective variance solution. We consider the chemical mass balance model within the framework of the statistical measurement error model (e.g., Fuller, W.A., Measurement Error Models, Wiley, New York, 1987), and we illustrate that the models assumed by each of the approaches to the CMB equations are in fact special cases of a general measurement error model. We compare alternative source contribution estimators with the commonly used effective variance estimator when standard assumptions are valid and when such assumptions are violated. Four approaches for source contribution estimation and inference are compared using computer simulation: weighted least squares (with standard errors adjusted for source profile error), the effective variance approach of Watson et al. (Atmos. Environ., 18, 1984, 1347), the Britt and Luecke (Technometrics, 15, 1973, 233) approach, and a method of moments approach given in Fuller (1987, p. 193). For the scenarios we consider, the simplistic weighted least-squares approach performs as well as the more widely used effective variance solution in most cases, and is slightly superior to the effective variance solution when source profile variability is large. The four estimation approaches are illustrated using real PM2.5 data from Fresno, and the conclusions drawn from the computer simulation are validated.
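The weighted least-squares approach compared above can be sketched in closed form for the model c = F s + e, weighting each species by its inverse measurement-error variance; the source profiles, contributions, and variances below are invented illustrative numbers, not real apportionment data.

```python
import numpy as np

# Weighted least-squares sketch of the CMB equations c = F s + e:
# c  - ambient species concentrations (species vector)
# F  - source profile matrix (species x sources)
# s  - source contributions to be estimated
F = np.array([[0.60, 0.05],     # fraction of each species in sources A and B
              [0.10, 0.50],
              [0.20, 0.25],
              [0.10, 0.20]])
s_true = np.array([8.0, 3.0])   # source contributions (ug/m^3), illustrative
var_e = np.array([0.01, 0.01, 0.01, 0.01])   # measurement error variances

rng = np.random.default_rng(5)
c = F @ s_true + rng.normal(scale=np.sqrt(var_e))

W = np.diag(1.0 / var_e)        # weight by inverse measurement variance
s_hat = np.linalg.solve(F.T @ W @ F, F.T @ W @ c)
print(s_hat)
```

The effective variance solution iterates a similar solve with weights that also absorb the source profile error variances, which is why the two approaches agree closely when profile variability is small.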
NASA Astrophysics Data System (ADS)
Chen, Hua; Chen, Jihong; Wang, Baorui; Zheng, Yongcheng
2016-10-01
The magnetorheological finishing (MRF) process, based on the dwell-time method with constant normal spacing for flexible polishing, introduces normal contour error when fine-polishing complex surfaces such as aspheric surfaces. The normal contour error changes the ribbon's shape and the consistency of the removal characteristics of MRF. Based on continuously scanning the normal spacing between the workpiece and the range finder with a laser range finder, a novel method is put forward to measure the normal contour errors along the machining track while polishing a complex surface. The normal contour errors are measured dynamically, which allows the workpiece's clamping precision, the multi-axis machining NC program, and the dynamic performance of the MRF machine to be verified as a security check of the MRF process. A unit for on-machine measurement of the normal contour errors of complex surfaces was designed. Using the measurement unit's results as feedback to adjust the parameters of the feed-forward control and the multi-axis machining, an optimized servo control method is presented to compensate for the normal contour errors. An experiment polishing a 180 mm × 180 mm aspherical workpiece of fused silica by MRF was set up to validate the method. The results show that the normal contour error was controlled to less than 10 μm, and the PV value of the polished surface accuracy improved from 0.95λ to 0.09λ under the same process parameters. The technology described in this paper has been applied in the PKC600-Q1 MRF machine developed by the China Academy of Engineering Physics since 2014, where it is used in the national large optical engineering program for processing ultra-precision optical parts.
Importance of regression processes in evaluating analytical errors in argon isotope measurements
NASA Astrophysics Data System (ADS)
Min, K.; Powell, L.
2003-04-01
For 40Ar/39Ar dating, five argon isotopes (36Ar to 40Ar) must be measured with high precision. The process involves isolating the purified gas in an analytical volume and cyclically measuring the abundance of each Ar isotope using an electron multiplier to minimize detector calibration and sensitivity errors. Each cycle comprises up to several tens of fundamental digital voltmeter (DVM) readings per isotope. Since the abundance of each isotope varies over the analytical time, the data must be treated statistically to obtain the most probable estimates. The readings on one mass from one cycle are commonly averaged and treated as a single data point for regression. The y-intercept derived from the regression is assumed to represent the initial isotopic abundance at the time (t0) when the gas was introduced into the analytical volume. This procedure is repeated for each Ar isotope. About 0.2% precision is often claimed for 40Ar and 39Ar measurements of properly irradiated, K-rich samples. The uncertainty of the calculated y-intercept varies depending on the distribution of the averaged DVM readings as well as the model equation used in the regression. The “internal error” associated with the distribution of the individual DVM readings within the group average is, however, commonly ignored in the regression procedure, probably because of the complex weighting involved. Including the internal error may significantly increase the uncertainties of 40Ar/39Ar ages, especially for young samples, because the analytical errors (from isotopic ratio measurements) dominate over the systematic errors (from the decay constant, the age of the neutron flux monitor, etc.). An alternative way to include the internal error is to regress all of the DVM readings with a single equation and then propagate the regression error into the y-intercept calculation. In any case, it is necessary to propagate uncertainties derived from the fundamental readings to properly estimate analytical errors in 40Ar/39Ar age
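The regression step described above, fitting cycle-averaged readings against time and extrapolating to t0, can be sketched as a weighted linear fit in which the per-cycle internal errors enter the weights and propagate into the y-intercept uncertainty. All numbers are illustrative, not real mass-spectrometer data.

```python
import numpy as np

# Weighted linear regression of cycle-averaged signal vs. time, extrapolated
# to t0 = 0. sig holds the per-cycle "internal errors" (scatter of the
# individual DVM readings within each average); they weight the fit and
# propagate into the intercept uncertainty.
t = np.array([10., 30., 50., 70., 90.])              # mid-cycle times (s)
v = np.array([1.002, 0.996, 0.991, 0.984, 0.979])    # averaged signal (V)
sig = np.array([0.002, 0.002, 0.003, 0.002, 0.003])  # internal errors (V)

w = 1.0 / sig**2
A = np.vstack([np.ones_like(t), t]).T            # columns: intercept, slope
cov = np.linalg.inv(A.T @ (w[:, None] * A))      # parameter covariance
intercept, slope = cov @ (A.T @ (w * v))
intercept_err = np.sqrt(cov[0, 0])
print(intercept, intercept_err)
```

Dropping the internal errors (equal weights) changes both the intercept and, more importantly, its quoted uncertainty, which is the point the abstract makes about young samples.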
Miller, Audrey K; Rufino, Katrina A; Boccaccini, Marcus T; Jackson, Rebecca L; Murrie, Daniel C
2011-06-01
This study investigated raters' personality traits in relation to scores they assigned to offenders using the Psychopathy Checklist-Revised (PCL-R). A total of 22 participants, including graduate students and faculty members in clinical psychology programs, completed a PCL-R training session, independently scored four criminal offenders using the PCL-R, and completed a comprehensive measure of their own personality traits. A priori hypotheses specified that raters' personality traits, and their similarity to psychopathy characteristics, would relate to raters' PCL-R scoring tendencies. As hypothesized, some raters assigned consistently higher scores on the PCL-R than others, especially on PCL-R Facets 1 and 2. Also as hypothesized, raters' scoring tendencies related to their own personality traits (e.g., higher rater Agreeableness was associated with lower PCL-R Interpersonal facet scoring). Overall, findings underscore the need for future research to examine the role of evaluator characteristics on evaluation results and the need for clinical training to address evaluators' personality influences on their ostensibly objective evaluations.
NASA Technical Reports Server (NTRS)
Huang, Hung-Lung; Smith, William L.; Woolf, Harold M.; Theriault, J. M.
1991-01-01
The purpose of this paper is to demonstrate the trace gas profiling capabilities of future passive high spectral resolution (1 cm^-1 or better) infrared (600 to 2700 cm^-1) satellite tropospheric sounders. These sounders, such as the grating spectrometer Atmospheric InfraRed Sounder (AIRS) (Chahine et al., 1990) and the interferometer GOES High Resolution Interferometer Sounder (GHIS) (Smith et al., 1991), can provide the unique infrared spectra that enable this analysis. In this calculation only the total random retrieval error component is presented. The systematic error components contributed by the forward and inverse model errors are not considered (the subject of further studies). The total random errors, which are composed of null space error (vertical resolution component error) and measurement error (instrument noise component error), are computed by assuming one wavenumber spectral resolution over the span 1100 cm^-1 to 2300 cm^-1 (the band 600 cm^-1 to 1100 cm^-1 is not used since there is no major absorption of the three gases there) and measurement noise of 0.25 K at a reference temperature of 260 K. Temperature, water vapor, and ozone profiles and mixing ratio profiles of nitrous oxide, carbon monoxide, and methane are taken from 1976 US Standard Atmosphere conditions (a FASCODE model). Covariance matrices of the gases are 'subjectively' generated by assuming 50 percent standard deviation of Gaussian perturbation with respect to their US Standard model profiles. Minimum information and maximum likelihood retrieval solutions are used.
Martin, D.L.
1992-01-01
Water-leaving radiances and phytoplankton pigment concentrations are calculated from Coastal Zone Color Scanner (CZCS) total radiance measurements by separating atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. Multiple scattering interactions between Rayleigh and aerosol components together with other meteorologically-moderated radiances cause systematic errors in calculated water-leaving radiances and produce errors in retrieved phytoplankton pigment concentrations. This thesis developed techniques which minimize the effects of these systematic errors in Level IIA CZCS imagery. Results of previous radiative transfer modeling by Gordon and Castano are extended to predict the pixel-specific magnitude of systematic errors caused by Rayleigh-aerosol multiple scattering interactions. CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere are simulated mathematically and radiance-retrieval errors are calculated for a range of aerosol optical depths. Pixels which exceed an error threshold in the simulated CZCS image are rejected in a corresponding actual image. Meteorological phenomena also cause artifactual errors in CZCS-derived phytoplankton pigment concentration imagery. Unless data contaminated with these effects are masked and excluded from analysis, they will be interpreted as containing valid biological information and will contribute significantly to erroneous estimates of phytoplankton temporal and spatial variability. A method is developed which minimizes these errors through a sequence of quality-control procedures including the calculation of variable cloud-threshold radiances, the computation of the extent of electronic overshoot from bright reflectors, and the imposition of a buffer zone around clouds to exclude contaminated data.
Measurement Errors in Microbial Water Quality Assessment: the Case of Bacterial Aggregates
NASA Astrophysics Data System (ADS)
Plancherel, Y.; Cowen, J. P.
2004-12-01
The quantification of the risk of illness for swimmers, bathers, or consumers exposed to a polluted water body involves the measurement of microbial indicator organism densities. Depending on the organism targeted, there exist two widely used (traditional) techniques for their enumeration: most probable number (MPN) and membrane filtration (MF). Estimation of indicator organism density by these traditional methods is subject to large measurement error, which translates into poorly constrained relationships between indicator organism density and illness rate. Neither the MPN nor the MF method can discriminate multiple cells that form an aggregate. Mathematical formulations and computer simulations are used to investigate the effects that bacterial clumps have on the measurement error of the concentrations. The first case considered is that of the formation of clusters induced during the membrane filtration process, assuming a randomly distributed population of cells growing into colonies. The computer simulations indicate that this process induces a typical measurement error <15% with the MF method. Replication of the MF measurements does not reduce this type of error. The second case describes a mathematical framework for the modeling of particle-associated bacteria. When aggregates harboring bacteria are present in a sample, an additional measurement error of 5-35% is expected. Empirical results from laboratory and field experiments enumerating aggregated bacteria using the MF method agree well with these model values. Furthermore, the data reveal that this type of error depends on the microbial indicators used (Enterococcus, C. perfringens, Heterotrophic Plate Count bacteria) and highlights the importance of small bacterial clusters (<5 μm).
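The first case above, clusters induced during filtration, can be reproduced in miniature with a Monte Carlo sketch: cells land uniformly at random on a circular membrane, and any two colonies closer than a merge radius are counted as one. The membrane size, merge radius, and cell count are invented, not the parameters of the study.

```python
import numpy as np

rng = np.random.default_rng(6)

# Monte Carlo sketch of filtration-induced clustering: n cells land
# uniformly on a circular membrane; colonies closer than merge_radius
# are indistinguishable and counted as one, so counts are biased low.
def counted_colonies(n_cells, membrane_radius=20.0, merge_radius=1.0, rng=rng):
    r = membrane_radius * np.sqrt(rng.random(n_cells))   # uniform over the disc
    th = 2 * np.pi * rng.random(n_cells)
    pts = np.column_stack([r * np.cos(th), r * np.sin(th)])
    parent = np.arange(n_cells)          # union-find over overlapping colonies
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    for i, j in zip(*np.where(np.triu(d2 < merge_radius**2, k=1))):
        parent[find(i)] = find(j)
    return len({find(i) for i in range(n_cells)})

n = 100
counts = [counted_colonies(n) for _ in range(50)]
undercount = 1 - np.mean(counts) / n
print(undercount)
```

At these illustrative densities the undercount lands in the low-percent-to-teens range, consistent with the <15% typical MF error the simulations above report, and averaging replicate plates does not remove it because every replicate is biased the same way.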
Schomaker, Michael; Hogger, Sara; Johnson, Leigh F.; Hoffmann, Christopher J.; Bärnighausen, Till; Heumann, Christian
2015-01-01
Background Both CD4 count and viral load in HIV infected persons are measured with error. There is no clear guidance on how to deal with this measurement error in the presence of missing data. Methods We used multiple overimputation, a method recently developed in the political sciences, to account for both measurement error and missing data in CD4 count and viral load measurements from four South African cohorts of a Southern African HIV cohort collaboration. Our knowledge about the measurement error of lnCD4 and log10 viral load is part of an imputation model that imputes both missing and mismeasured data. In an illustrative example we estimate the association of CD4 count and viral load with the hazard of death among patients on highly active antiretroviral therapy by means of a Cox model. Simulation studies evaluate the extent to which multiple overimputation is able to reduce bias in survival analyses. Results Multiple overimputation emphasizes more strongly the influence of having a high baseline CD4 count compared to a complete case analysis and multiple imputation (hazard ratio for >200 cells/mm3 vs. <25 cells/mm3: 0.21 [95%CI: 0.18;0.24] vs. 0.38 [0.29;0.48] and 0.29 [0.25;0.34] respectively). Similar results are obtained when varying assumptions about the measurement error, when using p-splines, and when evaluating time-updated CD4 count in a longitudinal analysis. The estimates of the association with viral load are slightly more attenuated when using multiple imputation instead of multiple overimputation. Our simulation studies suggest that multiple overimputation is able to reduce bias and mean squared error in survival analyses. Conclusions Multiple overimputation, which can be used with existing software, offers a convenient approach to account for both missing and mismeasured data in HIV research. PMID:26214336
ERIC Educational Resources Information Center
Televantou, Ioulia; Marsh, Herbert W.; Kyriakides, Leonidas; Nagengast, Benjamin; Fletcher, John; Malmberg, Lars-Erik
2015-01-01
The main objective of this study was to quantify the impact of failing to account for measurement error on school compositional effects. Multilevel structural equation models were incorporated to control for measurement error and/or sampling error. Study 1, a large sample of English primary students in Years 1 and 4, revealed a significantly…
On the impact of covariate measurement error on spatial regression modelling
Huque, Md Hamidul; Bondell, Howard; Ryan, Louise
2015-01-01
Summary Spatial regression models have grown in popularity in response to rapid advances in GIS (Geographic Information Systems) technology that allows epidemiologists to incorporate geographically indexed data into their studies. However, it turns out that there are some subtle pitfalls in the use of these models. We show that presence of covariate measurement error can lead to significant sensitivity of parameter estimation to the choice of spatial correlation structure. We quantify the effect of measurement error on parameter estimates, and then suggest two different ways to produce consistent estimates. We evaluate the methods through a simulation study. These methods are then applied to data on Ischemic Heart Disease (IHD). PMID:25729267
Influence of video compression on the measurement error of the television system
NASA Astrophysics Data System (ADS)
Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.
2015-05-01
Video data require a very large memory capacity. Choosing an encoding method with an optimal quality-to-volume ratio is a pressing problem, given the need to transfer large amounts of video over various networks. Digital TV signal compression reduces the amount of data used to represent the video stream, effectively reducing the bit rate required for transmission and storage. When television measuring systems are used, the uncertainties introduced by compression of the video signal must be taken into account. Many digital compression methods exist. The aim of the present work is to study the influence of video compression on measurement error in television systems. Measurement error of the object parameter is the main characteristic of a television measuring system: accuracy characterizes the difference between the measured value and the actual parameter value. The optical system is one source of error in television-system measurements; the method used to process the received video signal is another. With compression at a constant data stream rate, the presence of errors leads to large distortions; with compression at constant quality, it increases the amount of data required to transmit or record an image frame. The purpose of intra-coding is to reduce the spatial redundancy within a frame (or field) of the television image. This redundancy is caused by the strong correlation between the elements of the image. If a suitable orthogonal transformation can be found, an array of image samples can be converted into a matrix of mutually uncorrelated coefficients, to which entropy coding can be applied to reduce the digital stream. For typical images, a transformation can be chosen such that most of the matrix coefficients are almost zero. Excluding these zero coefficients also
Quantitative analyses of spectral measurement error based on Monte-Carlo simulation
NASA Astrophysics Data System (ADS)
Jiang, Jingying; Ma, Congcong; Zhang, Qi; Lu, Junsheng; Xu, Kexin
2015-03-01
The spectral measurement error is controlled by the resolution and sensitivity of the spectroscopic instrument and by the instability of the surrounding environment. In this talk, the spectral measurement error is analyzed quantitatively using Monte Carlo (MC) simulation. Taking the floating reference point measurement as an example, there is unavoidably a deviation between the actual measuring position and the theoretical position due to various influencing factors. To determine the error caused by the positioning accuracy of the measuring device, MC simulation was carried out at a wavelength of 1310 nm for a 2% Intralipid solution, with 10^10 photons and a ring sampling interval of 1 μm. The simulated data are analyzed on the basis of the thinning and calculating method (TCM) proposed in this talk. The results indicate that TCM can be used to quantitatively analyze the spectral measurement error caused by positioning inaccuracy.
Individual Feedback to Enhance Rater Training: Does It Work?
ERIC Educational Resources Information Center
Elder, Cathie; Knoch, Ute; Barkhuizen, Gary; von Randow, Janet
2005-01-01
Research on the utility of feedback to raters in the form of performance reports has produced mixed findings (Lunt, Morton, & Wigglesworth, 1994; Wigglesworth, 1993) and has thus far been trialled only in oral assessment contexts. This article reports on a study investigating raters' attitudes and responsiveness to feedback on their ratings of…
Training Raters to Assess Adult ADHD: Reliability of Ratings
ERIC Educational Resources Information Center
Adler, Lenard A.; Spencer, Thomas; Faraone, Stephen V.; Reimherr, Fred W.; Kelsey, Douglas; Michelson, David; Biederman, Joseph
2005-01-01
The standardization of ADHD ratings in adults is important given their differing symptom presentation. The authors investigated the agreement and reliability of rater standardization in a large-scale trial of atomoxetine in adults with ADHD. Training of 91 raters for the investigator-administered ADHD Rating Scale (ADHDRS-IV-Inv) occurred prior to…
Rater Strategies for Reaching Agreement on Pupil Text Quality
ERIC Educational Resources Information Center
Jølle, Lennart
2015-01-01
Novice members of a Norwegian national rater panel tasked with assessing Year 8 pupils' written texts were studied during three successive preparation sessions (2011-2012). The purpose was to investigate how the raters successfully make use of different decision-making strategies in an assessment situation where pre-set criteria and standards give…
Liu, Shi Qiang; Zhu, Rong
2016-01-29
Error compensation of micromachined-inertial-measurement-units (MIMU) is essential in practical applications. This paper presents a new compensation method using a neural-network-based identification for MIMU, which effectively solves the universal problems of cross-coupling, misalignment, eccentricity, and other deterministic errors existing in a three-dimensional integrated system. Using a neural network to model a complex multivariate and nonlinear coupling system, the errors can be readily compensated through a comprehensive calibration. In this paper, we also present a thermal-gas MIMU based on thermal expansion, which measures three-axis angular rates and three-axis accelerations using only three thermal-gas inertial sensors, each of which measures one-axis angular rate and one-axis acceleration simultaneously in one chip. The developed MIMU (100 × 100 × 100 mm³) possesses the advantages of simple structure, high shock resistance, and large measuring ranges (three-axis angular rates of ±4000°/s and three-axis accelerations of ±10 g) compared with conventional MIMU, due to using a gas medium instead of a mechanical proof mass as the key moving and sensing element. However, the gas MIMU suffers from cross-coupling effects, which corrupt the system accuracy. The proposed compensation method is, therefore, applied to compensate the system errors of the MIMU. Experiments validate the effectiveness of the compensation, and the measurement errors of three-axis angular rates and three-axis accelerations are reduced to less than 1% and 3% of uncompensated errors in the rotation range of ±600°/s and the acceleration range of ±1 g, respectively. PMID:26840314
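The paper compensates cross-coupling with a neural network; the sketch below uses a linear calibration matrix fitted by least squares instead, which removes the linear part of cross-coupling, misalignment, and bias, and illustrates the calibrate-then-compensate workflow on synthetic six-channel data (all coupling values invented).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" stimuli: 3-axis angular rate + 3-axis acceleration, 500 samples x 6
truth = rng.uniform(-1.0, 1.0, size=(500, 6))

# A mixing matrix models linear cross-coupling/misalignment between channels,
# plus a per-channel bias and a little sensor noise (all values hypothetical).
mix = np.eye(6) + 0.05 * rng.standard_normal((6, 6))
bias = 0.1 * rng.standard_normal(6)
raw = truth @ mix.T + bias + 0.01 * rng.standard_normal(truth.shape)

# Calibration: regress truth on raw readings (with an intercept column),
# i.e. solve min ||A @ coef - truth|| for a 7x6 coefficient matrix.
A = np.hstack([raw, np.ones((raw.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, truth, rcond=None)

def compensate(samples):
    """Apply the fitted calibration matrix to raw 6-channel readings."""
    a = np.hstack([samples, np.ones((samples.shape[0], 1))])
    return a @ coef

rms_before = float(np.sqrt(np.mean((raw - truth) ** 2)))
rms_after = float(np.sqrt(np.mean((compensate(raw) - truth) ** 2)))
print(rms_before, rms_after)
```

After calibration the residual error drops to roughly the sensor noise floor; nonlinear residuals are what motivate the paper's neural-network model.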
Exponential Decay of Reconstruction Error from Binary Measurements of Sparse Signals
2014-08-01
Exponential decay of reconstruction error from binary measurements of sparse signals. Richard Baraniuk, Simon Foucart, Deanna Needell, Yaniv Plan... August 1, 2014. Abstract: Binary measurements arise naturally in a variety of... greatly improve the ability to reconstruct a signal from binary measurements. This is exemplified by one-bit compressed sensing, which takes the
Yang, Jie; Liu, Qingquan; Dai, Wei
2017-02-01
To improve the air temperature observation accuracy, a low measurement error temperature sensor is proposed. A computational fluid dynamics (CFD) method is implemented to obtain temperature errors under various environmental conditions. Then, a temperature error correction equation is obtained by fitting the CFD results using a genetic algorithm method. The low measurement error temperature sensor, a naturally ventilated radiation shield, a thermometer screen, and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated platform served as an air temperature reference. The mean temperature errors of the naturally ventilated radiation shield and the thermometer screen are 0.74 °C and 0.37 °C, respectively. In contrast, the mean temperature error of the low measurement error temperature sensor is 0.11 °C. The mean absolute error and the root mean square error between the corrected results and the measured results are 0.008 °C and 0.01 °C, respectively. The correction equation allows the temperature error of the low measurement error temperature sensor to be reduced by approximately 93.8%. The low measurement error temperature sensor proposed in this research may be helpful to provide a relatively accurate air temperature result.
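The paper fits its correction equation to CFD output with a genetic algorithm; as a sketch of the same idea, the snippet below fits an assumed functional form, error = c0 + c1*S + c2/sqrt(v) (S: solar radiation, v: airflow speed), by ordinary least squares on synthetic "CFD" data. The functional form and all numbers are assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical CFD-style training data: radiation S (W/m^2), airflow v (m/s),
# and the resulting radiation-induced temperature error (deg C).
S = rng.uniform(100.0, 1000.0, 300)
v = rng.uniform(0.5, 5.0, 300)
true_err = 0.02 + 0.0008 * S + 0.15 / np.sqrt(v)
err = true_err + 0.01 * rng.standard_normal(300)   # scatter in the CFD results

# Fit error = c0 + c1*S + c2/sqrt(v) by least squares
# (a stand-in for the paper's genetic-algorithm fit).
X = np.column_stack([np.ones_like(S), S, 1.0 / np.sqrt(v)])
c, *_ = np.linalg.lstsq(X, err, rcond=None)

def corrected(measured_temp, S_now, v_now):
    """Subtract the modelled radiation error from a raw temperature reading."""
    return measured_temp - (c[0] + c[1] * S_now + c[2] / np.sqrt(v_now))

# Residual error after correction on the training conditions
resid = err - X @ c
rms_resid = float(np.sqrt(np.mean(resid ** 2)))
print(rms_resid)
```

The residual after correction is on the order of the fitting scatter, mirroring the paper's ~94% reduction of the raw temperature error.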
Improved modeling of multivariate measurement errors based on the Wishart distribution.
Wentzell, Peter D; Cleary, Cody S; Kompany-Zareh, M
2017-03-22
The error covariance matrix (ECM) is an important tool for characterizing the errors from multivariate measurements, representing both the variance and covariance in the errors across multiple channels. Such information is useful in understanding and minimizing sources of experimental error and in the selection of optimal data analysis procedures. Experimental ECMs, normally obtained through replication, are inherently noisy, inconvenient to obtain, and offer limited interpretability. Significant advantages can be realized by building a model for the ECM based on established error types. Such models are less noisy, reduce the need for replication, mitigate mathematical complications such as matrix singularity, and provide greater insights. While the fitting of ECM models using least squares has been previously proposed, the present work establishes that fitting based on the Wishart distribution offers a much better approach. Simulation studies show that the Wishart method results in parameter estimates with a smaller variance and also facilitates the statistical testing of alternative models using a parameterized bootstrap method. The new approach is applied to fluorescence emission data to establish the acceptability of various models containing error terms related to offset, multiplicative offset, shot noise and uniform independent noise. The implications of the number of replicates, as well as single vs. multiple replicate sets are also described.
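A minimal version of ECM model fitting by maximum Wishart likelihood: the model below has just two error terms, uniform independent noise (a*I) and a fully correlated offset (b*J), which is a simplification of the error types in the paper, and is fitted by grid search using only NumPy. All parameters are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 5, 2000

# True ECM: uniform independent noise plus a correlated offset term
a_true, b_true = 0.5, 0.3
Sigma_true = a_true * np.eye(p) + b_true * np.ones((p, p))

X = rng.multivariate_normal(np.zeros(p), Sigma_true, size=n)
S = np.cov(X, rowvar=False)   # replicate-based (noisy) ECM estimate

def wishart_loglik(Sigma, S, n):
    """Log-likelihood of the sample covariance S under (n-1)S ~ Wishart(n-1, Sigma),
    dropping additive terms that do not depend on Sigma."""
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * (n - 1) * (logdet + np.trace(np.linalg.solve(Sigma, S)))

best_params, best_ll = None, -np.inf
for a in np.arange(0.1, 1.01, 0.01):
    for b in np.arange(0.0, 0.61, 0.01):
        Sigma = a * np.eye(p) + b * np.ones((p, p))
        ll = wishart_loglik(Sigma, S, n)
        if ll > best_ll:
            best_params, best_ll = (a, b), ll

a_hat, b_hat = best_params
print(a_hat, b_hat)
```

The fitted parameters land close to the generating values; a real application would replace the grid search with a proper optimizer and richer error terms (offset, multiplicative, shot noise).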
Observation of spectrum effect on the measurement of intrinsic error field on EAST
NASA Astrophysics Data System (ADS)
Wang, Hui-Hui; Sun, You-Wen; Qian, Jin-Ping; Shi, Tong-Hui; Shen, Biao; Gu, Shuai; Liu, Yue-Qiang; Guo, Wen-Feng; Chu, Nan; He, Kai-Yang; Jia, Man-Ni; Chen, Da-Long; Xue, Min-Min; Ren, Jie; Wang, Yong; Sheng, Zhi-Cai; Xiao, Bing-Jia; Luo, Zheng-Ping; Liu, Yong; Liu, Hai-Qing; Zhao, Hai-Lin; Zeng, Long; Gong, Xian-Zu; Liang, Yun-Feng; Wan, Bao-Nian; The EAST Team
2016-06-01
Intrinsic error field on EAST is measured using the ‘compass scan’ technique with different n = 1 magnetic perturbation coil configurations in ohmically heated discharges. The intrinsic error field measured using a non-resonant dominated spectrum with even connection of the upper and lower resonant magnetic perturbation coils is of the order b_r^{2,1}/B_T ≃ 10^{-5}, and the toroidal phase of the intrinsic error field is around 60°. A clear difference between the results using the two coil configurations, resonant and non-resonant dominated spectra, is observed. The ‘resonant’ and ‘non-resonant’ terminology is based on vacuum modeling. The penetration thresholds of the non-resonant dominated cases are much smaller than those of the resonant cases. The difference in penetration thresholds between the resonant and non-resonant cases is reduced by plasma response modeling using the MARS-F code.
Nano-metrology: The art of measuring X-ray mirrors with slope errors <100 nrad.
Alcock, Simon G; Nistea, Ioana; Sawhney, Kawal
2016-05-01
We present a comprehensive investigation of the systematic and random errors of the nano-metrology instruments used to characterize synchrotron X-ray optics at Diamond Light Source. With experimental skill and careful analysis, we show that these instruments used in combination are capable of measuring state-of-the-art X-ray mirrors. Examples are provided of how Diamond metrology data have helped to achieve slope errors of <100 nrad for optical systems installed on synchrotron beamlines, including: iterative correction of substrates using ion beam figuring and optimal clamping of monochromator grating blanks in their holders. Simulations demonstrate how random noise from the Diamond-NOM's autocollimator adds into the overall measured value of the mirror's slope error, and thus predict how many averaged scans are required to accurately characterize different grades of mirror.
Barshan, Billur
2008-12-15
An objective error criterion is proposed for evaluating the accuracy of maps of unknown environments acquired by making range measurements with different sensing modalities and processing them with different techniques. The criterion can also be used for the assessment of goodness of fit of curves or shapes fitted to map points. A demonstrative example from ultrasonic mapping is given based on experimentally acquired time-of-flight measurements and compared with a very accurate laser map, considered as absolute reference. The results of the proposed criterion are compared with the Hausdorff metric and the median error criterion results. The error criterion is sufficiently general and flexible that it can be applied to discrete point maps acquired with other mapping techniques and sensing modalities as well.
Chen, Jeffrey R; Numata, Kenji; Wu, Stewart T
2014-10-20
We report new methods for retrieving atmospheric constituents from symmetrically-measured lidar-sounding absorption spectra. The forward model accounts for laser line-center frequency noise and broadened line-shape, and is essentially linearized by linking estimated optical-depths to the mixing ratios. Errors from the spectral distortion and laser frequency drift are substantially reduced by averaging optical-depths at each pair of symmetric wavelength channels. Retrieval errors from measurement noise and model bias are analyzed parametrically and numerically for multiple atmospheric layers, to provide deeper insight. Errors from surface height and reflectance variations are reduced to tolerable levels by "averaging before log" with pulse-by-pulse ranging knowledge incorporated.
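The benefit of "averaging before log" can be checked numerically: with zero-mean multiplicative noise on the received power, taking the log pulse-by-pulse inherits a Jensen bias of roughly sigma^2/2 in the retrieved optical depth, while averaging the ratios first largely avoids it. All numbers below are hypothetical.

```python
import math
import random

random.seed(3)
tau_true = 0.8      # true optical depth
sigma = 0.3         # multiplicative pulse-to-pulse noise (speckle, reflectance)
N = 20000

ratios = []
for _ in range(N):
    noise = 1.0 + random.gauss(0.0, sigma)
    noise = max(noise, 0.05)                 # keep the power ratio positive
    ratios.append(math.exp(-tau_true) * noise)

log_then_avg = -sum(math.log(r) for r in ratios) / N   # biased by ~ +sigma^2/2
avg_then_log = -math.log(sum(ratios) / N)              # approximately unbiased

print(log_then_avg - tau_true, avg_then_log - tau_true)
```

Averaging first cancels the zero-mean noise before the nonlinearity is applied, which is the same reason the authors average optical depths across symmetric wavelength channels before forming the retrieval.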
Irradiance measurement errors due to the assumption of a Lambertian reference panel
NASA Technical Reports Server (NTRS)
Kimes, D. S.; Kirchner, J. A.
1982-01-01
A technique is presented for determining the error in diurnal irradiance measurements that results from the non-Lambertian behavior of a reference panel under various irradiance conditions. Spectral biconical reflectance factors of a spray-painted barium sulfate panel, along with simulated sky radiance data for clear and hazy skies at six solar zenith angles, were used to calculate the estimated panel irradiances and true irradiances for a nadir-looking sensor in two wavelength bands. The inherent errors in total spectral irradiance (0.68 microns) for a clear sky were 0.60, 6.0, 13.0, and 27.0% for solar zenith angles of 0, 45, 60, and 75 deg, respectively. The technique can be used to characterize the error of a specific panel used in field measurements, and thus eliminate any ambiguity of the effects of the type, preparation, and aging of the paint.
Birch, Gabriel Carisle; Griffin, John Clark
2015-07-23
Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.
Jin, Tao; Ji, Hudong; Hou, Wenmei; Le, Yanfen; Shen, Lu
2017-01-20
This paper presents an enhanced differential plane mirror interferometer with high resolution for measuring straightness. Two sets of space symmetrical beams are used to travel through the measurement and reference arms of the straightness interferometer, which contains three specific optical devices: a Koster prism, a wedge prism assembly, and a wedge mirror assembly. Changes in the optical path in the interferometer arms caused by straightness are differential and converted into phase shift through a particular interferometer system. The interferometric beams have a completely common path and space symmetrical measurement structure. The crosstalk of the Abbe error caused by pitch, yaw, and roll angle is avoided. The dead path error is minimized, which greatly enhances the stability and accuracy of the measurement. A measurement resolution of 17.5 nm is achieved. The experimental results fit well with the theoretical analysis.
Hjarbaek, John; Eshoej, Henrik; Larsen, Camilla Marie; Vobbe, Jette; Juul-Kristensen, Birgit
2016-01-01
Aim To evaluate the inter-rater reliability of measuring structural changes in the tendon of patients, clinically diagnosed with supraspinatus tendinopathy (cases) and healthy participants (controls), on ultrasound (US) images captured by standardised procedures. Methods A total of 40 participants (24 patients) were included for assessing inter-rater reliability of measurements of fibrillar disruption, neovascularity, as well as the number and total length of calcifications and tendon thickness. Linear weighted κ, intraclass correlation (ICC), SEM, limits of agreement (LOA) and minimal detectable change (MDC) were used to evaluate reliability. Results ‘Moderate—almost perfect’ κ was found for grading fibrillar disruption, neovascularity and number of calcifications (k 0.60–0.96). For total length of calcifications and tendon thickness, ICC was ‘excellent’ (0.85–0.90), with SEM(Agreement) ranging from 0.63 to 2.94 mm and MDC(group) ranging from 0.28 to 1.29 mm. In general, SEM, LOA and MDC showed larger variation for calcifications than for tendon thickness. Conclusions Inter-rater reliability was moderate to almost perfect when a standardised procedure was applied for measuring structural changes on captured US images and movie sequences of relevance for patients with supraspinatus tendinopathy. Future studies should test intra-rater and inter-rater reliability of the method in vivo for use in clinical practice, in addition to validation against a gold standard, such as MRI. Trial registration number NCT01984203; Pre-results. PMID:27221128
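The ICC/SEM/MDC chain reported in this and the preceding reliability studies can be reproduced in a few lines. A sketch assuming synthetic two-rater data and the Shrout-Fleiss ICC(2,1) formula; the subject, rater-bias, and noise parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic two-rater data: 100 subjects, subject SD = 2.0, rater bias, noise SD = 0.5
n, k = 100, 2
subject = rng.normal(10.0, 2.0, size=(n, 1))
bias = np.array([0.0, 0.3])
ratings = subject + bias + rng.normal(0.0, 0.5, size=(n, k))

def icc_2_1(y):
    """Shrout-Fleiss ICC(2,1): two-way random effects, absolute agreement."""
    n, k = y.shape
    grand = y.mean()
    ms_rows = k * ((y.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
    ms_cols = n * ((y.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # raters
    sse = ((y - y.mean(axis=1, keepdims=True)
              - y.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_err = sse / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

icc = icc_2_1(ratings)
sem = ratings.std(ddof=1) * np.sqrt(1.0 - icc)       # standard error of measurement
mdc95 = 1.96 * np.sqrt(2.0) * sem                    # minimal detectable change
print(round(icc, 3), round(float(sem), 2), round(float(mdc95), 2))
```

With subject variance dominating rater bias and noise, the ICC lands in the "excellent" range, and SEM/MDC follow directly from it.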
Intra-rater variability in low-grade glioma segmentation.
Bø, Hans Kristian; Solheim, Ole; Jakola, Asgeir Store; Kvistad, Kjell-Arne; Reinertsen, Ingerid; Berntsen, Erik Magnus
2017-01-01
Assessment of size and growth are key radiological factors in low-grade gliomas (LGGs), both for prognostication and treatment evaluation, but the reliability of LGG-segmentation is scarcely studied. With a diffuse and invasive growth pattern, usually without contrast enhancement, these tumors can be difficult to delineate. The aim of this study was to investigate the intra-observer variability in LGG-segmentation for a radiologist without prior segmentation experience. Pre-operative 3D FLAIR images of 23 LGGs were segmented three times in the software 3D Slicer. Tumor volumes were calculated, together with the absolute and relative difference between the segmentations. To quantify the intra-rater variability, we used the Jaccard coefficient comparing both two (J2) and three (J3) segmentations as well as the Hausdorff Distance (HD). The variability measured with J2 improved significantly between the two last segmentations compared to the two first, going from 0.87 to 0.90 (p = 0.04). Between the last two segmentations, larger tumors showed a tendency towards smaller relative volume difference (p = 0.07), while tumors with well-defined borders had significantly less variability measured with both J2 (p = 0.04) and HD (p < 0.01). We found no significant relationship between variability and histological sub-types or Apparent Diffusion Coefficients (ADC). We found that the intra-rater variability can be considerable in serial LGG-segmentation, but the variability seems to decrease with experience and higher grade of border conspicuity. Our findings highlight that criteria defining tumor borders and progression in 3D volumetric segmentation are needed if moving from 2D to 3D assessment of size and growth of LGGs.
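The two agreement measures used in this study, the Jaccard coefficient and the Hausdorff distance, are straightforward to compute on binary segmentation masks. A minimal pure-Python sketch on two hypothetical 5x5 masks (the masks and their offset are invented):

```python
import math

def jaccard(a, b):
    """Jaccard coefficient of two sets of voxel coordinates: |A∩B| / |A∪B|."""
    return len(a & b) / len(a | b)

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two coordinate sets."""
    def directed(p, q):
        return max(min(math.dist(u, v) for v in q) for u in p)
    return max(directed(a, b), directed(b, a))

# Two hypothetical 5x5 "tumor" masks on a 10x10 slice, offset by (2, 2)
mask_a = {(i, j) for i in range(0, 5) for j in range(0, 5)}
mask_b = {(i, j) for i in range(2, 7) for j in range(2, 7)}

print(jaccard(mask_a, mask_b))   # overlap of 9 voxels out of a union of 41
print(hausdorff(mask_a, mask_b))
```

On real 3D volumes one would extract boundary voxels first, since the brute-force directed distance is quadratic in the number of points.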
Inter-rater Reliability Assessment of ASPECT-R
Bossie, Cynthia A.; Williamson, David; Mao, Lian; Kurut, Clennon
2016-01-01
Objective: The increasing importance of real-world data for clinical and policy decision making is driving a need for close attention to the pragmatic versus explanatory features of trial designs. ASPECT-R (A Study Pragmatic-Explanatory Characterization Tool-Rating) is an instrument informed by the PRECIS tool, which was developed to assist researchers in designing trials that are more pragmatic or explanatory. ASPECT-R refined the PRECIS domains and includes a detailed anchored rating system. This analysis established the inter-rater reliability of ASPECT-R. Design: Nine raters (identified from a convenience sample of persons knowledgeable about psychiatry clinical research/study design) received ASPECT-R training materials and 12 study publications. Selected studies assessed antipsychotic treatment in schizophrenia, were published in peer-reviewed journals, and represented a range of studies across a pragmatic-explanatory continuum as determined by authors (CB/LA). After completing training, raters reviewed the 12 studies and rated the study domains using ASPECT-R. Intraclass correlation coefficients were estimated for total and domain scores. Qualitative ratings were then assigned to describe the inter-rater reliability. Results: ASPECT-R scores for the 12 studies were completed by seven raters. The ASPECT-R total score intraclass correlation coefficient was 0.87, corresponding to an excellent inter-rater reliability. Domain intraclass correlation coefficients ranged from 0.85 to 0.31, corresponding to excellent to poor inter-rater reliability. Conclusion: The inter-rater reliability of the ASPECT-R total score was excellent, with excellent to good inter-rater reliability for most domains. The fair to poor inter-rater reliability for two domains may reflect a need for improved domain definition, anchoring, or training materials. ASPECT-R can be used to help understand the pragmatic-explanatory nature of completed or planned trials. PMID:27354926
NASA Astrophysics Data System (ADS)
Rowan, Olga K.; Keil, Gary D.; Clements, Tom E.
2014-12-01
Hardened depth (effective case depth) measurement is one of the most commonly used methods for carburizing performance evaluation. Variation in direct hardened depth measurements is routinely assumed to represent the heat treat process variation without properly correcting for the large uncertainty frequently observed in industrial laboratory measurements. These measurement uncertainties may also invalidate application of statistical control requirements on hardened depth. Gage R&R studies were conducted at three different laboratories on shallow and deep case carburized components. The primary objectives were to understand the magnitude of the measurement uncertainty and heat treat process variability, and to evaluate practical applicability of statistical control methods to metallurgical quality assessment. It was found that ~75% of the overall hardened depth variation is attributed to the measurement error resulting from the accuracy limitation of microhardness equipment and the linear interpolation technique. The measurement error was found to be proportional to the hardened depth magnitude and may reach ~0.2 mm uncertainty at 1.3 mm nominal depth and ~0.8 mm uncertainty at 3.2 mm depth. A case study was discussed to explain a methodology for analyzing a large body of hardened depth information, determination of the measurement error, and calculation of the true heat treat process variation.
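The "linear interpolation technique" cited above as an error source works roughly as sketched below: the microhardness traverse is scanned for the pair of indents bracketing the hardness threshold, the crossing depth is interpolated, and a hardness uncertainty divided by the local slope gives the induced depth uncertainty. The profile values and the 550 HV threshold are illustrative, not from the paper.

```python
# Hypothetical microhardness traverse: depth (mm) vs hardness (HV)
depths = [0.3, 0.6, 0.9, 1.2, 1.5, 1.8]
hv =     [720, 680, 620, 560, 500, 450]
threshold = 550.0   # effective-case-depth criterion (convention-dependent)

# Find the bracketing pair of indents and interpolate linearly
for i in range(len(hv) - 1):
    if hv[i] >= threshold >= hv[i + 1]:
        frac = (hv[i] - threshold) / (hv[i] - hv[i + 1])
        case_depth = depths[i] + frac * (depths[i + 1] - depths[i])
        slope = (hv[i] - hv[i + 1]) / (depths[i + 1] - depths[i])  # HV per mm
        break

# A +/-25 HV hardness uncertainty maps to a depth uncertainty of 25/slope
depth_uncertainty = 25.0 / slope
print(round(case_depth, 3), round(depth_uncertainty, 3))
```

The shallower the hardness gradient at the crossing point, the larger the depth uncertainty for the same hardness error, consistent with the paper's finding that the error grows with nominal depth.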
Lim, Sungwoo; Wyker, Brett; Bartley, Katherine; Eisenhower, Donna
2015-05-01
Because it is difficult to objectively measure population-level physical activity levels, self-reported measures have been used as a surveillance tool. However, little is known about their validity in populations living in dense urban areas. We aimed to assess the validity of self-reported physical activity data against accelerometer-based measurements among adults living in New York City and to apply a practical tool to adjust for measurement error in complex sample data using a regression calibration method. We used 2 components of data: 1) dual-frame random digit dialing telephone survey data from 3,806 adults in 2010-2011 and 2) accelerometer data from a subsample of 679 survey participants. Self-reported physical activity levels were measured using a version of the Global Physical Activity Questionnaire, whereas data on weekly moderate-equivalent minutes of activity were collected using accelerometers. Two self-reported health measures (obesity and diabetes) were included as outcomes. Participants with higher accelerometer values were more likely to underreport the actual levels. (Accelerometer values were considered to be the reference values.) After correcting for measurement errors, we found that associations between outcomes and physical activity levels were substantially deattenuated. Despite difficulties in accurately monitoring physical activity levels in dense urban areas using self-reported data, our findings show the importance of performing a well-designed validation study because it allows for understanding and correcting measurement errors.
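The correction described here follows the standard regression-calibration recipe: in a validation subsample with both instruments, regress the reference measure (accelerometer) on the self-report, then substitute the calibrated prediction in the outcome model. A minimal sketch with synthetic data; all coefficients and sample sizes are hypothetical.

```python
import random

random.seed(5)

def ols(x, y):
    """Slope and intercept of a simple least-squares regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((a - mx) * (c - my) for a, c in zip(x, y))
         / sum((a - mx) ** 2 for a in x))
    return b, my - b * mx

n = 5000
# True weekly activity (accelerometer treated as the reference measure)
z = [random.gauss(300.0, 80.0) for _ in range(n)]
# Self-report: over-reported and noisy
w = [50.0 + 1.2 * zi + random.gauss(0.0, 100.0) for zi in z]
# Outcome depends on true activity (slope -0.004 per minute, hypothetical)
y = [2.0 - 0.004 * zi + random.gauss(0.0, 0.5) for zi in z]

# Validation subsample (here the first 600): fit the calibration E[Z | W]
b_cal, a_cal = ols(w[:600], z[:600])
z_hat = [a_cal + b_cal * wi for wi in w]

naive_slope, _ = ols(w, y)     # attenuated and on the wrong scale
rc_slope, _ = ols(z_hat, y)    # regression-calibrated
print(naive_slope, rc_slope)
```

The calibrated slope recovers the true association, illustrating the de-attenuation the authors report after correcting for measurement error.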
A field calibration method to eliminate the error caused by relative tilt on roll angle measurement
NASA Astrophysics Data System (ADS)
Qi, Jingya; Wang, Zhao; Huang, Junhui; Yu, Bao; Gao, Jianmin
2016-11-01
The roll angle measurement method based on a heterodyne interferometer is an efficient technique owing to its high precision and immunity to environmental noise. The optical layout is based on a polarization-assisted conversion of the roll angle into an optical phase shift, read by a beam passing through an objective plate actuated by the roll rotation. The measurement sensitivity, or gain coefficient G, is calibrated beforehand. However, a relative tilt between the laser and the objective plate always exists in long-rail field measurements, due to tilt of the laser and roll of the guide. This relative tilt affects the value of G and thus results in roll angle measurement error. In this paper, a method for field calibration of G is presented to eliminate the measurement error described above. The field calibration layout converts the roll angle into an optical path change (OPC) by means of a rotary table, so the roll angle can be obtained from the OPC read by a two-frequency interferometer. Together with the phase shift, an accurate G for field measurement can be obtained and the measurement error corrected. The optical system of the field calibration method was set up and experimental results are given. Checked against a Renishaw XL-80 used for calibration, the proposed field calibration method obtains an accurate G in field rail roll angle measurement.
Effect of sampling variation on error of rainfall variables measured by optical disdrometer
NASA Astrophysics Data System (ADS)
Liu, X. C.; Gao, T. C.; Liu, L.
2012-12-01
During the sampling of precipitation particles by optical disdrometers, the randomness of particles and sampling variability have a great impact on the accuracy of precipitation variables. Based on a marked point model of raindrop size distribution, the effect of sampling variation on drop size distribution and velocity distribution measurements by optical disdrometers is analyzed by Monte Carlo simulation. The results show that the number of samples, rain rate, drop size distribution, and sampling size each influence the accuracy of rainfall variables differently. The relative errors caused by sampling variation rank, in descending order: water concentration, mean diameter, mass-weighted mean diameter, mean volume diameter, radar reflectivity factor, and number density; these errors are largely independent of the number of samples. The relative error of rainfall variables is positively correlated with the margin probability, which in turn is positively correlated with the rain rate and the mean diameter of the raindrops. The sampling size is one of the main factors that influence the margin probability: as the sampling area decreases, and especially as the short side of the sampling cross-section shrinks, the probability of margin raindrops grows and the error of the rainfall variables grows with it, with the variables for median-size raindrops showing the maximum error. To keep the relative error of rainfall variables measured by an optical disdrometer below 1%, the width of the light beam should be at least 40 mm.
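The "margin probability" above can be illustrated with a toy one-dimensional geometric model (an assumption of this sketch, not the paper's marked point model): a drop whose center falls within half a diameter of a beam edge is only partially inside the beam.

```python
# Toy 1-D model: a drop of diameter d is a "margin" drop when its center
# lies within d/2 of either edge of a light beam of width w; for a uniform
# center position over w + d, that happens with probability d / (w + d).
def margin_probability(d_mm, w_mm):
    return d_mm / (w_mm + d_mm)

# Narrower beams sample proportionally more margin drops
for w in (10, 20, 40):
    print(w, round(margin_probability(2.0, w), 3))
```

Consistent with the abstract's conclusion, widening the beam (toward 40 mm and beyond) shrinks the fraction of partially sampled drops and hence the associated error.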
Measurement error of 3D cranial landmarks of an ontogenetic sample using Computed Tomography
Barbeito-Andrés, Jimena; Anzelmo, Marisol; Ventrice, Fernando; Sardi, Marina L.
2012-01-01
Background/Aim Computed Tomography (CT) is a powerful tool in craniofacial research focused on morphological variation. In this field, an ontogenetic approach has been taken to study the developmental sources of variation and to understand the basis of morphological evolution. This work aimed to determine measurement error (ME) in cranial CT across diverse developmental stages and to characterize how this error relates to different types of landmarks. Material and methods We used a sample of fifteen skulls ranging from 0 to 31 years of age. Two observers placed landmarks on each image three times. Measurement error was assessed before and after Generalized Procrustes Analysis. Results The results indicated that ME is larger in neurocranial structures, which are described mainly by type III landmarks and semilandmarks. In addition, adult and infant specimens showed the same level of ME. These results are especially relevant in the context of craniofacial growth research. Conclusion CT images have become a frequent source of evidence in studies of cranial variation. Evaluating ME gives insight into potential sources of error when interpreting results. Neural structures present higher ME, mainly associated with landmark localization; however, this error is independent of age. If landmarks are correctly selected, they can be analyzed with the same level of reliability in adults and subadults. PMID:25737840
Results of error correction techniques applied on two high accuracy coordinate measuring machines
Pace, C.; Doiron, T.; Stieren, D.; Borchardt, B.; Veale, R. (National Inst. of Standards and Technology, Gaithersburg, MD)
1990-01-01
The Primary Standards Laboratory at Sandia National Laboratories (SNL) and the Precision Engineering Division at the National Institute of Standards and Technology (NIST) are in the process of implementing software error correction on two nearly identical high-accuracy coordinate measuring machines (CMMs). Both machines are Moore Special Tool Company M-48 CMMs fitted with laser positioning transducers. Although both machines were manufactured to high tolerance levels, their overall volumetric accuracy was insufficient for calibrating standards to the levels both laboratories require. The error mapping procedure was developed at NIST in the mid-1970s on an earlier but similar model. The original procedure was very complicated and made no assumptions about the rigidity of the machine as it moved; each of the possible error motions was measured independently at each point of the error map. A simpler mapping procedure, developed during the early 1980s, assumed rigid-body motion of the machine. This method has been used to calibrate lower-accuracy machines with a high degree of success, and similar software correction schemes have been implemented by many CMM manufacturers. The rigid-body model had not previously been used on highly repeatable CMMs such as the M-48. In this report we present early mapping data for the two M-48 CMMs. The SNL CMM was manufactured in 1985 and has been in service for approximately four years, whereas the NIST CMM was delivered in early 1989. 4 refs., 5 figs.
Study of angle measuring error mechanism caused by rotor run-outs
NASA Astrophysics Data System (ADS)
Lao, Dabao; Zhang, Wenying; Zhou, Weihu
2016-11-01
In a rotating angle measuring system, errors from the grating sensor, the installation, and rotor run-outs all contribute to the angle measurement error. The error caused by rotor run-outs is usually the largest and the hardest to eliminate. To improve accuracy, the rotary table must be fabricated precisely, which makes the table system complicated and expensive. This paper provides a method to address this challenge by using two gratings on the same table, grooved on the end face and the side face respectively. The error mechanisms of the end-face and side-face gratings under axial and radial rotor run-outs were deduced. The analysis shows that the end-face grating is sensitive to radial rotor run-out, while the side-face grating is sensitive to axial rotor run-out. Based on this conclusion, a combined arrangement with one end-face grating and one side-face grating can be used to restrain the error caused by rotor run-outs of the table.
ERIC Educational Resources Information Center
Charles, Eric P.
2005-01-01
The correction for attenuation due to measurement error (CAME) has received many historical criticisms, most of which can be traced to the limited ability to use CAME inferentially. Past attempts to determine confidence intervals for CAME are summarized and their limitations discussed. The author suggests that inference requires confidence sets…
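The correction for attenuation (CAME) discussed above is a one-line formula: divide the observed correlation by the geometric mean of the two score reliabilities. The numbers below are illustrative.

```python
import math

# Spearman's correction for attenuation: estimate the correlation between
# true scores from an observed correlation and the two score reliabilities.
def correct_attenuation(r_xy, rel_x, rel_y):
    return r_xy / math.sqrt(rel_x * rel_y)

# Illustrative values: observed r = .42, reliabilities .80 and .70
print(round(correct_attenuation(0.42, 0.80, 0.70), 3))
```

This yields only a point estimate; the abstract's point is that inference (confidence sets) around this quantity is the historically hard part.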
ERIC Educational Resources Information Center
Tan Sisman, Gulcin; Aksu, Meral
2016-01-01
The purpose of the present study was to portray students' misconceptions and errors while solving conceptually and procedurally oriented tasks involving length, area, and volume measurement. The data were collected from 445 sixth grade students attending public primary schools in Ankara, Türkiye via a test composed of 16 constructed-response…
The Impact of Measurement Error on the Accuracy of Individual and Aggregate SGP
ERIC Educational Resources Information Center
McCaffrey, Daniel F.; Castellano, Katherine E.; Lockwood, J. R.
2015-01-01
Student growth percentiles (SGPs) express students' current observed scores as percentile ranks in the distribution of scores among students with the same prior-year scores. A common concern about SGPs at the student level, and mean or median SGPs (MGPs) at the aggregate level, is potential bias due to test measurement error (ME). Shang,…
Measurement Error in Nonparametric Item Response Curve Estimation. Research Report. ETS RR-11-28
ERIC Educational Resources Information Center
Guo, Hongwen; Sinharay, Sandip
2011-01-01
Nonparametric, or kernel, estimation of item response curve (IRC) is a concern theoretically and operationally. Accuracy of this estimation, often used in item analysis in testing programs, is biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. In this study, we investigate…
Measurement Error of Scores on the Mathematics Anxiety Rating Scale across Studies.
ERIC Educational Resources Information Center
Capraro, Mary Margaret; Capraro, Robert M.; Henson, Robin K.
2001-01-01
Submitted the Mathematics Anxiety Rating Scale (MARS) (F. Richardson and R. Suinn, 1972) to a reliability generalization analysis to characterize the variability of measurement error in MARS scores across administrations and identify characteristics predictive of score reliability variations. Results for 67 analyses generally support the internal…
Correlation Attenuation Due to Measurement Error: A New Approach Using the Bootstrap Procedure
ERIC Educational Resources Information Center
Padilla, Miguel A.; Veprinsky, Anna
2012-01-01
Issues with correlation attenuation due to measurement error are well documented. More than a century ago, Spearman proposed a correction for attenuation. However, this correction has seen very little use since it can potentially inflate the true correlation beyond one. In addition, very little confidence interval (CI) research has been done for…
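A percentile-bootstrap confidence interval for the disattenuated correlation can be sketched as follows, in the spirit of the approach described; the data are simulated and the reliabilities are made-up constants, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical paired scores: two error-prone measures of correlated traits
n = 300
t = rng.normal(size=n)
x = t + rng.normal(0, 0.6, n)
y = 0.7 * t + rng.normal(0, 0.8, n)

# Assumed (made-up) score reliabilities for the two instruments
REL_X, REL_Y = 0.74, 0.55

def disattenuated_r(x, y):
    r = np.corrcoef(x, y)[0, 1]
    return r / np.sqrt(REL_X * REL_Y)   # Spearman's correction

# Percentile bootstrap: resample pairs, recompute the corrected correlation.
# Note the corrected value can exceed 1 -- the classic criticism of the
# plain correction that motivates interval methods like this one.
boots = [disattenuated_r(x[i], y[i])
         for i in (rng.integers(0, n, n) for _ in range(2000))]
lo, hi = np.percentile(boots, [2.5, 97.5])
print(round(lo, 2), round(hi, 2))
```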
Sensitivity of Force Specifications to the Errors in Measuring the Interface Force
NASA Technical Reports Server (NTRS)
Worth, Daniel
2000-01-01
Force-Limited Random Vibration Testing has been applied in the last several years at the NASA Goddard Space Flight Center (GSFC) and other NASA centers for various programs at the instrument and spacecraft level. Different techniques have been developed over the last few decades to estimate the dynamic forces that the test article under consideration will encounter in the flight environment. Some of these techniques are described in the handbook, NASA-HDBK-7004, and the monograph, NASA-RP-1403. This paper will show the effects of some measurement and calibration errors in force gauges. In some cases, the notches in the acceleration spectrum when a random vibration test is performed with measurement errors are the same as the notches produced during a test that has no measurement errors. The paper will also present the results of tests that were used to validate this effect. Knowing the effect of measurement errors can allow tests to continue after force gauge failures, or allow dummy gauges to be used in places that are inaccessible to a force gauge.
Mixture of normal distributions in multivariate null intercept measurement error model.
Aoki, Reiko; Pinto Júnior, Dorival Leão; Achcar, Jorge Alberto; Bolfarine, Heleno
2006-01-01
In this paper we propose the use of a multivariate null intercept measurement error model, where the true unobserved value of the covariate follows a mixture of two normal distributions. The proposed model is applied to a dental clinical trial presented in Hadgu and Koch (1999). A Bayesian approach is considered and a Gibbs Sampler is used to perform the computations.
Exploring Type I and Type II Errors Using Rhizopus Sporangia Diameter Measurements.
ERIC Educational Resources Information Center
Smith, Robert A.; Burns, Gerard; Freud, Brian; Fenning, Stacy; Hoffman, Rosemary; Sabapathi, Durai
2000-01-01
Presents exercises in which students can explore Type I and Type II errors using sporangia diameter measurements as a means of differentiating between two species. Examines the influence of sample size and significance level on the outcome of the analysis. (SAH)
Using Computation Curriculum-Based Measurement Probes for Error Pattern Analysis
ERIC Educational Resources Information Center
Dennis, Minyi Shih; Calhoon, Mary Beth; Olson, Christopher L.; Williams, Cara
2014-01-01
This article describes how "curriculum-based measurement--computation" (CBM-C) mathematics probes can be used in combination with "error pattern analysis" (EPA) to pinpoint difficulties in basic computation skills for students who struggle with learning mathematics. Both assessment procedures provide ongoing assessment data…
Covariate Measurement Error Correction for Student Growth Percentiles Using the SIMEX Method
ERIC Educational Resources Information Center
Shang, Yi; VanIwaarden, Adam; Betebenner, Damian W.
2015-01-01
In this study, we examined the impact of covariate measurement error (ME) on the estimation of quantile regression and student growth percentiles (SGPs), and found that SGPs tend to be overestimated among students with higher prior achievement and underestimated among those with lower prior achievement, a problem we describe as ME endogeneity in…
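The SIMEX idea can be sketched for a simple linear slope (simulated data; the study applies it to quantile regression and SGPs, which this toy example does not attempt): add extra measurement error at several inflation levels λ, watch the naive estimate degrade, and extrapolate the trend back to the error-free level λ = -1.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: outcome depends on true score X, we observe W = X + U
n, sigma_u = 5000, 0.8
x = rng.normal(0, 1, n)
w = x + rng.normal(0, sigma_u, n)
y = 0.5 * x + rng.normal(0, 0.5, n)

# SIMEX: simulate extra error at inflation levels lambda, average the naive
# estimator over simulations at each level ...
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for lam in lambdas:
    sims = [np.polyfit(w + rng.normal(0, np.sqrt(lam) * sigma_u, n), y, 1)[0]
            for _ in range(50)]
    slopes.append(np.mean(sims))

# ... then extrapolate the trend back to lambda = -1 (no measurement error)
coef = np.polyfit(lambdas, slopes, 2)   # quadratic extrapolant
simex_slope = np.polyval(coef, -1.0)
naive_slope = slopes[0]
print(naive_slope, simex_slope)
```

The extrapolated slope recovers most of the attenuation that the naive fit suffers.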
Technology Transfer Automated Retrieval System (TEKTRAN)
Three different models of tipping bucket rain gauges (TBRs), viz. HS-TB3 (Hydrological Services Pty Ltd), ISCO-674 (Isco, Inc.) and TR-525 (Texas Electronics, Inc.), were calibrated in the lab to quantify measurement errors across a range of rainfall intensities (5 mm h-1 to 250 mm h-1) and three di...
High dimensional linear regression models under long memory dependence and measurement error
NASA Astrophysics Data System (ADS)
Kaul, Abhishek
This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates problems of interest. A brief literature review is also provided in this chapter. The second chapter investigates the properties of Lasso under long range dependent model errors. Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case, where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution. We then show the asymptotic sign consistency in this setup. These results are established in the high dimensional setup (p > n) where p can be increasing exponentially with n. Finally, we show the consistency and the n^(1/2-d)-consistency of Lasso, along with the oracle property of adaptive Lasso, in the case where p is fixed. Here d is the memory parameter of the stationary error sequence. The performance of Lasso is also analysed in the present setup with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile based estimator for measurement error models. Standard formulations of prediction problems in high dimension regression models assume the availability of fully observed covariates and sub-Gaussian and homogeneous model errors. This makes these methods inapplicable to measurement errors models where covariates are unobservable and observations are possibly non sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where unobservable covariates are nonrandom. The proposed estimators forgo the need for the above mentioned model assumptions. We study these estimators in both the fixed dimension and high dimensional sparse setups, in the latter setup, the
Self-Test Web-Based Pure-Tone Audiometry: Validity Evaluation and Measurement Error Analysis
Kręcicki, Tomasz
2013-01-01
Background Potential methods of application of self-administered Web-based pure-tone audiometry, conducted at home on a PC with a sound card and ordinary headphones, depend on the value of measurement error in such tests. Objective The aim of this research was to determine the measurement error of the hearing threshold determined in the way described above and to identify and analyze factors influencing its value. Methods The evaluation of the hearing threshold was made in three series: (1) tests on a clinical audiometer, (2) self-tests done on a specially calibrated computer under the supervision of an audiologist, and (3) self-tests conducted at home. The research was carried out on a group of 51 participants selected from patients of an audiology outpatient clinic. Of the 51 patients examined in the first two series, 37 (73%) self-administered the third series at home. Results The average difference between the value of the hearing threshold determined in series 1 and in series 2 was -1.54 dB, with a standard deviation of 7.88 dB and a Pearson correlation coefficient of .90. Between the first and third series, these values were -1.35 dB ± 10.66 dB and .84, respectively. In series 3, the standard deviation was most influenced by the error connected with the procedure of hearing threshold identification (6.64 dB), the calibration error (6.19 dB), and additionally, at the frequency of 250 Hz, the frequency nonlinearity error (7.28 dB). Conclusions The obtained results confirm the possibility of applying Web-based pure-tone audiometry in screening tests. In the future, modifications of the method leading to a decrease in measurement error could broaden the scope of Web-based pure-tone audiometry applications. PMID:23583917
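The series-to-series comparison above reduces to simple paired statistics: the mean difference, its standard deviation, and the Pearson correlation. A sketch with made-up thresholds (not the study's data):

```python
import statistics as st

# Hypothetical paired hearing thresholds (dB HL) for eight subjects:
# clinical audiometer vs. supervised self-test (made-up numbers)
clinic = [10, 15, 20, 35, 40, 25, 30, 5]
webtest = [12, 13, 22, 38, 36, 27, 29, 8]

diffs = [a - b for a, b in zip(clinic, webtest)]
mean_diff = st.mean(diffs)      # systematic offset between methods
sd_diff = st.stdev(diffs)       # spread of the disagreement

# Pearson correlation from the covariance and the standard deviations
mc, mw = st.mean(clinic), st.mean(webtest)
cov = sum((a - mc) * (b - mw)
          for a, b in zip(clinic, webtest)) / (len(clinic) - 1)
r = cov / (st.stdev(clinic) * st.stdev(webtest))
print(round(mean_diff, 2), round(sd_diff, 2), round(r, 2))
```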
ERIC Educational Resources Information Center
Harshman, Jordan; Yezierski, Ellen
2016-01-01
Determining the error of measurement is a necessity for researchers engaged in bench chemistry, chemistry education research (CER), and a multitude of other fields. Discussions regarding what constructs measurement error entails and how to best measure them have occurred, but the critiques about traditional measures have yielded few alternatives.…
Error analysis for the ground-based microwave ozone measurements during STOIC
NASA Technical Reports Server (NTRS)
Connor, Brian J.; Parrish, Alan; Tsou, Jung-Jung; McCormick, M. Patrick
1995-01-01
We present a formal error analysis and characterization of the microwave measurements made during the Stratospheric Ozone Intercomparison Campaign (STOIC). The most important error sources are found to be determination of the tropospheric opacity, the pressure-broadening coefficient of the observed line, and systematic variations in instrument response as a function of frequency ('baseline'). Net precision is 4-6% between 55 and 0.2 mbar, while accuracy is 6-10%. Resolution is 8-10 km below 3 mbar and increases to 17 km at 0.2 mbar. We show the 'blind' microwave measurements from STOIC and make limited comparisons to other measurements. We use the averaging kernels of the microwave measurement to eliminate resolution and a priori effects from a comparison to SAGE II. The STOIC results and comparisons are broadly consistent with the formal analysis.
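When the error sources in a budget like the one above are independent, the net error is conventionally combined in quadrature (root-sum-square). The component names and percentages below are illustrative, not the paper's actual budget.

```python
import math

# Combine independent 1-sigma error components (percent) in quadrature.
# These component values are made up for illustration.
components = {"tropospheric opacity": 3.0,
              "pressure broadening": 2.5,
              "baseline": 2.0}
net = math.sqrt(sum(v ** 2 for v in components.values()))
print(round(net, 1))  # net 1-sigma error, percent
```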
Measurement of 2∕1 intrinsic error field of Joint TEXT tokamak.
Rao, B; Ding, Y H; Yu, K X; Jin, W; Hu, Q M; Yi, B; Nan, J Y; Wang, N C; Zhang, M; Zhuang, G
2013-04-01
The amplitude and spatial phase of the intrinsic error field of Joint TEXT (J-TEXT) tokamak were measured by scanning the spatial phase of an externally exerted resonant magnetic perturbation and fitting the mode locking thresholds. For a typical plasma with current of 180 kA, the amplitude of the 2/1 component of the error field at the plasma edge is measured to be 0.31 G, which is about 1.8 × 10^-5 relative to the base toroidal field. The measured spatial phase is about 317° in the specified coordinate system (r, θ, ϕ) of J-TEXT tokamak. An analytical model based on the dynamics of rotating island is developed to verify the measured phase.
Gilchrist, Michael A.; Shah, Premal; Zaretzki, Russell
2009-01-01
Codon usage bias (CUB) has been documented across a wide range of taxa and is the subject of numerous studies. While most explanations of CUB invoke some type of natural selection, most measures of CUB adaptation are heuristically defined. In contrast, we present a novel and mechanistic method for defining and contextualizing CUB adaptation to reduce the cost of nonsense errors during protein translation. Using a model of protein translation, we develop a general approach for measuring the protein production cost in the face of nonsense errors of a given allele as well as the mean and variance of these costs across its coding synonyms. We then use these results to define the nonsense error adaptation index (NAI) of the allele or a contiguous subset thereof. Conceptually, the NAI value of an allele is a relative measure of its elevation on a specific and well-defined adaptive landscape. To illustrate its utility, we calculate NAI values for the entire coding sequence and across a set of nonoverlapping windows for each gene in the Saccharomyces cerevisiae S288c genome. Our results provide clear evidence of adaptation to reduce the cost of nonsense errors and increasing adaptation with codon position and expression. The magnitude and nature of this adaptation are also largely consistent with simulation results in which nonsense errors are the only selective force driving CUB evolution. Because NAI is derived from mechanistic models, it is both easier to interpret and more amenable to future refinement than other commonly used measures of codon bias. Further, our approach can also be used as a starting point for developing other mechanistically derived measures of adaptation such as for translational accuracy. PMID:19822731
Bit error rate measurement above and below bit rate tracking threshold
NASA Technical Reports Server (NTRS)
Kobayaski, H. S.; Fowler, J.; Kurple, W. (Inventor)
1978-01-01
Bit error rate is measured by sending a pseudo-random noise (PRN) code test signal simulating digital data through digital equipment to be tested. An incoming signal representing the response of the equipment being tested, together with any added noise, is received and tracked by being compared with a locally generated PRN code. Once the locally generated PRN code matches the incoming signal a tracking lock is obtained. The incoming signal is then integrated and compared bit-by-bit against the locally generated PRN code and differences between bits being compared are counted as bit errors.
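Once tracking lock is obtained, the bit-by-bit comparison above reduces to counting disagreements between the received stream and the local PRN copy. A sketch that skips acquisition and tracking and simply injects a known flip probability into a simulated channel:

```python
import random

random.seed(3)

# Generate a pseudo-random bit sequence, corrupt it with a known flip
# probability, then count bit errors against the local copy.
def bit_error_rate(n_bits, flip_p):
    prn = [random.getrandbits(1) for _ in range(n_bits)]      # local PRN copy
    received = [b ^ (random.random() < flip_p) for b in prn]  # channel output
    errors = sum(a != b for a, b in zip(prn, received))       # bit-by-bit compare
    return errors / n_bits

print(bit_error_rate(100_000, 0.01))
```

With 100,000 bits the measured rate lands close to the injected 1%, illustrating why long integration windows are used.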
NASA Technical Reports Server (NTRS)
Merhav, S.; Velger, M.
1991-01-01
A method based on complementary filtering is shown to be effective in compensating for the image stabilization error due to sampling delays of HMD position and orientation measurements. These delays would otherwise have prevented the stabilization of the image in HMDs. The method is also shown to improve the resolution of the head orientation measurement, particularly at low frequencies, thus providing smoother head control commands, which are essential for precise head pointing and teleoperation.
Effects of Spectral Error in Efficiency Measurements of GaInAs-Based Concentrator Solar Cells
Osterwald, C. R.; Wanlass, M. W.; Moriarty, T.; Steiner, M. A.; Emery, K. A.
2014-03-01
This technical report documents a particular error in efficiency measurements of triple-absorber concentrator solar cells caused by incorrect spectral irradiance -- specifically, one that occurs when the irradiance from unfiltered, pulsed xenon solar simulators into the GaInAs bottom subcell is too high. For cells designed so that the light-generated photocurrents in the three subcells are nearly equal, this condition can cause a large increase in the measured fill factor, which, in turn, causes a significant artificial increase in the efficiency. The error is readily apparent when the data under concentration are compared to measurements with correctly balanced photocurrents, and manifests itself as discontinuities in plots of fill factor and efficiency versus concentration ratio. In this work, we simulate the magnitudes and effects of this error with a device-level model of two concentrator cell designs, and demonstrate how a new Spectrolab, Inc., Model 460 Tunable-High Intensity Pulsed Solar Simulator (T-HIPSS) can mitigate the error.
Automated quantitative measurements and associated error covariances for planetary image analysis
NASA Astrophysics Data System (ADS)
Tar, P. D.; Thacker, N. A.; Gilmour, J. D.; Jones, M. A.
2015-07-01
This paper presents a flexible approach for extracting measurements from planetary images based upon the newly developed linear Poisson models technique. The approach has the ability to learn surface textures then estimate the quantity of terrains exhibiting similar textures in new images. This approach is suitable for the estimation of dune field coverage or other repeating structures. Whilst other approaches exist, this method is unique for its incorporation of a comprehensive error theory, which includes contributions to uncertainty arising from training and subsequent use. The error theory is capable of producing measurement error covariances, which are essential for the scientific interpretation of measurements, i.e. for the plotting of error bars. In order to apply linear Poisson models, we demonstrate how terrains can be described using histograms created using a 'Poisson blob' image representation for capturing texture information. The validity of the method is corroborated using Monte Carlo simulations. The potential of the method is then demonstrated using terrain images created from bootstrap re-sampling of martian HiRISE data.
Measurement error in two-stage analyses, with application to air pollution epidemiology.
Szpiro, Adam A; Paciorek, Christopher J
2013-12-01
Public health researchers often estimate health effects of exposures (e.g., pollution, diet, lifestyle) that cannot be directly measured for study subjects. A common strategy in environmental epidemiology is to use a first-stage (exposure) model to estimate the exposure based on covariates and/or spatio-temporal proximity and to use predictions from the exposure model as the covariate of interest in the second-stage (health) model. This induces a complex form of measurement error. We propose an analytical framework and methodology that is robust to misspecification of the first-stage model and provides valid inference for the second-stage model parameter of interest. We decompose the measurement error into components analogous to classical and Berkson error and characterize properties of the estimator in the second-stage model if the first-stage model predictions are plugged in without correction. Specifically, we derive conditions for compatibility between the first- and second-stage models that guarantee consistency (and have direct and important real-world design implications), and we derive an asymptotic estimate of finite-sample bias when the compatibility conditions are satisfied. We propose a methodology that (1) corrects for finite-sample bias and (2) correctly estimates standard errors. We demonstrate the utility of our methodology in simulations and an example from air pollution epidemiology.
Flanders, W Dana; Kirkland, Kimberly H; Shelton, Brian G
2014-10-01
Outbreaks of Legionnaires' disease require environmental testing of water samples from potentially implicated building water systems to identify the source of exposure. A previous study reported a large impact on Legionella sample results due to shipping and delays in sample processing. Specifically, that study, without accounting for measurement error, reported that more than half of the shipped samples tested had Legionella levels that arbitrarily changed up or down by one or more logs, and the authors attributed this result to shipping time. Accordingly, we conducted a study to determine the effects of sample holding/shipping time on Legionella sample results while taking into account measurement error, which had previously not been addressed. We analyzed 159 samples, each split into 16 aliquots, of which one-half (8) were processed promptly after collection. The remaining half (8) were processed the following day to assess the impact of holding/shipping time. A total of 2544 samples were analyzed, including replicates. After accounting for inherent measurement error, we found that the effect of holding time on observed Legionella counts was small and should have no practical impact on interpretation of results. Holding samples increased the root mean squared error by only about 3-8%. Notably, for only one of the 159 samples did the average of the 8 replicate counts change by 1 log. Thus, our findings do not support the hypothesis of frequent, significant (≥1 log10 unit) Legionella colony count changes due to holding.
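The "≥1 log10 unit" change criterion used above is simply the absolute difference of the base-10 logarithms of the two counts; the colony counts below are made up for illustration.

```python
import math

# A count change of at least one log10 unit (a tenfold change up or down)
def log10_change(cfu_before, cfu_after):
    return abs(math.log10(cfu_after / cfu_before))

print(log10_change(100, 950) >= 1.0)   # False: about 0.98 logs
print(log10_change(100, 1200) >= 1.0)  # True: about 1.08 logs
```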
NASA Technical Reports Server (NTRS)
Akkari, S. H.; Frost, W.
1982-01-01
The effect of the rolling motion of a wing on the magnitude of the error induced by wing vibration when measuring atmospheric turbulence with a wind probe mounted on the wing tip was investigated. The wing considered had characteristics similar to those of a B-57 Canberra aircraft, and Von Karman's cross-spectrum function was used to estimate the cross-correlation of atmospheric turbulence. Although the calculated error was found to be less than that obtained when only elastic bending and vertical motion of the wing are considered, it is still relatively large in the frequency range close to the natural frequencies of the wing. Therefore, it is concluded that accelerometers mounted on the wing tip are needed to correct for this error, or the atmospheric velocity data must be appropriately filtered.
A method of treating the non-grey error in total emittance measurements
NASA Technical Reports Server (NTRS)
Heaney, J. B.; Henninger, J. H.
1971-01-01
In techniques for the rapid determination of total emittance, the sample is generally exposed to surroundings that are at a different temperature than the sample's surface. When the infrared spectral reflectance of the surface is spectrally selective, these techniques introduce an error into the total emittance values. Surfaces of aluminum overcoated with oxides of various thicknesses fall into this class. Because they are often used as temperature control coatings on satellites, their emittances must be accurately known. The magnitude of the error was calculated for Alzak and silicon oxide-coated aluminum and was shown to be dependent on the thickness of the oxide coating. The results demonstrate that, because the magnitude of the error is thickness-dependent, it is generally impossible or impractical to eliminate it by calibrating the measuring device.
Direct measurement of the poliovirus RNA polymerase error frequency in vitro
Ward, C.D.; Stokes, M.A.M.; Flanegan, J.B. )
1988-02-01
The fidelity of RNA replication by the poliovirus RNA-dependent RNA polymerase was examined by copying homopolymeric RNA templates in vitro. The poliovirus RNA polymerase was extensively purified and used to copy poly(A), poly(C), or poly(I) templates with equimolar concentrations of noncomplementary and complementary ribonucleotides. The error frequency was expressed as the amount of a noncomplementary nucleotide incorporated divided by the total amount of complementary and noncomplementary nucleotide incorporated. The polymerase error frequencies were very high and depended on the specific reaction conditions. The activity of the polymerase on poly(U) and poly(G) was too low to measure error frequencies on these templates. A fivefold increase in the error frequency was observed when the reaction conditions were changed from 3.0 mM Mg^2+ (pH 7.0) to 7.0 mM Mg^2+ (pH 8.0). This increase in error frequency correlates with an eightfold increase in the elongation rate observed under the same conditions in a previous study.
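The error-frequency definition in the abstract is a simple ratio; the incorporation amounts below are made up for illustration and are not the study's measurements.

```python
# Error frequency as defined above: noncomplementary incorporation divided
# by total (complementary + noncomplementary) incorporation.
def error_frequency(noncomplementary, complementary):
    return noncomplementary / (complementary + noncomplementary)

# Illustrative (made-up) incorporation amounts, same units for both
print(error_frequency(7.0, 10000.0))  # on the order of 7e-4
```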
Study of flow rate induced measurement error in flow-through nano-hole plasmonic sensor
Tu, Long; Huang, Liang; Wang, Tianyi; Wang, Wenhui
2015-01-01
Flow-through gold film perforated with periodically arrayed sub-wavelength nano-holes can cause extraordinary optical transmission (EOT), which has recently emerged as a label-free surface plasmon resonance sensor in biochemical detection by measuring the transmission spectral shift. This paper describes a systematic study of the effect of the microfluidic field on the EOT spectrum associated with the porous gold film. To detect biochemical molecules, the sub-micron-thick film is free-standing in a microfluidic field and thus subject to hydrodynamic deformation. The film deformation alone may cause a spectral shift that acts as measurement error, which is coupled with the spectral shift that constitutes the real signal associated with the molecules. However, this microfluid-induced measurement error has long been overlooked in the field and needs to be identified in order to improve measurement accuracy. We therefore conducted simulations and analytic analysis to investigate how the microfluidic flow rate affects the EOT spectrum, and verified the effect experimentally with a sandwiched device combining an Au/Cr/Si3N4 nano-hole film and polydimethylsiloxane microchannels. We found significant spectral blue shift even at small flow rates, for example, 12.60 nm at 4.2 μl/min. This measurement error corresponds to 90 times the optical resolution of the current state-of-the-art commercially available spectrometer, or 8400 times the limit of detection. Such a severe measurement error suggests that careful attention must be paid to the microfluidic parameter settings of EOT-based flow-through nano-hole sensors, and that an appropriate scheme be adopted to improve measurement accuracy. PMID:26649131
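The reported ratios directly imply the spectrometer figures; a quick back-of-envelope check using only the numbers in the abstract:

```python
blue_shift_nm = 12.60     # spectral blue shift at 4.2 uL/min
ratio_resolution = 90     # shift expressed in units of optical resolution
ratio_lod = 8400          # shift expressed in units of the limit of detection

resolution_nm = blue_shift_nm / ratio_resolution  # implied optical resolution
lod_nm = blue_shift_nm / ratio_lod                # implied limit of detection
print(resolution_nm, lod_nm)  # approximately 0.14 nm and 0.0015 nm (1.5 pm)
```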
Topping, David J.; Wright, Scott A.
2016-05-04
these sites. In addition, detailed, step-by-step procedures are presented for the general river application of the method.Quantification of errors in sediment-transport measurements made using this acoustical method is essential if the measurements are to be used effectively, for example, to evaluate uncertainty in long-term sediment loads and budgets. Several types of error analyses are presented to evaluate (1) the stability of acoustical calibrations over time, (2) the effect of neglecting backscatter from silt and clay, (3) the bias arising from changes in sand grain size, (4) the time-varying error in the method, and (5) the influence of nonrandom processes on error. Results indicate that (1) acoustical calibrations can be stable for long durations (multiple years), (2) neglecting backscatter from silt and clay can result in unacceptably high bias, (3) two frequencies are likely required to obtain sand-concentration measurements that are unbiased by changes in grain size, depending on site-specific conditions and acoustic frequency, (4) relative errors in silt-and-clay- and sand-concentration measurements decrease substantially as concentration increases, and (5) nonrandom errors may arise from slow changes in the spatial structure of suspended sediment that affect the relations between concentration in the acoustically ensonified part of the cross section and concentration in the entire river cross section. Taken together, the error analyses indicate that the two-frequency method produces unbiased measurements of suspended-silt-and-clay and sand concentration, with errors that are similar to, or larger than, those associated with conventional sampling methods.
NASA Astrophysics Data System (ADS)
Garcia-Fernandez, Jorge
2016-03-01
The need for accurate documentation for the preservation of cultural heritage has prompted the use of the terrestrial laser scanner (TLS) in this discipline. Its study in the heritage context has focused on opaque surfaces with Lambertian reflectance, while translucent and anisotropic materials remain a major challenge. The use of TLS on such materials is subject to significant measurement distortion due to their optical properties under laser stimulation. The distortion makes range-based measurement unsuitable for digital modelling in a wide range of cases. The purpose of this paper is to illustrate and discuss these deficiencies and the resulting errors in the documentation of marmorean surfaces using TLS based on time-of-flight and phase-shift. This paper also proposes reducing the error in depth measurement by adjusting the incidence of the laser beam. The analysis is conducted by controlled experiments.
Invited Review Article: Error and uncertainty in Raman thermal conductivity measurements
NASA Astrophysics Data System (ADS)
Beechem, Thomas; Yates, Luke; Graham, Samuel
2015-04-01
Error and uncertainty in Raman thermal conductivity measurements are investigated via finite element based numerical simulation of two geometries often employed—Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter—termed the Raman stress factor—is derived to identify when stress effects will induce large levels of error. Taken together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.
Influence of sky radiance measurement errors on inversion-retrieved aerosol properties
Torres, B.; Toledano, C.; Cachorro, V. E.; Bennouna, Y. S.; Fuertes, D.; Gonzalez, R.; Frutos, A. M. de; Berjon, A. J.; Dubovik, O.; Goloub, P.; Podvin, T.; Blarel, L.
2013-05-10
Remote sensing of atmospheric aerosol is a well-established technique that is currently used for routine monitoring of this atmospheric component, from both ground-based and satellite platforms. The AERONET program, initiated in the 1990s, is the most extensive network, and its data are currently used by a wide community of users for aerosol characterization, satellite and model validation, and synergetic use with other instrumentation (lidar, in-situ, etc.). Aerosol properties are derived within the network from measurements made by ground-based Sun-sky scanning radiometers. Sky radiances are acquired in two geometries: almucantar and principal plane. Discrepancies in the products obtained from the two geometries have been observed, and the main aim of this work is to determine whether they could be explained by measurement errors. Three systematic errors have been analyzed in order to quantify their effects on the inversion-derived aerosol properties: calibration, pointing accuracy, and finite field of view. Simulations have shown that typical uncertainties in the analyzed quantities (5% in calibration, 0.2° in pointing, and 1.2° field of view) lead to errors in the retrieved parameters that vary depending on the aerosol type and geometry. While calibration and pointing errors have a relevant impact on the products, the finite field of view does not produce notable differences.
Kubendran, Rajkumar; Lee, Seulki; Mitra, Srinjoy; Yazicioglu, Refet Firat
2014-04-01
Implantable and ambulatory measurement of physiological signals such as bio-impedance using miniature biomedical devices requires a careful tradeoff among a limited power budget, measurement accuracy, and complexity of implementation. This paper addresses this tradeoff through an extensive analysis of different stimulation and demodulation techniques for accurate bio-impedance measurement. Three cases are considered for rigorous analysis of a generic impedance model, with multiple poles, which is stimulated using a square/sinusoidal current and demodulated using a square/sinusoidal clock. For each case, the error in determining pole parameters (resistance and capacitance) is derived and compared. An error correction algorithm is proposed for square wave demodulation which reduces the peak estimation error from 9.3% to 1.3% for a simple tissue model. Simulation results in Matlab using ideal RC values show an average accuracy of for single pole and for two pole RC networks. Measurements using ideal components for a single pole model give an overall and readings from saline phantom solution (primarily resistive) give an . A figure of merit is derived based on the ability to accurately resolve multiple poles in an unknown impedance with minimal measurement points per decade, for a given frequency range and supply current budget. This analysis is used to arrive at an optimal tradeoff between accuracy and power. Results indicate that the algorithm is generic and can be used for any application that involves resolving the poles of an unknown impedance. It can be implemented as a post-processing technique for error correction or even incorporated into wearable signal monitoring ICs.
An analysis of temperature-induced errors for an ultrasound distance measuring system. M. S. Thesis
NASA Technical Reports Server (NTRS)
Wenger, David Paul
1991-01-01
The presentation of research is provided in the following five chapters. Chapter 2 presents the necessary background information and definitions for general work with ultrasound and acoustics. It also discusses the basis for errors in the slant range measurements. Chapter 3 presents a method of problem solution and an analysis of the sensitivity of the equations to slant range measurement errors. It also presents various methods by which the error in the slant range measurements can be reduced to improve overall measurement accuracy. Chapter 4 provides a description of a type of experiment used to test the analytical solution and provides a discussion of its results. Chapter 5 discusses the setup of a prototype collision avoidance system, discusses its accuracy, and demonstrates various methods of improving the accuracy along with the improvements' ramifications. Finally, Chapter 6 provides a summary of the work and a discussion of conclusions drawn from it. Additionally, suggestions for further research are made to improve upon what has been presented here.
Correction for dynamic bias error in transmission measurements of void fraction
NASA Astrophysics Data System (ADS)
Andersson, P.; Sundén, E. Andersson; Svärd, S. Jacobsson; Sjöstrand, H.
2012-12-01
Dynamic bias errors occur in transmission measurements, such as X-ray, gamma, or neutron radiography or tomography. They are observed when the properties of the object are not stationary in time and its average properties are assessed. The nonlinear measurement response to changes in transmission within the time scale of the measurement implies a bias, which can be difficult to correct for. A typical example is the tomographic or radiographic mapping of void content in dynamic two-phase flow systems. In this work, the dynamic bias error is described and a method to make a first-order correction is derived. A prerequisite for this method is a variance estimate of the system dynamics, which can be obtained using high-speed, time-resolved data acquisition. However, in the absence of such acquisition, a priori knowledge might be used to substitute for the time-resolved data. Using synthetic data, a void fraction measurement case study has been simulated to demonstrate the performance of the suggested method. The transmission length of the radiation in the object under study and the type of fluctuation of the void fraction have been varied. Significant decreases in the dynamic bias error were achieved at the expense of marginal decreases in precision.
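The bias described here is generic to time-averaging a nonlinear (exponential) transmission response; a minimal sketch of the effect and of a variance-based first-order correction, under assumed Gaussian fluctuations (an illustration of the principle, not the authors' exact formulation):

```python
import math
import random

random.seed(1)
mu = 1.0                            # attenuation coefficient (assumed units)
# Fluctuating attenuation path, e.g., a dynamic void distribution:
x = [2.0 + random.gauss(0.0, 0.4) for _ in range(100_000)]

# Time-averaged transmission, then naive inversion.  Because exp(-mu*x)
# is convex, Jensen's inequality makes the naive estimate biased low.
T_mean = sum(math.exp(-mu * xi) for xi in x) / len(x)
x_naive = -math.log(T_mean) / mu

# First-order correction using a variance estimate of the dynamics
# (in practice obtained from time-resolved data or a priori knowledge):
mean_x = sum(x) / len(x)
var_x = sum((xi - mean_x) ** 2 for xi in x) / (len(x) - 1)
x_corrected = x_naive + 0.5 * mu * var_x

print(mean_x, x_naive, x_corrected)  # naive underestimates the true mean
```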
NASA Astrophysics Data System (ADS)
Nuutinen, Mikko; Virtanen, Toni; Häkkinen, Jukka
2016-03-01
Evaluating algorithms used to assess image and video quality requires performance measures. Traditional performance measures (e.g., Pearson's linear correlation coefficient, Spearman's rank-order correlation coefficient, and root mean square error) compare quality predictions of algorithms to subjective mean opinion scores (mean opinion score/differential mean opinion score). We propose a subjective root-mean-square error (SRMSE) performance measure for evaluating the accuracy of algorithms used to assess image and video quality. The SRMSE performance measure takes into account dispersion between observers. The other important property of the SRMSE performance measure is its measurement scale, which is calibrated to units of the number of average observers. The results of the SRMSE performance measure indicate the extent to which the algorithm can replace the subjective experiment (as the number of observers). Furthermore, we have presented the concept of target values, which define the performance level of the ideal algorithm. We have calculated the target values for all sample sets of the CID2013, CVD2014, and LIVE multiply distorted image quality databases. The target values and MATLAB implementation of the SRMSE performance measure are available on the project page of this study.
The effect of systematic errors on the hybridization of optical critical dimension measurements
NASA Astrophysics Data System (ADS)
Henn, Mark-Alexander; Barnes, Bryan M.; Zhang, Nien Fan; Zhou, Hui; Silver, Richard M.
2015-06-01
In hybrid metrology two or more measurements of the same measurand are combined to provide a more reliable result that ideally incorporates the individual strengths of each of the measurement methods. While these multiple measurements may come from dissimilar metrology methods such as optical critical dimension microscopy (OCD) and scanning electron microscopy (SEM), we investigated the hybridization of similar OCD methods featuring a focus-resolved simulation study of systematic errors performed at orthogonal polarizations. Specifically, errors due to line edge and line width roughness (LER, LWR) and their superposition (LEWR) are known to contribute a systematic bias with inherent correlated errors. In order to investigate the sensitivity of the measurement to LEWR, we follow a modeling approach proposed by Kato et al. who studied the effect of LEWR on extreme ultraviolet (EUV) and deep ultraviolet (DUV) scatterometry. Similar to their findings, we have observed that LEWR leads to a systematic bias in the simulated data. Since the critical dimensions (CDs) are determined by fitting the respective model data to the measurement data by minimizing the difference measure or chi-square function, a proper description of the systematic bias is crucial to obtaining reliable results and to successful hybridization. In scatterometry, an analytical expression for the influence of LEWR on the measured orders can be derived, and accounting for this effect leads to a modification of the model function that not only depends on the critical dimensions but also on the magnitude of the roughness. For finite arrayed structures however, such an analytical expression cannot be derived. We demonstrate how to account for the systematic bias and that, if certain conditions are met, a significant improvement of the reliability of hybrid metrology for combining both dissimilar and similar measurement tools can be achieved.
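As a minimal illustration of the combination step (not the authors' full treatment, which must first remove the correlated LEWR bias), inverse-variance weighting of two bias-corrected measurements of the same CD; the numbers are hypothetical:

```python
def hybrid_estimate(measurements):
    """Combine (value, variance) pairs by inverse-variance weighting.
    Valid only after systematic biases (e.g., the LEWR-induced bias)
    have been removed so errors can be treated as independent."""
    weights = [1.0 / var for _, var in measurements]
    value = sum(w * m for (m, _), w in zip(measurements, weights)) / sum(weights)
    return value, 1.0 / sum(weights)

# Hypothetical CD results (nm) from OCD at two orthogonal polarizations:
cd, var = hybrid_estimate([(45.2, 0.30 ** 2), (45.6, 0.50 ** 2)])
print(cd, var)  # combined estimate lies nearer the lower-variance input
```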
On responder analyses when a continuous variable is dichotomized and measurement error is present.
Kunz, Michael
2011-02-01
In clinical studies results are often reported as proportions of responders, i.e. the proportion of subjects who fulfill a certain response criterion is reported, although the underlying variable of interest is continuous. In this paper, we consider the situation where a subject is defined as a responder if the (error-free) continuous measurements post-treatment are below a certain fraction of (error-free) continuous measurements obtained pre-treatment. Focus is on the one-sample case, but an extension to the two-sample case is also presented. The bias of different estimates for the proportion of responders is derived and compared. In addition, an asymptotically unbiased ML-type estimate for the proportion of responders is presented. The results are illustrated using data obtained in a clinical study investigating pre-menstrual dysphoric disorder (PMDD).
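The bias from dichotomizing error-prone measurements is easy to reproduce by simulation; a sketch under assumed (hypothetical) pre/post distributions and a 50%-reduction response criterion, comparing the error-free responder proportion with the one observed under measurement error:

```python
import random

random.seed(0)
n = 200_000
cut = 0.5          # responder: post <= 50% of pre (assumed criterion)
sd_err = 0.5       # measurement error SD (assumed)

true_resp = obs_resp = 0
for _ in range(n):
    pre = random.gauss(10.0, 1.0)    # error-free pre-treatment value
    post = random.gauss(6.0, 1.0)    # error-free post-treatment value
    true_resp += post <= cut * pre
    # The same subject, but measured with error pre and post:
    pre_obs = pre + random.gauss(0.0, sd_err)
    post_obs = post + random.gauss(0.0, sd_err)
    obs_resp += post_obs <= cut * pre_obs

print(true_resp / n, obs_resp / n)  # the observed proportion is biased
```

With these distributions the observed proportion overshoots the error-free one, because the added error widens the distribution of the post-minus-half-pre contrast.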
Measurement error analysis of the 3D four-wheel aligner
NASA Astrophysics Data System (ADS)
Zhao, Qiancheng; Yang, Tianlong; Huang, Dongzhao; Ding, Xun
2013-10-01
The positioning parameters of the four wheels have significant effects on the maneuverability, safety, and energy efficiency of automobiles. Addressing this issue, the error factors of the 3D four-wheel aligner, which arise in extracting image feature points, calibrating the internal and external parameters of the cameras, calculating positional parameters, and measuring target pose, are analyzed based on an elaboration of the structure and measurement principle of the 3D four-wheel aligner, as well as the toe-in and camber of the four wheels, kingpin inclination and caster, and other major positional parameters. Technical solutions are then proposed for reducing these error factors, and on this basis a new type of aligner has been developed and marketed; it is highly regarded by customers because its technical indicators meet requirements well.
The SIMEX approach to measurement error correction in meta-analysis with baseline risk as covariate.
Guolo, A
2014-05-30
This paper investigates the use of SIMEX, a simulation-based measurement error correction technique, for meta-analysis of studies involving the baseline risk of subjects in the control group as explanatory variable. The approach accounts for the measurement error affecting the information about either the outcome in the treatment group or the baseline risk available from each study, while requiring no assumption about the distribution of the true unobserved baseline risk. This robustness property, together with the feasibility of computation, makes SIMEX very attractive. The approach is suggested as an alternative to the usual likelihood analysis, which can provide misleading inferential results when the commonly assumed normal distribution for the baseline risk is violated. The performance of SIMEX is compared to the likelihood method and to the moment-based correction through an extensive simulation study and the analysis of two datasets from the medical literature.
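A minimal SIMEX sketch for the simplest setting, a linear regression slope attenuated by covariate measurement error (hypothetical data; the paper's meta-analytic setting with baseline risk is more involved). The idea is exactly as described: simulate extra measurement error at increasing levels, refit, and extrapolate back to the error-free level:

```python
import random

random.seed(2)

def slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)

# y = 2*x + noise, but only an error-prone w = x + N(0, tau) is observed:
n, tau = 5_000, 0.8
x = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [2.0 * xi + random.gauss(0.0, 0.5) for xi in x]
w = [xi + random.gauss(0.0, tau) for xi in x]

# SIMEX step: inflate the error variance by (1 + lam) and refit.
def beta_at(lam, reps=20):
    total = 0.0
    for _ in range(reps):
        w_lam = [wi + random.gauss(0.0, tau * lam ** 0.5) for wi in w]
        total += slope(w_lam, y)
    return total / reps

b0, b1, b2 = slope(w, y), beta_at(1.0), beta_at(2.0)
# Exact quadratic through lam = 0, 1, 2 evaluated at lam = -1 ("no error"):
b_simex = 3.0 * b0 - 3.0 * b1 + b2
print(b0, b_simex)  # b_simex recovers much of the attenuation toward 2.0
```

The quadratic extrapolant is the conventional (approximate) choice; it reduces, but does not fully remove, the attenuation bias in this example.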
Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances.
Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip
2015-08-06
In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. A standard deviation of the measured distance is the accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5-50 m) because it is generally known that this accuracy cannot be increased by simply repeating the measurement because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments' results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%.
NASA Astrophysics Data System (ADS)
Wächtler, Christopher W.; Strasberg, Philipp; Brandes, Tobias
2016-11-01
In the derivation of fluctuation relations, and in stochastic thermodynamics in general, it is tacitly assumed that we can measure the system perfectly, i.e., without measurement errors. We here demonstrate, for a driven system immersed in a single heat bath, for which the classic Jarzynski equality ⟨e^(-β(W-ΔF))⟩ = 1 holds, how to relax this assumption. Based on a general measurement model akin to Bayesian inference we derive a general expression for the fluctuation relation of the measured work, and we study the case of an overdamped Brownian particle and of a two-level system in particular. We then generalize our results further and incorporate feedback in our description. We show and argue that, if measurement errors are fully taken into account by the agent who controls and observes the system, the standard Jarzynski-Sagawa-Ueda relation should be formulated differently. We again explicitly demonstrate this for an overdamped Brownian particle and a two-level system where the fluctuation relation of the measured work differs significantly from the efficacy parameter introduced by Sagawa and Ueda. Instead, the generalized fluctuation relation under feedback control, ⟨e^(-β(W-ΔF)-I)⟩ = 1, holds only for a superobserver having perfect access to both the system and detector degrees of freedom, independently of whether or not the detector yields a noisy measurement record and whether or not we perform feedback.
Mapping of error cells in clinical measure to symmetric power space.
Abelman, H; Abelman, S
2007-09-01
During the refraction procedure, the power of the nearest equivalent sphere lens, known as the scalar power, is conserved within upper and lower bounds in the sphere (and cylinder) lens powers. The bounds are brought closer together while keeping the circle of least confusion on the retina. The sphere and cylinder powers, and changes in these powers, are thus dependent. Changes are depicted in the cylinder-sphere plane by error cells with one pair of parallel sides of negative gradient and the other pair aligned with the graph axis of cylinder power. Scalar power constitutes a vector space, is a meaningful ophthalmic quantity, and is represented by the semi-trace of the dioptric power matrix. The purpose of this article is to map the error cells surrounding powers in sphere, cylinder, and axis to error cells for the coordinates of the dioptric power matrix, its principal powers and meridians, and its entries. Error cells in clinical measure for conserved scalar power now contain more compensatory lens powers. Such cells and their respective mappings in terms of most scientific and alternative clinical quantities now map consistently not only to the cells from which they originate but also to each other.
Error characterization in iQuam SSTs using triple collocations with satellite measurements
NASA Astrophysics Data System (ADS)
Xu, Feng; Ignatov, Alexander
2016-10-01
Various types of in situ sea surface temperature (SST) measurements have dominated during different periods of the satellite era. Their corresponding errors should be characterized to curtail the nonuniformities in calibration and validation of reprocessed historical satellite SST data. SSTs from several major in situ platform types reported in the NOAA in situ Quality Monitor (iQuam) system have been collocated with NOAA-17 Advanced Very High Resolution Radiometer (AVHRR) and Envisat Advanced Along Track Scanning Radiometer (AATSR) satellite SSTs from 2003 to 2009, produced by the European Space Agency (ESA) Climate Change Initiative (CCI) program. The standard deviations of errors in iQuam in situ and nighttime satellite CCI SSTs estimated using triple-collocation analyses are 0.75 K for ships, 0.21-0.22 K for drifters and Argo floats, 0.17 K and 0.40 K for tropical and coastal moorings, 0.35-0.38 K for AVHRR, and 0.15-0.30 K for AATSR. The distribution of in situ and satellite errors in space and time is also analyzed, along with their single-sensor error distributions.
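The triple-collocation estimator itself is compact; a synthetic sketch with assumed error levels similar to those reported (ships 0.75 K, drifters 0.21 K, AVHRR 0.35 K), showing how the individual error variances fall out of the pairwise covariances of three collocated systems:

```python
import random

random.seed(3)
n = 150_000
truth = [random.gauss(20.0, 1.0) for _ in range(n)]  # "true" SST field (assumed)
sd = (0.75, 0.21, 0.35)   # assumed error SDs: ship, drifter, AVHRR
obs = [[t + random.gauss(0.0, s) for t in truth] for s in sd]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

c = [[cov(obs[i], obs[j]) for j in range(3)] for i in range(3)]
# With mutually independent, unbiased errors:
#   var(e_i) = C_ii - C_ij * C_ik / C_jk   (i, j, k distinct)
est_sd = [
    (c[0][0] - c[0][1] * c[0][2] / c[1][2]) ** 0.5,
    (c[1][1] - c[0][1] * c[1][2] / c[0][2]) ** 0.5,
    (c[2][2] - c[0][2] * c[1][2] / c[0][1]) ** 0.5,
]
print(est_sd)  # recovers approximately (0.75, 0.21, 0.35)
```

The key assumption is that the three systems' errors are mutually independent, which is why dissimilar platform types (in situ, infrared satellite) are chosen for the triplet.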
First measurements of error fields on W7-X using flux surface mapping
Lazerson, Samuel A.; Otte, Matthias; Bozhenkov, Sergey; ...
2016-08-03
Error fields have been detected and quantified using the flux surface mapping diagnostic system on Wendelstein 7-X (W7-X). A low-field 'ɩ = 1/2' magnetic configuration (ɩ = ι/2π), sensitive to error fields, was developed in order to detect their presence using the flux surface mapping diagnostic. In this configuration, a vacuum flux surface with rotational transform of n/m = 1/2 is created at the mid-radius of the vacuum flux surfaces. If no error fields are present, a vanishingly small n/m = 5/10 island chain should be present. Modeling indicates that if an n = 1 perturbing field is applied by the trim coils, a large n/m = 1/2 island chain will be opened. This island chain is used to create a perturbation large enough to be imaged by the diagnostic. Phase and amplitude scans of the applied field allow the measurement of a small ~0.04 m intrinsic island chain with a 130° phase relative to the first module of the W7-X experiment. Lastly, these error fields are determined to be small and easily correctable by the trim coil system.
The effect of measurement error on the dose-response curve
Yoshimura, I.
1990-07-01
In epidemiological studies for environmental risk assessment, doses are often observed with errors; however, these errors have received little attention in data analysis. This paper studies the effect of measurement errors on the observed dose-response curve. Under the assumptions of a monotone likelihood ratio on the errors and a monotonically increasing dose-response curve, it is verified that the slope of the observed dose-response curve is likely to be gentler than the true one. The observed variances of the responses are also not as homogeneous as would be expected under error-free models. The estimation of parameters in a hockey-stick type dose-response curve with a threshold is considered along the lines of the maximum likelihood method for a functional relationship model. Numerical examples adaptable to the data in a 1986 study of the effect of air pollution conducted in Japan are also presented. The proposed model is shown to be suitable for the data in the example cited in this paper.
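The flattening of the observed slope can be reproduced with a few lines of simulation (hypothetical linear dose-response and Gaussian errors, simpler than the paper's hockey-stick model with threshold):

```python
import random

random.seed(4)
n = 50_000
beta = 1.5                            # true slope (hypothetical)
dose = [random.gauss(5.0, 1.0) for _ in range(n)]
resp = [beta * d + random.gauss(0.0, 1.0) for d in dose]
dose_obs = [d + random.gauss(0.0, 0.8) for d in dose]   # dose observed with error

def slope(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)

b_true = slope(dose, resp)       # ~1.5
b_obs = slope(dose_obs, resp)    # attenuated toward beta/(1 + 0.8**2) ~ 0.91
print(b_true, b_obs)
```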
Sideslip-induced static pressure errors in flight-test measurements
NASA Technical Reports Server (NTRS)
Parks, Edwin K.; Bach, Ralph E., Jr.; Tran, Duc
1990-01-01
During lateral flight-test maneuvers of a V/STOL research aircraft, large errors in static pressure were observed. An investigation of the data showed a strong correlation of the pressure record with variations in sideslip angle. The sensors for both measurements were located on a standard air-data nose boom. This paper describes an algorithm based on potential flow over a cylinder that was developed to correct the pressure record for sideslip-induced errors. In order to properly apply the correction algorithm, it was necessary to estimate and correct the lag error in the pressure system. The method developed for estimating pressure lag is based on the coupling of sideslip activity into the static ports and can be used as a standard flight-test procedure. The paper discusses the estimation procedure and presents the corrected static-pressure record for a typical lateral maneuver. It is shown that application of the correction algorithm effectively attenuates sideslip-induced errors.
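The cylinder potential-flow model invoked above gives Cp(θ) = 1 − 4 sin²θ; a sketch of the kind of correction this implies (the abstract does not give the actual algorithm or port geometry, so the form below is an assumption):

```python
import math

def correct_static_pressure(p_measured, q, beta_rad):
    """Remove a sideslip-induced static-pressure error using potential
    flow over a cylinder, Cp(theta) = 1 - 4*sin(theta)**2.  The error is
    taken as the change in port pressure relative to zero sideslip:
        dp = q * (Cp(beta) - Cp(0)) = -4 * q * sin(beta)**2
    q is the dynamic pressure; beta is the (lag-corrected) sideslip angle.
    Hypothetical form, for illustration only."""
    dp = -4.0 * q * math.sin(beta_rad) ** 2
    return p_measured - dp

# Zero sideslip: no correction.  Nonzero sideslip: the reading is low,
# so the correction raises it; the effect is symmetric in +/- beta.
p0 = correct_static_pressure(101_000.0, 500.0, 0.0)
p1 = correct_static_pressure(101_000.0, 500.0, math.radians(8.0))
```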
Sensitivity of Force Specifications to the Errors in Measuring the Interface Force
NASA Technical Reports Server (NTRS)
Worth, Daniel
1999-01-01
Force-Limited Random Vibration Testing has been applied in the last several years at NASA/GSFC for various programs at the instrument and system level. Different techniques have been developed over the last few decades to estimate the dynamic forces that the test article under consideration will encounter in the operational environment. Some of these techniques are described in the handbook, NASA-HDBK-7004, and the monograph, NASA-RP-1403. A key element in the ability to perform force-limited testing is multi-component force gauges. This paper will show how some measurement and calibration errors in force gauges are compensated for when the force specification is calculated. The resulting notches in the acceleration spectrum, when a random vibration test is performed, are the same as the notches produced during an uncompensated test that has no measurement errors. The paper will also present the results of tests that were used to validate this compensation. Knowing that the force specification can compensate for some measurement errors allows tests to continue after force gauge failures, or allows dummy gauges to be used in places that are inaccessible.
NASA Technical Reports Server (NTRS)
Fulton, C. L.; Harris, R. L., Jr.
1980-01-01
Factors that can affect oculometer measurements of pupil diameter are: horizontal (azimuth) and vertical (elevation) viewing angle of the pilot; refraction of the eye and cornea; changes in distance of eye to camera; illumination intensity of light on the eye; counting sensitivity of the scan lines used to measure diameter; and output voltage. To estimate the accuracy of the measurements, an artificial eye was designed and a series of runs was performed with the oculometer system. When refraction effects are included, results show that pupil diameter is a parabolic function of the azimuth angle similar to the cosine function predicted by theory; this error can be accounted for by using a correction equation, reducing the error from 6% to 1.5% of the actual diameter. Elevation angle and illumination effects were found to be negligible. The effects of counting sensitivity and output voltage can be calculated directly from system documentation. The overall accuracy of the unmodified system is about 6%. After correcting for the azimuth angle errors, the overall accuracy is approximately 2%.
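The azimuth-angle correction described above can be sketched numerically. This is a minimal illustration, not the authors' actual procedure: the parabolic error model, the 0.8 coefficient, and the 5 mm pupil are assumed values.

```python
import numpy as np

def fit_azimuth_correction(azimuth_deg, measured_diam, true_diam):
    # Least-squares fit of the quadratic coefficient in the assumed
    # parabolic error model: measured = true + a * theta**2
    theta = np.radians(azimuth_deg)
    return np.sum(theta**2 * (measured_diam - true_diam)) / np.sum(theta**4)

def correct_diameter(measured_diam, azimuth_deg, a):
    # Remove the fitted parabolic azimuth-angle error
    theta = np.radians(azimuth_deg)
    return measured_diam - a * theta**2

# Synthetic calibration with an artificial eye: 5 mm pupil viewed at
# azimuth angles out to +/-30 degrees (all values illustrative)
az = np.linspace(-30.0, 30.0, 13)
true_d = 5.0
meas = true_d - 0.8 * np.radians(az)**2   # simulated parabolic shrinkage
a = fit_azimuth_correction(az, meas, true_d)
corrected = correct_diameter(meas, az, a)
```

Once the coefficient is calibrated against the artificial eye, the same correction can be applied to in-flight measurements at any azimuth angle.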
Estimation of the sampling interval error for LED measurement with a goniophotometer
NASA Astrophysics Data System (ADS)
Zhao, Weiqiang; Liu, Hui; Liu, Jian
2013-06-01
Using a goniophotometer to implement a total luminous flux measurement introduces an error from the sampling interval, especially for LED measurement. In this work, we use computer calculations to estimate the effect of the sampling interval on the measurement of the total luminous flux for four typical kinds of LEDs, whose spatial distributions of luminous intensity are similar to those of the LEDs shown in the CIE 127 paper. Four basic kinds of mathematical functions are selected to simulate the distribution curves. Both axially symmetric and non-axially symmetric LEDs are taken into account. We consider polar angle sampling intervals of 0.5°, 1°, 2°, and 5° in one rotation for the axially symmetric type, and azimuth angle sampling intervals of 18°, 15°, 12°, 10°, and 5° for the non-axially symmetric type. We note that the error is strongly related to the spatial distribution. However, for common LED light sources the calculation results show that a polar angle sampling interval of 2° and an azimuth angle sampling interval of 15° are recommended; the systematic error of the sampling interval for a goniophotometer can then be controlled at the level of 0.3%. For a higher precision level, a polar angle sampling interval of 1° and an azimuth angle sampling interval of 10° should be used.
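The effect of the polar-angle sampling interval can be reproduced in a few lines. This is a hedged sketch, not the paper's code: a Lambertian intensity distribution is assumed because its total flux has the closed form Φ = πI₀, so the sampling error can be computed exactly.

```python
import numpy as np

def total_flux_axisym(intensity, step_deg):
    # Phi = 2*pi * Integral[ I(theta) * sin(theta) dtheta ] over 0..90 deg,
    # evaluated with the trapezoid rule at the given polar sampling interval
    theta = np.radians(np.arange(0.0, 90.0 + step_deg / 2, step_deg))
    vals = intensity(theta) * np.sin(theta)
    return 2.0 * np.pi * np.sum((vals[1:] + vals[:-1]) * np.diff(theta)) / 2.0

I0 = 100.0                                # peak intensity in cd (illustrative)
lambertian = lambda t: I0 * np.cos(t)     # assumed distribution with known flux
exact = np.pi * I0                        # analytic total flux of a Lambertian

for step in (0.5, 1.0, 2.0, 5.0):
    err = abs(total_flux_axisym(lambertian, step) - exact) / exact
    print(f"{step:3.1f} deg polar sampling: relative error {err:.2e}")
```

For smooth distributions like this one the trapezoid error falls quadratically with the interval; sharply peaked distributions, as the abstract notes, are far more sensitive to the sampling choice.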
A Qualitative Analysis of Rater Behavior on an L2 Speaking Assessment
ERIC Educational Resources Information Center
Kim, Hyun Jung
2015-01-01
Human raters are normally involved in L2 performance assessment; as a result, rater behavior has been widely investigated to reduce rater effects on test scores and to provide validity arguments. Yet raters' cognition and use of rubrics in their actual rating have rarely been explored qualitatively in L2 speaking assessments. In this study three…
Rater Expertise in a Second Language Speaking Assessment: The Influence of Training and Experience
ERIC Educational Resources Information Center
Davis, Lawrence Edward
2012-01-01
Speaking performance tests typically employ raters to produce scores; accordingly, variability in raters' scoring decisions has important consequences for test reliability and validity. One such source of variability is the rater's level of expertise in scoring. Therefore, it is important to understand how raters' performance is influenced by…
Analysis of Measurement Error and Estimator Shape in Three-Point Hydraulic Gradient Estimators
NASA Astrophysics Data System (ADS)
McKenna, S. A.; Wahi, A. K.
2003-12-01
Three spatially separated measurements of head provide a means of estimating the magnitude and orientation of the hydraulic gradient. Previous work with three-point estimators has focused on the effect of the size (area) of the three-point estimator and measurement error on the final estimates of the gradient magnitude and orientation in laboratory and field studies (Mizell, 1980; Silliman and Frost, 1995; Silliman and Mantz, 2000; Ruskauff and Rumbaugh, 1996). However, a systematic analysis of the combined effects of measurement error, estimator shape and estimator orientation relative to the gradient orientation has not previously been conducted. Monte Carlo simulation with an underlying assumption of a homogeneous transmissivity field is used to examine the effects of uncorrelated measurement error on a series of eleven different three-point estimators having the same size but different shapes as a function of the orientation of the true gradient. Results show that the variance in the estimate of both the magnitude and the orientation increase linearly with the increase in measurement error in agreement with the results of stochastic theory for estimators that are small relative to the correlation length of transmissivity (Mizell, 1980). Three-point estimator shapes with base to height ratios between 0.5 and 5.0 provide accurate estimates of magnitude and orientation across all orientations of the true gradient. As an example, these results are applied to data collected from a monitoring network of 25 wells at the WIPP site during two different time periods. The simulation results are used to reduce the set of all possible combinations of three wells to those combinations with acceptable measurement errors relative to the amount of head drop across the estimator and base to height ratios between 0.5 and 5.0. These limitations reduce the set of all possible well combinations by 98 percent and show that size alone as defined by triangle area is not a valid
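A three-point gradient estimator of the kind analyzed above amounts to fitting a plane through three head measurements. The sketch below illustrates the principle under assumed well coordinates and a homogeneous, noise-free head field; it is not the authors' Monte Carlo code.

```python
import numpy as np

def three_point_gradient(xy, heads):
    # Fit the plane h = a*x + b*y + c through the three head measurements;
    # the hydraulic gradient vector is -(a, b) (flow is down-gradient)
    A = np.column_stack([xy[:, 0], xy[:, 1], np.ones(3)])
    a, b, c = np.linalg.solve(A, heads)
    grad = -np.array([a, b])
    magnitude = np.hypot(grad[0], grad[1])
    orientation = np.degrees(np.arctan2(grad[1], grad[0]))
    return magnitude, orientation

# Hypothetical wells: head falls 0.01 m per m toward +x (no measurement error)
xy = np.array([[0.0, 0.0], [100.0, 0.0], [50.0, 80.0]])
heads = 10.0 - 0.01 * xy[:, 0]
mag, ang = three_point_gradient(xy, heads)
```

A Monte Carlo study like the one described would perturb `heads` with measurement error and repeat this solve for estimators of different shapes and orientations.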
NASA Astrophysics Data System (ADS)
Xiang, Rong
2014-09-01
This study analyzes the measurement errors of three-dimensional coordinates of binocular stereo vision for tomatoes based on three stereo matching methods, centroid-based matching, area-based matching, and combination matching, to improve the localization accuracy of the binocular stereo vision system of tomato harvesting robots. Centroid-based matching was realized through the matching of the feature points of centroids of tomato regions. Area-based matching was realized based on the gray similarity between two neighborhoods of two pixels to be matched in stereo images. Combination matching was realized using the rough disparity acquired through centroid-based matching as the center of the dynamic disparity range used in area-based matching. After stereo matching, three-dimensional coordinates of tomatoes were acquired using the triangle range finding principle. Test results based on 225 stereo images of 3 tomatoes captured at distances from 300 to 1000 mm showed that the measurement errors of x coordinates were small and can meet the needs of harvesting robots. However, the measurement biases of y coordinates and depth values were large, and the measurement variation of depth values was also large. Therefore, the measurement biases of y coordinates and depth values, and the measurement variation of depth values, should be corrected in future research.
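The triangle range finding principle mentioned above, and the growth of depth error with distance that the test results describe, can be illustrated with the pinhole stereo model Z = fB/d. The focal length, baseline, and 0.5 px disparity error below are assumed values, not the paper's calibration.

```python
import numpy as np

def depth_from_disparity(f_px, baseline_mm, disparity_px):
    # Pinhole stereo triangulation: Z = f * B / d
    return f_px * baseline_mm / disparity_px

f_px, B = 800.0, 60.0            # focal length (px) and baseline (mm), assumed
for Z in (300.0, 1000.0):
    d = f_px * B / Z             # true disparity at this depth
    dZ = depth_from_disparity(f_px, B, d - 0.5) - Z   # 0.5 px disparity error
    print(f"Z = {Z:6.0f} mm: depth error {dZ:+.2f} mm")
```

Because Z is proportional to 1/d, a fixed disparity error produces a depth error that grows roughly as Z², consistent with the larger depth variation reported at the far end of the 300 to 1000 mm range.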
Kim, Seong-Gil; Kim, Myoung-Kwon
2015-09-01
[Purpose] The purpose of this study was to examine the intra- and inter-rater reliabilities of the Short Form Berg Balance Scale in institutionalized elderly people. [Subjects and Methods] A total of 30 elderly people in a nursing facility in Y city, South Korea, participated in this study. Two examiners administered the Short Form Berg Balance Scale to one subject to investigate inter-rater reliability. After a week, the same examiners administered the Short Form Berg Balance Scale once more to investigate intra-rater reliability. [Results] The intra-rater reliability was 0.83. The inter-rater reliability was 0.79. Both reliabilities were high (more than 0.7). [Conclusion] The Short Form Berg Balance Scale is a version of the Berg Balance Scale shortened by reducing the number of items, but its reliabilities were not lower than those of the Berg Balance Scale. The Short Form Berg Balance Scale can be useful clinically due to its short measurement time.
Precision Measurements of the Cluster Red Sequence using an Error Corrected Gaussian Mixture Model
Hao, Jiangang; Koester, Benjamin P.; Mckay, Timothy A.; Rykoff, Eli S.; Rozo, Eduardo; Evrard, August; Annis, James; Becker, Matthew; Busha, Michael; Gerdes, David; Johnston, David E.; /Northwestern U. /Brookhaven
2009-07-01
The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurements of the slope and scatter of the red sequence are affected both by the selection of red sequence galaxies and by measurement errors. In this paper, we describe a new error corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy populations in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment. These precise measurements serve as an important observational check on simulations and mock galaxy catalogs. The observed trends in the slope and scatter of the red sequence ridgeline with redshift are clues to possible intrinsic evolution of the cluster red-sequence itself. Most importantly, the methods presented in this work lay the groundwork for further improvements in optically-based cluster cosmology.
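The core idea of removing measurement error from an observed scatter can be illustrated in the simplest one-component case, where the intrinsic variance is the observed variance minus the mean error variance. This is a sketch of the deconvolution idea only, not the paper's error corrected Gaussian Mixture Model; the scatter and error values are invented.

```python
import numpy as np

def intrinsic_scatter(values, errors):
    # Observed variance = intrinsic variance + mean measurement-error variance,
    # so subtracting the error term deconvolves the measurement error
    var_obs = np.var(values, ddof=1)
    var_int = max(var_obs - np.mean(np.asarray(errors) ** 2), 0.0)
    return np.sqrt(var_int)

# Invented numbers: 0.05 mag intrinsic colour scatter, 0.04 mag photometric error
rng = np.random.default_rng(3)
true_scatter, err = 0.05, 0.04
colours = rng.normal(0.0, true_scatter, 5000) + rng.normal(0.0, err, 5000)
sigma_hat = intrinsic_scatter(colours, np.full(5000, err))
```

When the errors are comparable to the intrinsic scatter, as here, the naive sample standard deviation overestimates the ridgeline width substantially, which is why an error-corrected estimator matters.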
NASA Technical Reports Server (NTRS)
Flamant, Cyrille N.; Schwemmer, Geary K.; Korb, C. Laurence; Evans, Keith D.; Palm, Stephen P.
1999-01-01
Remote airborne measurements of the vertical and horizontal structure of the atmospheric pressure field in the lower troposphere are made with an oxygen differential absorption lidar (DIAL). A detailed analysis of this measurement technique is provided which includes corrections for imprecise knowledge of the detector background level, the oxygen absorption line parameters, and variations in the laser output energy. In addition, we analyze other possible sources of systematic errors including spectral effects related to aerosol and molecular scattering, interference by rotational Raman scattering, and interference by isotopic oxygen lines.
Digitally modulated bit error rate measurement system for microwave component evaluation
NASA Technical Reports Server (NTRS)
Shalkhauser, Mary Jo W.; Budinger, James M.
1989-01-01
The NASA Lewis Research Center has developed a unique capability for evaluation of the microwave components of a digital communication system. This digitally modulated bit-error-rate (BER) measurement system (DMBERMS) features a continuous data digital BER test set, a data processor, a serial minimum shift keying (SMSK) modem, noise generation, and computer automation. Application of the DMBERMS has provided useful information for the evaluation of existing microwave components and of design goals for future components. The design and applications of this system for digitally modulated BER measurements are discussed.
Error analysis of DIAL measurements of ozone by a Shuttle excimer lidar
NASA Technical Reports Server (NTRS)
Uchino, Osamu; Mccormick, M. Patrick; Mcmaster, Leonard R.; Swissler, Thomas J.
1986-01-01
Attention is given to an error analysis of DIAL measurements of stratospheric ozone from the Space Shuttle. It is shown that a transmitter system consisting of a KrF excimer laser pumping gas cells of H2 or D2 producing output wavelengths in the near UV is useful for the measurement of ozone in a 15-50-km altitude range. It is noted that for increased levels of stratospheric aerosols experienced after violent volcanic eruptions, the relative uncertainties of ozone densities will be large in the region below about 24 km.
NASA Astrophysics Data System (ADS)
Holler, Mirko; Raabe, Jörg
2015-05-01
The nonaxial interferometric position measurement of rotating objects can be performed by imaging the laser beam of the interferometer onto a rotating mirror, which can be a sphere or a cylinder. This, however, requires such rotating mirrors to be centered on the axis of rotation, as a wobble would result in loss of the interference signal. We present a tracking-type interferometer that performs such measurement in a general case where the rotating mirror may wobble on the axis of rotation, or even where the axis of rotation may be translating in space. Aside from tracking, meaning to measure and follow the position of the rotating mirror, the interferometric measurement errors induced by the tracking motion of the interferometer itself are optically compensated, preserving nanometric measurement accuracy. As an example, we show the application of this interferometer in a scanning x-ray tomography instrument.
ERIC Educational Resources Information Center
Woodruff, David; Traynor, Anne; Cui, Zhongmin; Fang, Yu
2013-01-01
Professional standards for educational testing recommend that both the overall standard error of measurement and the conditional standard error of measurement (CSEM) be computed on the score scale used to report scores to examinees. Several methods have been developed to compute scale score CSEMs. This paper compares three methods, based on…
ERIC Educational Resources Information Center
Worts, Diana; Sacker, Amanda; McDonough, Peggy
2010-01-01
This paper addresses a key methodological challenge in the modeling of individual poverty dynamics--the influence of measurement error. Taking the US and Britain as case studies and building on recent research that uses latent Markov models to reduce bias, we examine how measurement error can affect a range of important poverty estimates. Our data…
ERIC Educational Resources Information Center
Schochet, Peter Z.; Chiang, Hanley S.
2010-01-01
This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…
The effect of clock, media, and station location errors on Doppler measurement accuracy
NASA Technical Reports Server (NTRS)
Miller, J. K.
1993-01-01
Doppler tracking by the Deep Space Network (DSN) is the primary radio metric data type used by navigation to determine the orbit of a spacecraft. The accuracy normally attributed to orbits determined exclusively with Doppler data is about 0.5 microradians in geocentric angle. Recently, the Doppler measurement system has evolved to a high degree of precision primarily because of tracking at X-band frequencies (7.2 to 8.5 GHz). However, the orbit determination system has not been able to fully utilize this improved measurement accuracy because of calibration errors associated with transmission media, the location of tracking stations on the Earth's surface, the orientation of the Earth as an observing platform, and timekeeping. With the introduction of Global Positioning System (GPS) data, it may be possible to remove a significant error associated with the troposphere. In this article, the effect of various calibration errors associated with transmission media, Earth platform parameters, and clocks are examined. With the introduction of GPS calibrations, it is predicted that a Doppler tracking accuracy of 0.05 microradians is achievable.
NASA Astrophysics Data System (ADS)
Hu, X.; Prabhu, S.; Atamturktur, S.; Cogan, S.
2017-02-01
Model-based damage detection entails the calibration of damage-indicative parameters in a physics-based computer model of an undamaged structural system against measurements collected from its damaged counterpart. The approach relies on the premise that changes identified in the damage-indicative parameters during calibration reveal the structural damage in the system. In model-based damage detection, model calibration has traditionally been treated as a process, solely operating on the model output without incorporating available knowledge regarding the underlying mechanistic behavior of the structural system. In this paper, the authors propose a novel approach for model-based damage detection by implementing the Extended Constitutive Relation Error (ECRE), a method developed for error localization in finite element models. The ECRE method was originally conceived to identify discrepancies between experimental measurements and model predictions for a structure in a given healthy state. Implementing ECRE for damage detection leads to the evaluation of a structure in varying healthy states and determination of discrepancy between model predictions and experiments due to damage. The authors developed an ECRE-based damage detection procedure in which the model error and structural damage are identified in two distinct steps and demonstrate feasibility of the procedure in identifying the presence, location and relative severity of damage on a scaled two-story steel frame for damage scenarios of varying type and severity.
Evaluating Procedures for Reducing Measurement Error in Math Curriculum-Based Measurement Probes
ERIC Educational Resources Information Center
Methe, Scott A.; Briesch, Amy M.; Hulac, David
2015-01-01
At present, it is unclear whether math curriculum-based measurement (M-CBM) procedures provide a dependable measure of student progress in math computation because support for its technical properties is based largely upon a body of correlational research. Recent investigations into the dependability of M-CBM scores have found that evaluating…
Indirect measurement of machine tool motion axis error with single laser tracker
NASA Astrophysics Data System (ADS)
Wu, Zhaoyong; Li, Liangliang; Du, Zhengchun
2015-02-01
For high-precision machining, a convenient and accurate detection of motion error for machine tools is significant. Among common detection methods such as the ball-bar method, the laser tracker approach has received much more attention. As a high-accuracy measurement device, a laser tracker is capable of long-distance and dynamic measurement, which adds much flexibility during the measurement process. However, existing methods are not so satisfactory in measurement cost, operability, or applicability. Currently, a plausible method is the single-station and time-sharing method, but it needs a large working area all around the machine tool, making it unsuitable for machine tools surrounded by a protective cover. In this paper, a novel and convenient positioning error measurement approach utilizing a single laser tracker is proposed, followed by two corresponding mathematical models: a laser-tracker base-point-coordinate model and a target-mirror-coordinates model. Also, an auxiliary apparatus for the target mirrors to be placed on is designed, for which sensitivity analysis and Monte Carlo simulation are conducted to optimize the dimensions. Based on the proposed method, a real experiment using a single API TRACKER 3 assisted by the auxiliary apparatus is carried out, and a verification experiment using a traditional RENISHAW XL-80 interferometer is conducted under the same conditions for comparison. Both results demonstrate a great increase in the Y-axis positioning error of the machine tool. Theoretical and experimental studies together verify the feasibility of this method, which offers more convenient operation and wider application in various kinds of machine tools.
Isothermal calorimetry: impact of measurements error on heat of reaction and kinetic calculations.
Papadaki, Maria; Nawada, Hosadu P; Gao, Jun; Fergusson-Rees, Andrew; Smith, Michael
2007-04-11
Heat flow and power compensation calorimetry measures the power generation of a reaction via an energy balance over an appropriately designed isothermal reactor. However, the measurement of the power generated by a reaction is a relative measurement, and calibrations are used to eliminate the contribution of a number of unknown factors. In this work the effect of errors in the measurement of temperature, in the electric power used in the calibrations, and in the heat transfer coefficient and baseline is assessed. It has been shown that the error in all the aforementioned quantities reflects on the baseline and can have a very serious impact on the accuracy of the measurement. The influence of the fluctuation of ambient temperature has been evaluated and a correction that reduces its impact has been implemented. The temperature of the dosed material is affected by heat losses if the reaction is performed at high temperature and low dosing rate. An experimental methodology is presented that can provide a means of assessing the actual temperature of the dosed material. Depending on the reacting system, the heat of evaporation could be included in the baseline, especially if non-condensable gases are produced during the course of the reaction.
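The isothermal energy balance underlying heat-flow calorimetry, and how a calibration error propagates into the inferred reaction power, can be sketched as follows. The UA value, temperatures, and baseline are illustrative numbers, not data from the study.

```python
def reaction_power(UA, T_reactor, T_jacket, q_baseline):
    # Isothermal heat-flow balance: q_r = UA * (T_r - T_j) - q_baseline
    return UA * (T_reactor - T_jacket) - q_baseline

# Illustrative calibration: a 10 W electric heater raises (T_r - T_j) by 0.5 K
UA = 10.0 / 0.5                                     # 20 W/K
q = reaction_power(UA, 25.0, 24.2, q_baseline=2.0)  # about 14 W
# A 5% error in the calibration power propagates directly into UA and hence q
q_biased = reaction_power(UA * 1.05, 25.0, 24.2, q_baseline=2.0)
```

Because the measurement is relative, any bias in the calibration electric power or in the baseline appears one-for-one in the inferred reaction power, which is the sensitivity the abstract describes.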
Error analysis of Raman differential absorption lidar ozone measurements in ice clouds.
Reichardt, J
2000-11-20
A formalism for the error treatment of lidar ozone measurements with the Raman differential absorption lidar technique is presented. In the presence of clouds, wavelength-dependent multiple scattering and cloud-particle extinction are the main sources of systematic errors in ozone measurements and necessitate a correction of the measured ozone profiles. Model calculations are performed to describe the influence of cirrus and polar stratospheric clouds on the ozone measurement. It is found that it is sufficient to account for cloud-particle scattering and Rayleigh scattering in and above the cloud; boundary-layer aerosols and the atmospheric column below the cloud can be neglected for the ozone correction. Furthermore, if the extinction coefficient of the cloud is ≲0.1 km⁻¹, the effect in the cloud is proportional to the effective particle extinction and to a particle correction function determined in the limit of negligible molecular scattering. The particle correction function depends on the scattering behavior of the cloud particles, the cloud geometric structure, and the lidar system parameters. Because of the differential extinction of light that has undergone one or more small-angle scattering processes within the cloud, the cloud effect on ozone extends to altitudes above the cloud. The various influencing parameters imply that the particle-related ozone correction has to be calculated for each individual measurement. Examples of ozone measurements in cirrus clouds are discussed.
NASA Astrophysics Data System (ADS)
Tuck, David M.; Bierck, Barnes R.; Jaffé, Peter R.
1998-06-01
Multiphase flow in porous media is an important research topic. In situ, nondestructive experimental methods for studying multiphase flow are important for improving our understanding and the theory. Rapid changes in fluid saturation, characteristic of immiscible displacement, are difficult to measure accurately using gamma rays due to practical restrictions on source strength. Our objective is to describe a synchrotron radiation technique for rapid, nondestructive saturation measurements of multiple fluids in porous media, and to present a precision and accuracy analysis of the technique. Synchrotron radiation provides a high intensity, inherently collimated photon beam of tunable energy which can yield accurate measurements of fluid saturation in just one second. Measurements were obtained with a precision of ±0.01 or better for tetrachloroethylene (PCE) in a 2.5 cm thick glass-bead porous medium using a counting time of 1 s. The normal distribution was shown to provide acceptable confidence limits for PCE saturation changes. Sources of error include heat load on the monochromator, periodic movement of the source beam, and errors in the stepping-motor positioning system. Hypodermic needles pushed into the medium to inject PCE changed porosity in a region within approximately ±1 mm of the injection point. Improved mass balance between the known and measured PCE injection volumes was obtained when appropriate corrections were applied to calibration values near the injection point.
A study of GPS measurement errors due to noise and multipath interference for CGADS
NASA Technical Reports Server (NTRS)
Axelrad, Penina; MacDoran, Peter F.; Comp, Christopher J.
1996-01-01
This report describes a study performed by the Colorado Center for Astrodynamics Research (CCAR) on GPS measurement errors in the Codeless GPS Attitude Determination System (CGADS) due to noise and multipath interference. Preliminary simulation models of the CGADS receiver and orbital multipath are described. The standard FFT algorithm for processing the codeless data is described, and two alternative algorithms, an auto-regressive/least squares (AR-LS) method and a combined adaptive notch filter/least squares (ANF-ALS) method, are also presented. Effects of system noise, quantization, baseband frequency selection, and Doppler rates on the accuracy of phase estimates with each of the processing methods are shown. Typical electrical phase errors for the AR-LS method are 0.2 degrees, compared to 0.3 and 0.5 degrees for the FFT and ANF-ALS algorithms, respectively. Doppler rate was found to have the largest effect on the performance.
NASA Astrophysics Data System (ADS)
Cecinati, Francesca; Moreno Ródenas, Antonio Manuel; Rico-Ramirez, Miguel Angel; ten Veldhuis, Marie-claire; Han, Dawei
2016-04-01
In many research studies rain gauges are used as a reference point measurement for rainfall, because they can reach very good accuracy, especially compared to radar or microwave links, and their use is very widespread. In some applications rain gauge uncertainty is assumed to be small enough to be neglected. This can be done when rain gauges are accurate and their data are correctly managed. Unfortunately, in many operational networks the importance of accurate rainfall data and of data quality control can be underestimated; budget and best-practice knowledge can be limiting factors in correct rain gauge network management. In these cases, the accuracy of rain gauges can drastically drop and the uncertainty associated with the measurements cannot be neglected. This work proposes an approach based on three different kriging methods to integrate rain gauge measurement errors in the overall rainfall uncertainty estimation. In particular, rainfall products of different complexity are derived through 1) block kriging on a single rain gauge, 2) ordinary kriging on a network of different rain gauges, and 3) kriging with external drift to integrate all the available rain gauges with radar rainfall information. The study area is the Eindhoven catchment, contributing to the river Dommel, in the southern part of the Netherlands. The area, 590 km2, is covered by high quality rain gauge measurements by the Royal Netherlands Meteorological Institute (KNMI), which has one rain gauge inside the study area and six around it, and by lower quality rain gauge measurements by the Dommel Water Board and by the Eindhoven Municipality (six rain gauges in total). The integration of the rain gauge measurement error is accomplished in all cases by increasing the nugget of the semivariogram proportionally to the estimated error. Using different semivariogram models for the different networks allows for the separate characterisation of higher and lower quality rain gauges. For the kriging with
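The error-dependent nugget described above can be sketched with a small ordinary kriging example. This is a generic implementation in covariance form with an assumed exponential model and made-up gauge locations, not the study's semivariogram fits; each gauge's error variance is added to the diagonal of the kriging matrix, which down-weights the noisier gauge.

```python
import numpy as np

def ordinary_kriging(xy_obs, values, error_var, xy_target, sill=1.0, rng_km=10.0):
    # Covariance-form ordinary kriging; each gauge's measurement-error
    # variance is added to the diagonal (that gauge's "nugget")
    cov = lambda h: sill * np.exp(-h / rng_km)
    n = len(values)
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    C = cov(d)
    C[np.diag_indices(n)] += error_var
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = C
    A[n, n] = 0.0                      # Lagrange multiplier row/column
    b = np.ones(n + 1)
    b[:n] = cov(np.linalg.norm(xy_obs - xy_target, axis=1))
    w = np.linalg.solve(A, b)[:n]
    return w @ values, w

# Made-up gauges; the third one is noisier (larger error variance)
xy = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
vals = np.array([2.0, 3.0, 4.0])
est, w = ordinary_kriging(xy, vals, error_var=np.array([0.0, 0.0, 0.5]),
                          xy_target=np.array([2.0, 2.0]))
```

The Lagrange row forces the weights to sum to one, and the inflated diagonal entry of the noisy gauge reduces its influence on the interpolated value, which is exactly the effect the nugget inflation is meant to capture.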
Rabl, A.; Leide, B. ); Carvalho, M.J.; Collares-Pereira, M. ); Bourges, B.
1991-01-01
The Collector and System Testing Group (CSTG) of the European Community has developed a procedure for testing the performance of solar water heaters. This procedure treats a solar water heater as a black box with input-output parameters that are determined by all-day tests. In the present study the authors carry out a systematic analysis of the accuracy of this procedure, in order to answer the question: what tolerances should one impose for the measurements and how many days of testing should one demand under what meteorological conditions, in order to be able to quarantee a specified maximum error for the long term performance The methodology is applicable to other test procedures as well. The present paper (Part 1) examines the measurement tolerances of the current version of the procedure and derives a priori estimates of the errors of the parameters; these errors are then compared with the regression results of the Round Robin test series. The companion paper (Part 2) evaluates the consequences for the accuracy of the long term performance prediction. The authors conclude that the CSTG test procedure makes it possible to predict the long term performance with standard errors around 5% for sunny climates (10% for cloudy climates). The apparent precision of individual test sequences is deceptive because of large systematic discrepancies between different sequences. Better results could be obtained by imposing tighter control on the constancy of the cold water supply temperature and on the environment of the test, the latter by enforcing the recommendation for the ventilation of the collector.
Xue, Hongqi; Miao, Hongyu; Wu, Hulin
2010-01-01
This article considers estimation of constant and time-varying coefficients in nonlinear ordinary differential equation (ODE) models where analytic closed-form solutions are not available. The numerical solution-based nonlinear least squares (NLS) estimator is investigated in this study. A numerical algorithm such as the Runge-Kutta method is used to approximate the ODE solution. The asymptotic properties are established for the proposed estimators considering both numerical error and measurement error. The B-spline is used to approximate the time-varying coefficients, and the corresponding asymptotic theories in this case are investigated under the framework of the sieve approach. Our results show that if the maximum step size of the p-order numerical algorithm goes to zero at a rate faster than n^(-1/(p∧4)), the numerical error is negligible compared to the measurement error. This result provides a theoretical guidance in selection of the step size for numerical evaluations of ODEs. Moreover, we have shown that the numerical solution-based NLS estimator and the sieve NLS estimator are strongly consistent. The sieve estimator of constant parameters is asymptotically normal with the same asymptotic covariance as that of the case where the true ODE solution is exactly known, while the estimator of the time-varying parameter has the optimal convergence rate under some regularity conditions. The theoretical results are also developed for the case when the step size of the ODE numerical solver does not go to zero fast enough or the numerical error is comparable to the measurement error. We illustrate our approach with both simulation studies and clinical data on HIV viral dynamics.
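The numerical-solution-based NLS estimator discussed above can be sketched with standard tools: a Runge-Kutta ODE solver inside the residual function of a nonlinear least squares fit. Logistic growth is used here only because its closed-form solution lets the fit be checked; the noise level and starting values are assumed.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def fit_ode(t_obs, y_obs, y0, theta0):
    # Numerical-solution-based NLS: integrate the ODE with a Runge-Kutta
    # solver inside the residual function and fit the parameters
    def residuals(theta):
        r, K = theta
        sol = solve_ivp(lambda t, y: r * y * (1.0 - y / K),
                        (t_obs[0], t_obs[-1]), [y0], t_eval=t_obs,
                        rtol=1e-8, atol=1e-10)
        return sol.y[0] - y_obs
    return least_squares(residuals, theta0).x

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 40)
r_true, K_true, y0 = 0.8, 10.0, 0.5
y_exact = K_true / (1.0 + (K_true / y0 - 1.0) * np.exp(-r_true * t))
y_noisy = y_exact + rng.normal(0.0, 0.05, t.size)   # measurement error
r_hat, K_hat = fit_ode(t, y_noisy, 0.5, theta0=(0.5, 8.0))
```

Tightening `rtol`/`atol` controls the solver's step size, mirroring the paper's point that the numerical error must shrink fast enough to be negligible against the measurement error.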
Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas
2010-07-20
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
NASA Astrophysics Data System (ADS)
Du, Zhengchun; Zhu, Mengrui; Wu, Zhaoyong; Yang, Jianguo
2016-12-01
The uncertainty determination of geometrical feature measurements for coordinate measuring machines (CMMs) is an essential part of a reliable quality control process. However, the most commonly used methods for uncertainty assessment are difficult to apply and require not only a large number of repeated measurements but also rich operating experience. Based on error ellipse theory and the Monte Carlo simulation method, an uncertainty evaluation method for CMM measurements is presented. For circular features, the uncertainty evaluation model was established and extended to the measurement of the central distance between two holes through Monte Carlo simulation. A verification experiment of the new method was conducted; its results agreed reasonably well with those of the traditional method, which supports the validity of the proposed approach.
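The Monte Carlo part of this approach can be sketched minimally: perturb the measured hole centres with the coordinate-level error model and read the uncertainty off the resulting distance distribution. The nominal geometry and sigma below are toy numbers; the paper's error-ellipse model is more detailed.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

# Nominal hole centres (mm) and an isotropic per-coordinate measurement sigma (mm)
c1, c2, sigma = np.array([0.0, 0.0]), np.array([100.0, 0.0]), 0.002

# Propagate coordinate errors through the distance computation
p1 = c1 + rng.normal(0.0, sigma, size=(N, 2))
p2 = c2 + rng.normal(0.0, sigma, size=(N, 2))
d = np.linalg.norm(p2 - p1, axis=1)

u = d.std(ddof=1)   # Monte Carlo standard uncertainty of the central distance
print(d.mean(), u)  # mean ~100 mm; u ~ sqrt(2)*sigma for well-separated holes
```

For anisotropic (error-ellipse) models one would simply replace the isotropic normal draws with samples from the per-point covariance matrices.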
Measured and predicted root-mean-square errors in square and triangular antenna mesh facets
NASA Technical Reports Server (NTRS)
Fichter, W. B.
1989-01-01
Deflection shapes of square and equilateral triangular facets of two tricot-knit, gold-plated molybdenum wire mesh antenna materials were measured and compared, on the basis of root-mean-square (rms) differences, with deflection shapes predicted by linear membrane theory, for several cases of biaxial mesh tension. The two mesh materials contained approximately 10 and 16 holes per linear inch, measured diagonally with respect to the course and wale directions. The deflection measurement system employed a non-contact eddy current proximity probe and an electromagnetic distance sensing probe in conjunction with a precision optical level. Despite experimental uncertainties, rms differences between measured and predicted deflection shapes suggest the following conclusions: that replacing flat antenna facets with facets conforming to parabolically curved structural members yields smaller rms surface error; that potential accuracy gains are greater for equilateral triangular facets than for square facets; and that linear membrane theory can be a useful tool in the design of tricot-knit wire mesh antennas.
NASA Astrophysics Data System (ADS)
Ming, Aiguo; Kajitani, Makoto; Kanamori, Chisato; Ishikawa, Jiro
The characteristics of the angle transmission mechanism strongly influence the servo performance of robotic and mechatronic systems; in particular, a small backlash in the angle transmission mechanism is preferable. Recently, some new types of gear reducers with little or no backlash have been developed for robots. However, methods for measuring and evaluating the backlash of gear trains have received little attention beyond older methods that measure statically at only a few meshing positions of the gears. This paper proposes an overall performance testing method for angle transmission mechanisms in mechatronic systems. The method can measure the angle transmission error both clockwise and counterclockwise; in addition, the backlash can be measured continuously and automatically at all meshing positions. This system has been applied to the testing process in the production line of gear reducers for robots, and it has been effective in reducing the backlash of the gear trains.
McCrone, John T.
2016-01-01
With next-generation sequencing technologies, it is now feasible to efficiently sequence patient-derived virus populations at a depth of coverage sufficient to detect rare variants. However, each sequencing platform has characteristic error profiles, and sample collection, target amplification, and library preparation are additional processes whereby errors are introduced and propagated. Many studies account for these errors by using ad hoc quality thresholds and/or previously published statistical algorithms. Despite common usage, the majority of these approaches have not been validated under conditions that characterize many studies of intrahost diversity. Here, we use defined populations of influenza virus to mimic the diversity and titer typically found in patient-derived samples. We identified single-nucleotide variants using two commonly employed variant callers, DeepSNV and LoFreq. We found that the accuracy of these variant callers was lower than expected and exquisitely sensitive to the input titer. Small reductions in specificity had a significant impact on the number of minority variants identified and subsequent measures of diversity. We were able to increase the specificity of DeepSNV to >99.95% by applying an empirically validated set of quality thresholds. When applied to a set of influenza virus samples from a household-based cohort study, these changes resulted in a 10-fold reduction in measurements of viral diversity. We have made our sequence data and analysis code available so that others may improve on our work and use our data set to benchmark their own bioinformatics pipelines. Our work demonstrates that inadequate quality control and validation can lead to significant overestimation of intrahost diversity. IMPORTANCE Advances in sequencing technology have made it feasible to sequence patient-derived viral samples at a level sufficient for detection of rare mutations. These high-throughput, cost-effective methods are revolutionizing
Noise and measurement errors in a practical two-state quantum bit commitment protocol
NASA Astrophysics Data System (ADS)
Loura, Ricardo; Almeida, Álvaro J.; André, Paulo S.; Pinto, Armando N.; Mateus, Paulo; Paunković, Nikola
2014-05-01
We present a two-state practical quantum bit commitment protocol, the security of which is based on the current technological limitations, namely the nonexistence of either stable long-term quantum memories or nondemolition measurements. For an optical realization of the protocol, we model the errors, which occur due to the noise and equipment (source, fibers, and detectors) imperfections, accumulated during emission, transmission, and measurement of photons. The optical part is modeled as a combination of a depolarizing channel (white noise), unitary evolution (e.g., systematic rotation of the polarization axis of photons), and two other basis-dependent channels, namely the phase- and bit-flip channels. We analyze quantitatively the effects of noise using two common information-theoretic measures of probability distribution distinguishability: the fidelity and the relative entropy. In particular, we discuss the optimal cheating strategy and show that it is always advantageous for a cheating agent to add some amount of white noise—the particular effect not being present in standard quantum security protocols. We also analyze the protocol's security when the use of (im)perfect nondemolition measurements and noisy or bounded quantum memories is allowed. Finally, we discuss errors occurring due to a finite detector efficiency, dark counts, and imperfect single-photon sources, and we show that the effects are the same as those of standard quantum cryptography.
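The two distinguishability measures named in the abstract are standard for discrete probability distributions and are easy to compute; the example distributions below are our own, and note that some authors define fidelity without the final squaring.

```python
import numpy as np

def fidelity(p, q):
    # (Squared-Bhattacharyya) fidelity: F(p,q) = (sum_i sqrt(p_i q_i))^2
    return float(np.sum(np.sqrt(p * q)) ** 2)

def relative_entropy(p, q):
    # Kullback-Leibler divergence D(p||q) = sum_i p_i log(p_i/q_i), in nats
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p = np.array([0.5, 0.5])
q = np.array([0.7, 0.3])
print(fidelity(p, p), relative_entropy(p, p))   # 1.0 and 0.0 for identical distributions
print(fidelity(p, q), relative_entropy(p, q))   # <1 and >0 for distinguishable ones
```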
Probable errors in width distributions of sea ice leads measured along a transect
NASA Technical Reports Server (NTRS)
Key, J.; Peckham, S.
1991-01-01
The degree of error expected in the measurement of widths of sea ice leads along a single transect is examined in a probabilistic sense under assumed orientation and width distributions, where both isotropic and anisotropic lead orientations are examined. Methods are developed for estimating the distribution of 'actual' widths (measured perpendicular to the local lead orientation) knowing the 'apparent' width distribution (measured along the transect), and vice versa. The distribution of errors, defined as the difference between the actual and apparent lead width, can be estimated from the two width distributions, and all moments of this distribution can be determined. The problem is illustrated with Landsat imagery and the procedure is applied to a submarine sonar transect. Results are determined for a range of geometries, and indicate the importance of orientation information if data sampled along a transect are to be used for the description of lead geometries. While the application here is to sea ice leads, the methodology can be applied to measurements of any linear feature.
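The apparent-vs-actual relation at the heart of this problem is geometric: a lead of true (perpendicular) width w crossed at angle theta appears with width w/sin(theta) along the transect. A toy Monte Carlo makes the systematic overestimate visible; the exponential width distribution and isotropic orientations are our illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000

w_actual = rng.exponential(scale=50.0, size=N)     # true perpendicular widths, m
theta = rng.uniform(1e-3, np.pi / 2, size=N)       # lead-transect crossing angle
w_apparent = w_actual / np.sin(theta)              # width seen along the transect

err = w_apparent - w_actual                        # per-lead width error, always >= 0
print(np.median(w_actual), np.median(w_apparent))  # apparent widths are biased high
```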
Measurement error in epidemiologic studies of air pollution based on land-use regression models.
Basagaña, Xavier; Aguilera, Inmaculada; Rivera, Marcela; Agis, David; Foraster, Maria; Marrugat, Jaume; Elosua, Roberto; Künzli, Nino
2013-10-15
Land-use regression (LUR) models are increasingly used to estimate air pollution exposure in epidemiologic studies. These models use air pollution measurements taken at a small set of locations and modeling based on geographical covariates for which data are available at all study participant locations. The process of LUR model development commonly includes a variable selection procedure. When LUR model predictions are used as explanatory variables in a model for a health outcome, measurement error can lead to bias of the regression coefficients and to inflation of their variance. In previous studies dealing with spatial predictions of air pollution, bias was shown to be small while most of the effect of measurement error was on the variance. In this study, we show that in realistic cases where LUR models are applied to health data, bias in health-effect estimates can be substantial. This bias depends on the number of air pollution measurement sites, the number of available predictors for model selection, and the amount of explainable variability in the true exposure. These results should be taken into account when interpreting health effects from studies that used LUR models.
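The classical-error limit of this problem is easy to simulate: regressing a health outcome on an error-prone exposure proxy attenuates the slope by the reliability ratio lambda = var(x)/(var(x) + var(u)). The numbers below are illustrative only; the paper's point is precisely that LUR-based exposure error need not behave this benignly.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000
beta = 1.0

x = rng.normal(0.0, 1.0, n)          # true exposure
u = rng.normal(0.0, 0.5, n)          # exposure measurement error (proxy error)
y = beta * x + rng.normal(0.0, 1.0, n)

xhat = x + u                          # error-prone exposure used in the health model
lam = 1.0 / (1.0 + 0.5 ** 2)          # expected attenuation factor = 0.8

slope = np.polyfit(xhat, y, 1)[0]
print(slope, lam * beta)              # fitted slope is biased toward zero (~0.8, not 1.0)
```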
Hu, Pengcheng; Mao, Shuai; Tan, Jiu-Bin
2015-11-02
A measurement system with three degrees of freedom (3 DOF) that compensates for errors caused by incident beam drift is proposed. The system's measurement model (i.e. its mathematical foundation) is analyzed, and a measurement module (i.e. the designed orientation measurement unit) is developed and adopted to measure simultaneously straightness errors and the incident beam direction; thus, the errors due to incident beam drift can be compensated. The experimental results show that the proposed system has a deviation of 1 μm in the range of 200 mm for distance measurements, and a deviation of 1.3 μm in the range of 2 mm for straightness error measurements.
Errors in measuring blood gases in the intensive care unit: effect of delay in estimation.
Woolley, Andrew; Hickling, Keith
2003-03-01
Arterial blood gas measurement is subject to a number of potential sources of error. We investigated some of these in the intensive care unit (ICU). We audited samples for adequate volume and the presence of air and found that all samples were of adequate volume, but 40% contained bubbles or froth. We compared pulse oximeter estimations of oxygen saturation (SpO(2)) with laboratory estimates (SO(2)) from arterial blood samples, and found that there was less than a 5% chance of a difference of 5% or more. We audited the delay between sampling and processing and looked for errors arising as a result. We found that 4% of samples waited longer than 30 minutes to be analyzed in the laboratory, but that there was no correlation between delay and error in partial pressure of oxygen (PO(2)), carbon dioxide (PCO(2)), or SO(2). We performed a bench study to document the changes in PO(2) and PCO(2) over time with samples stored at room temperature and on ice. We found that samples in 1.5-mL PICO 70 syringes (Radiometer Medical A/S, Bronshoj, Denmark) were stable for PO(2) and SO(2) for up to 30 minutes either at room temperature or kept in iced water, and that changes after 60 minutes were small and unlikely to be clinically significant. PCO(2) showed a statistically significant increase after 20 minutes at room temperature, but the changes were not clinically significant.
Holsclaw, Tracy; Hallgren, Kevin A; Steyvers, Mark; Smyth, Padhraic; Atkins, David C
2015-12-01
Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased Type I and Type II error rates, yet these statistical issues are not widely known within SUD treatment research, or more generally, within psychotherapy coding research. Using minimally technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the nature in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in online supplemental materials.
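A minimal sketch of the core idea — negative binomial (NB2) regression for overdispersed behavior counts with session length entering as an exposure term rather than conflating the rate — fit by maximum likelihood with SciPy. The simulated data, variable names, and starting values are ours; the paper provides SPSS and R implementations.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(4)
n = 2000

length = rng.uniform(20.0, 60.0, n)      # session length (minutes), the exposure
x = rng.normal(size=n)                    # a coded-behavior predictor
b0, b1, alpha = -2.0, 0.5, 0.3            # true intercept, slope, NB dispersion

mu = np.exp(b0 + b1 * x) * length         # expected count scales with session length
y = rng.negative_binomial(1.0 / alpha, 1.0 / (1.0 + alpha * mu))

def negloglik(params):
    b0_, b1_, logalpha = params
    a = np.exp(logalpha)                  # keep dispersion positive
    m = np.exp(b0_ + b1_ * x) * length
    # NB2 log-likelihood
    ll = (gammaln(y + 1.0 / a) - gammaln(1.0 / a) - gammaln(y + 1.0)
          + (1.0 / a) * np.log(1.0 / (1.0 + a * m))
          + y * np.log(a * m / (1.0 + a * m)))
    return -ll.sum()

fit = minimize(negloglik, x0=np.array([-1.0, 0.0, -1.0]), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
print(fit.x[:2])   # near the true (-2.0, 0.5)
```

Observation weights (the "weighted" part) would multiply each observation's log-likelihood term before summing.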
Casas, Francisco J; Ortiz, David; Villa, Enrique; Cano, Juan L; Cagigas, Jaime; Pérez, Ana R; Aja, Beatriz; Terán, J Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo
2015-08-05
This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.
NASA Astrophysics Data System (ADS)
Higuchi, Masato; Vu, Thanh-Tung; Aketagawa, Masato
2016-11-01
The conventional method of measuring radial, axial and angular spindle motion is complicated and requires a large space; a smaller instrument is preferable for accurate, practical measurement. A method of measuring spindle error motion using sinusoidal phase modulation and a concentric circle grating was described in the past. In that method, a concentric circle grating with fine pitch is attached to the spindle. Three optical sensors are fixed under the grating and observe appropriate positions on it. Each optical sensor consists of a sinusoidally frequency-modulated semiconductor laser as the light source and two interferometers. One interferometer measures axial spindle motion by detecting the interference fringe between the beam reflected from a fixed mirror and the 0th-order diffracted beam. The other interferometer measures radial spindle motion by detecting the interference fringe between the ±2nd-order diffracted beams. With these optical sensors, three axial and three radial displacements of the grating can be measured, from which the axial, radial and angular spindle motions are calculated concurrently. In a previous experiment, concurrent measurement of one axial and one radial spindle displacement at 4 rpm was described. In this paper, sinusoidal frequency modulation realized by modulating the injection current is used instead of sinusoidal phase modulation, which simplifies the instrument. Furthermore, concurrent measurement of the 5-axis (1 axial, 2 radial and 2 angular displacements) spindle motion at 4000 rpm is described.
Technology Transfer Automated Retrieval System (TEKTRAN)
Error in rater estimates of plant disease severity occurs, and standard area diagrams (SADs) help improve accuracy and reliability. The effects of diagram number in a SAD set on accuracy and reliability are unknown. The objective of this study was to compare estimates of pecan scab severity made witho...
Read, Michael L; Morgan, Philip B; Maldonado-Codina, Carole
2009-11-01
This work sought to undertake a comprehensive investigation of the measurement errors associated with contact angle assessment of curved hydrogel contact lens surfaces. The contact angle coefficient of repeatability (COR) associated with three measurement conditions (image analysis COR, intralens COR, and interlens COR) was determined by measuring the contact angles (using both sessile drop and captive bubble methods) for three silicone hydrogel lenses (senofilcon A, balafilcon A, lotrafilcon A) and one conventional hydrogel lens (etafilcon A). Image analysis COR values were about 2°, whereas intralens COR values (95% confidence intervals) ranged from 4.0° (3.3°, 4.7°) (lotrafilcon A, captive bubble) to 10.2° (8.4°, 12.1°) (senofilcon A, sessile drop). Interlens COR values ranged from 4.5° (3.7°, 5.2°) (lotrafilcon A, captive bubble) to 16.5° (13.6°, 19.4°) (senofilcon A, sessile drop). Measurement error associated with image analysis was shown to be small as an absolute measure, although proportionally more significant for lenses with low contact angle. Sessile drop contact angles were typically less repeatable than captive bubble contact angles. For sessile drop measures, repeatability was poorer with the silicone hydrogel lenses when compared with the conventional hydrogel lens; this phenomenon was not observed for the captive bubble method, suggesting that methodological factors related to the sessile drop technique (such as surface dehydration and blotting) may play a role in the increased variability of contact angle measurements observed with silicone hydrogel contact lenses.
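The coefficient of repeatability used above can be computed from test-retest data as COR = 1.96 × √2 × Sw, where the within-subject standard deviation Sw is estimable from paired differences as √(mean(d²)/2). The contact-angle values below are invented for illustration.

```python
import numpy as np

trial1 = np.array([45.2, 50.1, 38.7, 61.0, 55.4, 47.9])   # first measurement, degrees
trial2 = np.array([47.0, 49.0, 41.2, 58.8, 57.1, 46.5])   # repeat measurement, degrees

d = trial1 - trial2
sw = np.sqrt(np.mean(d ** 2) / 2.0)     # within-subject standard deviation
cor = 1.96 * np.sqrt(2.0) * sw          # ~2.77 * Sw: 95% limit for a repeat difference
print(round(sw, 2), round(cor, 2))
```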
Hindasageri, V; Vedula, R P; Prabhu, S V
2013-02-01
Temperature measurement by thermocouples is prone to errors due to conduction and radiation losses and therefore has to be corrected for precise measurement. The temperature dependent emissivity of the thermocouple wires is measured by the use of thermal infrared camera. The measured emissivities are found to be 20%-40% lower than the theoretical values predicted from theory of electromagnetism. A transient technique is employed for finding the heat transfer coefficients for the lead wire and the bead of the thermocouple. This method does not require the data of thermal properties and velocity of the burnt gases. The heat transfer coefficients obtained from the present method have an average deviation of 20% from the available heat transfer correlations in literature for non-reacting convective flow over cylinders and spheres. The parametric study of thermocouple error using the numerical code confirmed the existence of a minimum wire length beyond which the conduction loss is a constant minimal. Temperature of premixed methane-air flames stabilised on 16 mm diameter tube burner is measured by three B-type thermocouples of wire diameters: 0.15 mm, 0.30 mm, and 0.60 mm. The measurements are made at three distances from the burner tip (thermocouple tip to burner tip/burner diameter = 2, 4, and 6) at an equivalence ratio of 1 for the tube Reynolds number varying from 1000 to 2200. These measured flame temperatures are corrected by the present numerical procedure, the multi-element method, and the extrapolation method. The flame temperatures estimated by the two-element method and extrapolation method deviate from numerical results within 2.5% and 4%, respectively.
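The simplest single-element version of such a correction balances convective heat gain against radiative loss from the bead, neglecting conduction along the leads: h(T_gas − T_bead) = εσ(T_bead⁴ − T_surr⁴). The property values below are illustrative assumptions, not the paper's measurements.

```python
# Stefan-Boltzmann constant, W m^-2 K^-4
SIGMA = 5.670e-8

def radiation_corrected_gas_temp(T_bead, T_surr, emissivity, h):
    """Gas temperature implied by the bead reading (all temperatures in K)."""
    return T_bead + emissivity * SIGMA * (T_bead ** 4 - T_surr ** 4) / h

# A bead reading 1900 K near 600 K surroundings, h ~ 500 W/m^2K, eps ~ 0.2 (assumed)
T_gas = radiation_corrected_gas_temp(1900.0, 600.0, 0.2, 500.0)
print(round(T_gas, 1))   # the inferred gas temperature exceeds the bead reading
```

The measured emissivities and fitted heat transfer coefficients in the paper feed directly into the εσ/h factor, which is why errors in either propagate into the corrected flame temperature.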
Measuring Effect Sizes: The Effect of Measurement Error. Working Paper 19
ERIC Educational Resources Information Center
Boyd, Donald; Grossman, Pamela; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James
2008-01-01
Value-added models in education research allow researchers to explore how a wide variety of policies and measured school inputs affect the academic performance of students. Researchers typically quantify the impacts of such interventions in terms of "effect sizes", i.e., the estimated effect of a one standard deviation change in the…
Overview of Measuring Effect Sizes: The Effect of Measurement Error. Brief 2
ERIC Educational Resources Information Center
Boyd, Don; Grossman, Pam; Lankford, Hamp; Loeb, Susanna; Wyckoff, Jim
2008-01-01
The use of value-added models in education research has expanded rapidly. These models allow researchers to explore how a wide variety of policies and measured school inputs affect the academic performance of students. An important question is whether such effects are sufficiently large to achieve various policy goals. Judging whether a change in…
Arakawa, H.; Kawano, Y.; Itami, K.
2012-10-15
A new method for the comparative verification of electron density measurements obtained with a tangential interferometer and a polarimeter during a discharge is proposed. The possible errors associated with the interferometer and polarimeter are classified by the time required for their identification. Based on the characteristics of the errors, the fringe shift error of the interferometer and the low-frequency noise of the polarimeter were identified and corrected for the JT-60U tangential interferometer/polarimeter system.
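A fringe-shift error appears as a sudden integer-fringe (2π) discontinuity in an otherwise smooth phase record, and stitching such jumps is a standard phase-unwrapping operation. The snippet below is a generic illustration of that correction idea, not the JT-60U system's actual pipeline.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 200)
true_phase = 2.0 * np.sin(2.0 * np.pi * t)    # smooth line-density phase signal (rad)

measured = true_phase.copy()
measured[120:] -= 2.0 * np.pi                 # a fringe jump occurs at sample 120

corrected = np.unwrap(measured)               # removes the 2*pi discontinuity
print(np.max(np.abs(corrected - true_phase))) # jump repaired to floating-point precision
```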
First measurements of error fields on W7-X using flux surface mapping
NASA Astrophysics Data System (ADS)
Lazerson, Samuel A.; Otte, Matthias; Bozhenkov, Sergey; Biedermann, Christoph; Pedersen, Thomas Sunn; the W7-X Team
2016-10-01
Error fields have been detected and quantified using the flux surface mapping diagnostic system on Wendelstein 7-X (W7-X). A low-field ‘ῑ = 1/2’ magnetic configuration (ῑ = ι/2π), sensitive to error fields, was developed in order to detect their presence using the flux surface mapping diagnostic. In this configuration, a vacuum flux surface with rotational transform of n/m = 1/2 is created at the mid-radius of the vacuum flux surfaces. If no error fields are present, a vanishingly small n/m = 5/10 island chain should be present. Modeling indicates that if an n = 1 perturbing field is applied by the trim coils, a large n/m = 1/2 island chain will be opened. This island chain is used to create a perturbation large enough to be imaged by the diagnostic. Phase and amplitude scans of the applied field allow the measurement of a small ∼0.04 m intrinsic island chain with a 130° phase relative to the first module of the W7-X experiment. These error fields are determined to be small and easily correctable by the trim coil system. Notice: This manuscript has been authored by Princeton University under Contract Number DE-AC02-09CH11466 with the U.S. Department of Energy. The publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
Steele, Vaughn R; Anderson, Nathaniel E; Claus, Eric D; Bernat, Edward M; Rao, Vikram; Assaf, Michal; Pearlson, Godfrey D; Calhoun, Vince D; Kiehl, Kent A
2016-05-15
Error-related brain activity has become an increasingly important focus of cognitive neuroscience research utilizing both event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI). Given the significant time and resources required to collect these data, it is important for researchers to plan their experiments such that stable estimates of error-related processes can be achieved efficiently. Reliability of error-related brain measures will vary as a function of the number of error trials and the number of participants included in the averages. Unfortunately, systematic investigations of the number of events and participants required to achieve stability in error-related processing are sparse, and none have addressed variability in sample size. Our goal here is to provide data compiled from a large sample of healthy participants (n=180) performing a Go/NoGo task, resampled iteratively to demonstrate the relative stability of measures of error-related brain activity given a range of sample sizes and event numbers included in the averages. We examine ERP measures of error-related negativity (ERN/Ne) and error positivity (Pe), as well as event-related fMRI measures locked to False Alarms. We find that achieving stable estimates of ERP measures required four to six error trials and approximately 30 participants; fMRI measures required six to eight trials and approximately 40 participants. Fewer trials and participants were required for measures where additional data reduction techniques (i.e., principal component analysis and independent component analysis) were implemented. Ranges of reliability statistics for various sample sizes and numbers of trials are provided. We intend this to be a useful resource for those planning or evaluating ERP or fMRI investigations with tasks designed to measure error-processing.
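The stability question the paper addresses can be illustrated with a toy resampling scheme: draw averages of n error trials repeatedly from a pool of simulated single-trial amplitudes and watch the variability of the estimate shrink with n. The amplitude distribution below is invented; real ERN data are noisier and structured.

```python
import numpy as np

rng = np.random.default_rng(5)
# Simulated single-trial "ERN" amplitudes (uV) for one participant
trials = rng.normal(loc=-5.0, scale=8.0, size=500)

def mean_stability(n_trials, n_iter=2000):
    # SD of the n-trial average across random resamples of the trial pool
    means = [rng.choice(trials, size=n_trials, replace=False).mean()
             for _ in range(n_iter)]
    return float(np.std(means))

print(mean_stability(2), mean_stability(6), mean_stability(30))
# variability of the subject-level estimate drops roughly as 1/sqrt(n_trials)
```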
Error field measurement, correction and heat flux balancing on Wendelstein 7-X
NASA Astrophysics Data System (ADS)
Lazerson, Samuel A.; Otte, Matthias; Jakubowski, Marcin; Israeli, Ben; Wurden, Glen A.; Wenzel, Uwe; Andreeva, Tamara; Bozhenkov, Sergey; Biedermann, Christoph; Kocsis, Gábor; Szepesi, Tamás; Geiger, Joachim; Pedersen, Thomas Sunn; Gates, David; The W7-X Team
2017-04-01
The measurement and correction of error fields in Wendelstein 7-X (W7-X) is critical to long pulse high beta operation, as small error fields may cause overloading of divertor plates in some configurations. Accordingly, as part of a broad collaborative effort, the detection and correction of error fields on the W7-X experiment has been performed using the trim coil system in conjunction with the flux surface mapping diagnostic and high resolution infrared camera. In the early commissioning phase of the experiment, the trim coils were used to open an n/m = 1/2 island chain in a specially designed magnetic configuration. The flux surface mapping diagnostic was then able to directly image the magnetic topology of the experiment, allowing the inference of a small ∼4 cm intrinsic island chain. The suspected main sources of the error field, slight misalignment and deformations of the superconducting coils, are then confirmed through experimental modeling using the detailed measurements of the coil positions. Observations of the limiter temperatures in module 5 show a clear dependence of the limiter heat flux pattern as the perturbing fields are rotated. Plasma experiments without applied correcting fields show a significant asymmetry in neutral pressure (centered in module 4) and light emission (visible, H-alpha, CII, and CIII). Such pressure asymmetry is associated with plasma-wall (limiter) interaction asymmetries between the modules. Application of trim coil fields with an n = 1 waveform corrects the imbalance. Confirmation of the error fields allows the assessment of magnetic fields which resonate with the n/m = 5/5 island chain.
Casas-Cordero, C.; Kreuter, F.; Wang, Y.; Babey, S.
2013-01-01
Interviewer observations made during the process of data collection are currently used to inform responsive design decisions, to expand the set of covariates for nonresponse adjustments, to explain participation in surveys, and to assess nonresponse bias. However, little effort has been made to assess the quality of such interviewer observations. Using data from the Los Angeles Family and Neighbourhood Survey (L.A.FANS), this paper examines measurement error properties of interviewer observations of neighbourhood characteristics. Block level and interviewer covariates are used in multilevel models to explain interviewer variation in the observations of neighbourhood features. PMID:24159255
Research on photoelectric test and measurement for form and position error
NASA Astrophysics Data System (ADS)
Xie, Jinsong
2002-09-01
Structure and principles of a photoelectric test and measurement system for form and position error are described. A special optical system exploiting laser beam characteristics was designed to ensure uniformity of the scanning speed. To meet the system requirements, a precision mechanical system, a servo-control system, and a computing and data-processing system were designed. As a result, high-speed, high-efficiency, high-precision non-contact automatic testing is realized, promoting the development of advanced manufacturing technology.
Measurement updating using the U-D factorization. [for Kalman matrix filter error covariance
NASA Technical Reports Server (NTRS)
Bierman, G. J.
1975-01-01
A new mechanization of the Kalman updating algorithm based on a U-D factorization of the estimate error covariance is introduced. Efficient and stable updating recursions are developed for the unit upper triangular factor U and the diagonal factor D, treating only the parameter estimation problem. Properties of the factorization update performed here include efficient one point at a time processing that requires little more computation than the optimal but numerically unstable conventional Kalman measurement update algorithm, and stability that compares with the square root filter.
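Bierman's U-D measurement update for a scalar observation z = h·x + v, var(v) = r, can be sketched as below, written from the standard textbook pseudocode (verify against your own reference before reuse). It updates the factors of P = U·diag(d)·Uᵀ one observation at a time without ever forming P.

```python
import numpy as np

def ud_measurement_update(U, d, h, r):
    """Update unit-upper-triangular U and diagonal d (P = U diag(d) U^T).

    Returns the updated (U, d) and the Kalman gain K for a scalar
    measurement with row vector h and noise variance r.
    """
    n = d.size
    U = U.copy(); d = d.copy()
    f = U.T @ h                  # f = U^T h
    b = d * f                    # b = D f (accumulates into the unscaled gain)
    alpha = r
    for j in range(n):
        beta = alpha
        alpha = alpha + f[j] * b[j]
        lam = -f[j] / beta
        d[j] = d[j] * beta / alpha
        for i in range(j):
            Uij = U[i, j]
            U[i, j] = Uij + b[i] * lam
            b[i] = b[i] + b[j] * Uij
    return U, d, b / alpha

# Quick self-check against the conventional covariance update
rng = np.random.default_rng(6)
n = 4
U0 = np.triu(rng.normal(size=(n, n)), k=1) + np.eye(n)
d0 = rng.uniform(0.5, 2.0, n)
h = rng.normal(size=n)
r = 0.7

P0 = U0 @ np.diag(d0) @ U0.T
U1, d1, K = ud_measurement_update(U0, d0, h, r)
P1_ud = U1 @ np.diag(d1) @ U1.T
P1_ref = P0 - np.outer(P0 @ h, P0 @ h) / (h @ P0 @ h + r)
print(np.max(np.abs(P1_ud - P1_ref)))   # agrees to floating-point precision
```

The per-element scaling d[j] *= beta/alpha is what gives the factored form its numerical stability advantage over the conventional P-update, which can lose positive definiteness in finite precision.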
Assessment of sources of error in Background Oriented Schlieren (BOS) measurements
NASA Astrophysics Data System (ADS)
Rajendran, Lalit; Singh, Bhavini; Giarra, Matthew; Bane, Sally; Vlachos, Pavlos
2016-11-01
Background Oriented Schlieren (BOS) is used to measure density gradients in a flow by tracking the apparent distortion of a target dot pattern. The quality of a BOS measurement depends on several factors such as the dot pattern, illumination, density gradients, optical system, cross-correlation algorithms and density reconstruction. To understand their contributions to the final error in the measurement and to develop an optimal set of design rules, we generate high fidelity synthetic images using ray tracing simulations. Past studies use ad-hoc models (or none) for simulating these effects and do not represent the issues introduced in a typical BOS setup, thereby limiting their utility. We have developed and implemented an image generation methodology based on ray tracing, where light rays emitted from a dot pattern are traced through the experimental setup including the density gradients, to generate high fidelity images representative of a real experiment. We apply this methodology to perform a comprehensive analysis of the various sources of error in the BOS technique and to better understand the issues involved in designing a successful experiment. The results of this study can guide future experiments and provide directions to improve the image analysis tools.
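As background (not stated in the abstract itself), the BOS principle is commonly summarized by a small-angle relation between the apparent dot displacement and the line-of-sight integrated refractive-index gradient, with the Gladstone-Dale relation linking refractive index to density; writing \(Z_D\) for the distance from the dot pattern to the density field and \(n_0\) for the ambient refractive index:

```latex
\Delta y \;\approx\; \frac{Z_D}{n_0} \int \frac{\partial n}{\partial y}\, \mathrm{d}z,
\qquad n - 1 = K_{GD}\,\rho ,
```

where \(K_{GD}\) is the Gladstone-Dale constant and \(\rho\) the gas density; the ray-tracing simulations described above replace this idealized relation with a full optical model of the experiment.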
NASA Astrophysics Data System (ADS)
Branciard, Cyril
2014-02-01
The quantification of the "measurement uncertainty" aspect of Heisenberg's uncertainty principle—that is, the study of trade-offs between accuracy and disturbance, or between accuracies in an approximate joint measurement on two incompatible observables—has regained a lot of interest recently. Several approaches have been proposed and debated. In this paper we consider Ozawa's definitions for inaccuracies (as root-mean-square errors) in approximate joint measurements, and study how these are constrained in different cases, whether one specifies certain properties of the approximations—namely their standard deviations and/or their bias—or not. Extending our previous work [C. Branciard, Proc. Natl. Acad. Sci. USA 110, 6742 (2013), 10.1073/pnas.1219331110], we derive error-trade-off relations, which we prove to be tight for pure states. We show explicitly how all previously known relations for Ozawa's inaccuracies follow from ours. While our relations are in general not tight for mixed states, we show how these can be strengthened and how tight relations can still be obtained in that case.
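For context, the best-known of the previously derived relations for Ozawa's inaccuracies (which the tight relations above strengthen) is Ozawa's universally valid error-disturbance relation: for root-mean-square error \(\varepsilon(A)\), disturbance \(\eta(B)\), and standard deviations \(\sigma(A)\), \(\sigma(B)\),

```latex
\varepsilon(A)\,\eta(B) + \varepsilon(A)\,\sigma(B) + \sigma(A)\,\eta(B)
\;\ge\; \tfrac{1}{2}\,\bigl|\langle [A,B] \rangle\bigr| ,
```

which, unlike the naive Heisenberg-type bound \(\varepsilon(A)\eta(B) \ge \tfrac{1}{2}|\langle [A,B]\rangle|\), holds for all states and all measurement schemes.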
Low-Frequency Error Extraction and Compensation for Attitude Measurements from STECE Star Tracker.
Lai, Yuwang; Gu, Defeng; Liu, Junhong; Li, Wenping; Yi, Dongyun
2016-10-12
The low-frequency errors (LFE) of star trackers are the most penalizing errors for high-accuracy satellite attitude determination. Two test star trackers have been mounted on the Space Technology Experiment and Climate Exploration (STECE) satellite, a small satellite mission developed by China. To extract and compensate the LFE of the attitude measurements for the two test star trackers, a new approach, Fourier analysis combined with the Vondrak filter method (FAVF), is proposed in this paper. Firstly, the LFE of the two test star trackers' attitude measurements are analyzed and extracted by the FAVF method. A remarkable orbital reproducibility feature is found in both test star trackers' attitude measurements. Then, by using the reproducibility feature of the LFE, the two star trackers' LFE patterns are estimated effectively. Finally, based on the actual LFE pattern results, this paper presents a new LFE compensation strategy. The validity and effectiveness of the proposed LFE compensation algorithm are demonstrated by the significant improvement in the consistency between the two test star trackers. The root-mean-square (RMS) values of the relative Euler angle residuals are reduced from [27.95'', 25.14'', 82.43''], 3σ to [16.12'', 15.89'', 53.27''], 3σ.
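The abstract does not give the FAVF algorithm itself, but the core idea of exploiting orbital reproducibility can be illustrated with a simplified sketch: keep only the Fourier components of an attitude residual near harmonics of the orbital frequency. The function name and parameters here are illustrative, not from the paper:

```python
import numpy as np

def extract_lfe(residual, dt, f_orbit, n_harmonics=3, band=2e-5):
    """Illustrative low-frequency-error extraction: retain only Fourier
    components within `band` Hz of the first `n_harmonics` harmonics of
    the orbital frequency, and zero everything else."""
    n = residual.size
    spec = np.fft.rfft(residual)
    freqs = np.fft.rfftfreq(n, dt)
    mask = np.zeros(freqs.size, dtype=bool)
    for k in range(1, n_harmonics + 1):
        mask |= np.abs(freqs - k * f_orbit) <= band
    spec[~mask] = 0.0                  # discard non-orbital components
    return np.fft.irfft(spec, n)       # periodic LFE estimate
```

Subtracting the extracted pattern from the raw residual is then the (simplified) analogue of the compensation step; the actual FAVF method additionally uses the Vondrak filter to smooth the extracted pattern.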
Emission Flux Measurement Error with a Mobile DOAS System and Application to NOx Flux Observations.
Wu, Fengcheng; Li, Ang; Xie, Pinhua; Chen, Hao; Hu, Zhaokun; Zhang, Qiong; Liu, Jianguo; Liu, Wenqing
2017-01-25
Mobile differential optical absorption spectroscopy (mobile DOAS) is an optical remote sensing method that can rapidly measure trace gas emission fluxes from air pollution sources (such as power plants, industrial areas, and cities) in real time. Mobile DOAS flux observations are generally influenced by the wind field, drive velocity, and other factors, with the choice of wind field being especially important. This paper presents a detailed error analysis and NOx emission-flux observations made with a mobile DOAS system at a power plant in Shijiazhuang city, China. Comparison of SO₂ emission fluxes from mobile DOAS observations with a continuous emission monitoring system (CEMS) under different drive speeds and wind fields revealed that the optimal drive velocity is 30-40 km/h and that the wind field at plume height should be used when mobile DOAS observations are performed. In addition, combining the uncertainties of column density, wind field, and drive velocity, the total errors of the SO₂ and NO₂ emission fluxes from mobile DOAS measurements are 32% and 30%, respectively. Furthermore, the NOx emission from the power plant is estimated at 0.15 ± 0.06 kg/s, in good agreement with the CEMS observation of 0.17 ± 0.07 kg/s. This study supports the use of mobile DOAS for measuring emissions from air pollution sources and improves the accuracy of flux estimation.
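The abstract does not spell out the flux computation, but the standard mobile-DOAS flux estimate integrates the measured column densities along the drive path, weighted by the wind component perpendicular to the path. A minimal sketch, with illustrative names and units (the paper's actual processing, including light-dilution and wind-field corrections, is more involved):

```python
import math

def doas_flux(columns, seg_lengths, wind_speed, angles_deg):
    """Illustrative mobile-DOAS emission flux.

    columns     -- vertical column densities per path segment (e.g. kg/m^2)
    seg_lengths -- driven length of each segment (m)
    wind_speed  -- wind speed at plume height (m/s)
    angles_deg  -- angle between wind direction and drive direction (deg)

    Each segment contributes column * length * perpendicular wind component.
    """
    flux = 0.0
    for vcd, ds, theta in zip(columns, seg_lengths, angles_deg):
        flux += vcd * ds * wind_speed * abs(math.sin(math.radians(theta)))
    return flux
```

The sin(θ) weighting is why the wind field at plume height matters so much in the error budget above: an error in wind direction changes the effective perpendicular component for every segment of the traverse.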