ERIC Educational Resources Information Center
Kachchaf, Rachel; Solano-Flores, Guillermo
2012-01-01
We examined how rater language background affects the scoring of short-answer, open-ended test items in the assessment of English language learners (ELLs). Four native English and four native Spanish-speaking certified bilingual teachers scored 107 responses of fourth- and fifth-grade Spanish-speaking ELLs to mathematics items administered in…
How Good Are Our Raters? Rater Errors in Clinical Skills Assessment
ERIC Educational Resources Information Center
Iramaneerat, Cherdsak; Yudkowsky, Rachel
2006-01-01
A multi-faceted Rasch measurement (MFRM) model was used to analyze a clinical skills assessment of 173 fourth-year medical students in a Midwestern medical school to investigate four types of rater errors: leniency, inconsistency, halo, and restriction of range. Each student performed six clinical tasks with six standardized patients (SPs), who…
Beltran-Alacreu, Hector; López-de-Uralde-Villanueva, Ibai; Paris-Alemany, Alba; Angulo-Díaz-Parreño, Santiago; La Touche, Roy
2014-01-01
[Purpose] The aim of this study was to determine the inter-rater and intra-rater reliability of the mandibular range of motion (ROM) considering the neutral craniocervical position when performing the measurements. [Subjects and Methods] The sample consisted of 50 asymptomatic subjects. Two raters measured four mandibular ROMs (maximal mouth opening (MMO), lateral excursions, and protrusion) using the craniomandibular scale. Subjects alternated between raters, receiving two complete trials per day, two days apart. Intra- and inter-rater reliability was determined using intra-class correlation coefficients (ICCs). Bland-Altman analysis was used to assess reliability, bias, and variability. Finally, the standard error of measurement (SEM) and minimal detectable change (MDC) were analyzed to measure responsiveness. [Results] Reliability was good for MMO (inter-rater, ICC = 0.95-0.96; intra-rater, ICC = 0.95-0.96) and for protrusion (inter-rater, ICC = 0.92-0.94; intra-rater, ICC = 0.93-0.96). Reliability was moderate for lateral excursions. The MMO and protrusion SEM ranged from 0.74 to 0.82 mm and from 0.29 to 0.49 mm, while the MDCs ranged from 1.73 to 1.91 mm and from 0.69 to 0.14 mm respectively. The analysis showed no random or systematic error, suggesting that a learning effect did not affect reliability. [Conclusion] A standardized protocol for assessment of mandibular ROM in a neutral craniocervical position obtained good inter- and intra-rater reliability for MMO and protrusion and moderate inter- and intra-rater reliability for lateral excursions. PMID:25013296
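The SEM and MDC statistics reported in reliability studies like this one follow standard formulas: SEM = SD·√(1 − ICC) and MDC95 = 1.96·√2·SEM. A minimal Python sketch; the input values below are illustrative only and are not taken from the study above:

```python
import math

def sem(sd: float, icc: float) -> float:
    """Standard error of measurement: SD * sqrt(1 - ICC)."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sem_value: float) -> float:
    """Minimal detectable change at 95% confidence: 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2.0) * sem_value

# Illustrative numbers (NOT taken from the study above)
sd_between_trials, icc = 3.7, 0.95
s = sem(sd_between_trials, icc)
m = mdc95(s)
```

The MDC is always larger than the SEM by the factor 1.96·√2 ≈ 2.77, which is why MDC values are the more conservative threshold for judging real change in an individual patient.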
Examining rating quality in writing assessment: rater agreement, error, and accuracy.
Wind, Stefanie A; Engelhard, George
2012-01-01
The use of performance assessments in which human raters evaluate student achievement has become increasingly prevalent in high-stakes assessment systems such as those associated with recent policy initiatives (e.g., Race to the Top). In this study, indices of rating quality are compared between two measurement perspectives. Within the context of a large-scale writing assessment, this study focuses on the alignment between indices of rater agreement, error, and accuracy based on traditional and Rasch measurement theory perspectives. Major empirical findings suggest that Rasch-based indices of model-data fit for ratings provide information about raters that is comparable to direct measures of accuracy. The use of easily obtained approximations of direct accuracy measures holds significant implications for monitoring rating quality in large-scale rater-mediated performance assessments.
Do Raters Demonstrate Halo Error When Scoring a Series of Responses?
ERIC Educational Resources Information Center
Ridge, Kirk
This study investigated whether raters in two different training groups would demonstrate halo error when each rater scored all five responses to five different mathematics performance-based items from each student. One group of 20 raters was trained by an experienced scoring director with item-specific scoring rubrics and the opportunity to…
ERIC Educational Resources Information Center
Sheehan, Dwayne P.; Lafave, Mark R.; Katz, Larry
2011-01-01
This study was designed to test the intra- and inter-rater reliability of the University of North Carolina's Balance Error Scoring System in 9- and 10-year-old children. Additionally, a modified version of the Balance Error Scoring System was tested to determine if it was more sensitive in this population ("raw scores"). Forty-six…
Longitudinal Rater Modeling with Splines
ERIC Educational Resources Information Center
Dobria, Lidia
2011-01-01
Performance assessments rely on the expert judgment of raters for the measurement of the quality of responses, and raters unavoidably introduce error in the scoring process. Defined as the tendency of a rater to assign higher or lower ratings, on average, than those assigned by other raters, even after accounting for differences in examinee…
ERIC Educational Resources Information Center
Cason, Gerald J.; And Others
Prior research in a single clinical training setting has shown Cason and Cason's (1981) simplified model of their performance rating theory can improve rating reliability and validity through statistical control of rater stringency error. Here, the model was applied to clinical performance ratings of 14 cohorts (about 250 students and 200 raters)…
ERIC Educational Resources Information Center
Erdogan, Semra; Orekici Temel, Gülhan; Selvi, Hüseyin; Ersöz Kaya, Irem
2017-01-01
Taking more than one measurement of the same variable introduces the possibility of contamination from several error sources, acting both singly and in combination through their interactions. Therefore, beyond examining the internal consistency of scores obtained from measurement tools, it is also necessary to ensure inter-rater or intra-rater agreement…
Lang, W Steve; Wilkerson, Judy R; Rea, Dorothy C; Quinn, David; Batchelder, Heather L; Englehart, Dierdre S; Jennings, Kelly J
2014-01-01
The purpose of this study was to examine the extent to which raters' subjectivity impacts measures of teacher dispositions using the Dispositions Assessments Aligned with Teacher Standards (DAATS) battery. This is an important component of the collection of evidence of validity and reliability of inferences made using the scale. It also provides needed support for the use of subjective affective measures in teacher training and other professional preparation programs, since these measures are often feared to be unreliable because of rater effect. It demonstrates the advantages of using the Multi-Faceted Rasch Model as a better alternative to the typical methods used in preparation programs, such as Cohen's Kappa. DAATS instruments require subjective scoring using a six-point rating scale derived from the affective taxonomy as defined by Krathwohl, Bloom, and Masia (1964). Rater effect is a serious challenge and can worsen or drift over time. Errors in rater judgment can impact the accuracy of ratings, and these effects are common, but can be lessened through training of raters and monitoring of their efforts. This study uses the multi-faceted Rasch measurement model (MFRM) to detect and understand the nature of these effects.
Tous-Fajardo, Julio; Moras, Gerard; Rodríguez-Jiménez, Sergio; Usach, Robert; Doutres, Daniel Moreno; Maffiuletti, Nicola A
2010-08-01
Tensiomyography (TMG) is a relatively novel technique to assess muscle mechanical response based on radial muscle belly displacement consecutive to a single electrical stimulus. Although intra-session reliability has been found to be good, inter-rater reliability and the influence of sensor repositioning and electrode placement on TMG measurements are unknown. The purpose of this study was to analyze the inter-rater reliability of vastus medialis muscle contractile property measurements obtained with TMG as well as the effect of inter-electrode distance (IED). Five contractile parameters were analyzed from vastus medialis muscle belly displacement-time curves: maximal displacement (Dm), contraction time (Tc), sustain time (Ts), delay time (Td), and half-relaxation time (Tr). The inter-rater reliability and IED effect on these measurements were evaluated in 18 subjects. Intra-class correlation coefficients, standard errors of measurement, Bland-Altman systematic bias and random error, as well as coefficients of variation, were used as measures of reliability. Overall, a good to excellent inter-rater reliability was found for all contractile parameters, except Tr, which showed insufficient reliability. Alterations in IED significantly affected Dm, with a trend for all the other parameters. The present results support the use of TMG for the assessment of vastus medialis muscle contractile properties, particularly for Dm and Tc. It is recommended to avoid Tr quantification and IED modifications during multiple TMG measurements.
Measuring Essay Assessment: Intra-Rater and Inter-Rater Reliability
ERIC Educational Resources Information Center
Kayapinar, Ulas
2014-01-01
Problem Statement: There have been many attempts to research the effective assessment of writing ability, and many proposals for how this might be done. In this sense, rater reliability plays a crucial role for making vital decisions about testees in different turning points of both educational and professional life. Intra-rater and inter-rater…
Bodilsen, Ann Christine; Juul-Larsen, Helle Gybel; Petersen, Janne; Beyer, Nina; Andersen, Ove; Bandholm, Thomas
2015-01-01
Objective Physical performance measures can be used to predict functional decline and increased dependency in older persons. However, few studies have assessed the feasibility or reliability of such measures in hospitalized older patients. Here we assessed the feasibility and inter-rater reliability of four simple measures of physical performance in acutely admitted older medical patients. Design During the first 24 hours of hospitalization, the following were assessed twice by different raters in 52 (≥ 65 years) patients admitted for acute medical illness: isometric hand grip strength, 4-meter gait speed, 30-s chair stand and Cumulated Ambulation Score. Relative reliability was expressed as weighted kappa for the Cumulated Ambulation Score or as intra-class correlation coefficient (ICC1,1) and lower limit of the 95%-confidence interval (LL95%) for grip strength, gait speed, and 30-s chair stand. Absolute reliability was expressed as the standard error of measurement and the smallest real difference as a percentage of their respective means (SEM% and SRD%). Results The primary reasons for admission of the 52 included patients were infectious disease and cardiovascular illness. The mean± SD age was 78±8.3 years, and 73.1% were women. All patients performed grip strength and Cumulated Ambulation Score testing, 81% performed the gait speed test, and 54% completed the 30-s chair stand test (46% were unable to rise without using the armrests). No systematic bias was found between first and second tests or between raters. The weighted kappa for the Cumulated Ambulation Score was 0.76 (0.60–0.92). The ICC1,1 values were as follows: grip strength, 0.95 (LL95% 0.92); gait speed, 0.92 (LL95% 0.73), and 30-s chair stand, 0.82 (LL95% 0.67). The SEM% values for grip strength, gait speed, and 30-s chair stand were 8%, 7%, and 18%, and the SRD95% values were 22%, 17%, and 49%. Conclusion In acutely admitted older medical patients, grip strength, gait speed, and the
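The weighted kappa reported above for the ordinal Cumulated Ambulation Score can be computed directly from two raters' score vectors. A hedged sketch using linear disagreement weights and hypothetical ratings (the study does not state which weighting scheme it used):

```python
def weighted_kappa(r1, r2, categories):
    """Linear-weighted Cohen's kappa for two raters on an ordinal scale."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(r1)
    # Observed joint proportions
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[idx[a]][idx[b]] += 1.0 / n
    # Marginal proportions for each rater
    p1 = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    # Linear disagreement weights: |i - j| / (k - 1)
    num = den = 0.0
    for i in range(k):
        for j in range(k):
            w = abs(i - j) / (k - 1)
            num += w * obs[i][j]        # observed weighted disagreement
            den += w * p1[i] * p2[j]    # chance-expected weighted disagreement
    return 1.0 - num / den

# Hypothetical ordinal ratings on a 0-6 scale
a = [0, 1, 2, 3, 4, 5, 6, 6, 5, 4]
b = [0, 1, 2, 3, 4, 5, 6, 5, 5, 4]
kw = weighted_kappa(a, b, list(range(7)))
```

Linear weights penalize a one-category disagreement less than a large one, which is why weighted kappa is preferred over unweighted kappa for ordinal scales such as the Cumulated Ambulation Score.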
Inter-Rectus Distance Measurement Using Ultrasound Imaging: Does the Rater Matter?
Keshwani, Nadia; Hills, Nicole; McLean, Linda
2016-01-01
Purpose: To investigate the interrater reliability of inter-rectus distance (IRD) measured from ultrasound images acquired at rest and during a head-lift task in parous women and to establish the standard error of measurement (SEM) and minimal detectable change (MDC) between two raters. Methods: Two physiotherapists independently acquired ultrasound images of the anterior abdominal wall from 17 parous women and measured IRD at four locations along the linea alba: at the superior border of the umbilicus, at 3 cm and 5 cm above the superior border of the umbilicus, and at 3 cm below the inferior border of the umbilicus. The interrater reliability of the IRD measurements was determined using intra-class correlation coefficients (ICCs). Bland-Altman analyses were used to detect bias between the raters, and SEM and MDC values were established for each measurement site. Results: When the two raters performed their own image acquisition and processing, ICCs(3,5) ranged from 0.72 to 0.91 at rest and from 0.63 to 0.96 during head lift, depending on the anatomical measurement site. Bland-Altman analyses revealed no systematic bias between the raters. SEM values ranged from 0.23 cm to 0.71 cm, and MDC values ranged from 0.64 cm to 1.97 cm. Conclusion: When using ultrasound imaging to measure IRD in women, it is acceptable for different therapists to compare IRDs between patients and within patients over time if IRD is measured above or below the umbilicus. Interrater reliability of IRD measurement is poorest at the level of the superior border of the umbilicus.
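The Bland-Altman analysis used here to detect bias between raters reduces to the mean of the paired differences and its 95% limits of agreement. A minimal sketch with hypothetical paired measurements (the numbers below are not from the study):

```python
import math

def bland_altman(x, y):
    """Bland-Altman bias and 95% limits of agreement for paired measurements."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical inter-rectus distance measurements (cm) from two raters
rater1 = [2.1, 2.5, 1.8, 3.0, 2.2, 2.7]
rater2 = [2.0, 2.6, 1.9, 2.8, 2.3, 2.6]
bias, lower, upper = bland_altman(rater1, rater2)
```

A bias near zero with narrow limits of agreement indicates no systematic difference between raters, which is the "no systematic bias" conclusion reported in the abstract.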
Gellhorn, Alfred C; Carlson, M Jake
2013-05-01
The use of ultrasound (US) to perform quantitative measurements of musculoskeletal tissues requires accurate and reliable measurements between investigators and ultrasound machines. The objective of this study was to evaluate inter-rater and intra-rater reliability of patellar tendon measurements between providers with different levels of US experience and inter-machine reliability of US machines. Sixteen subjects without a history of knee pain were evaluated with US examinations of the patellar tendon. Each tendon was scanned independently by two investigators using two different ultrasound machines. Tendon length and cross-sectional area (CSA) were obtained, and examiners were blinded to each other's results. Tendon length was measured using a validated system involving surface markers and calipers, and CSA was measured using each machine's measuring software. Intra-class correlation coefficients (ICCs) were used to determine reliability of measurements between observers, where ICC > 0.75 was considered good and ICC > 0.9 was considered excellent. Inter-rater reliability between sonographers was excellent and revealed an ICC of 0.90 to 0.92 for patellar tendon CSA and an ICC of 0.96 for tendon length. ICC for intra-rater reliability of tendon CSA was also generally excellent, with ICC between 0.87 and 0.96. Inter-machine reliability was excellent, with ICC of 0.91-0.98 for tendon CSA and 0.96-0.98 for tendon length. Bland-Altman plots were constructed to measure validity and demonstrated a mean difference between sonographers of 0.03 mm² for CSA measurements and 0.2 mm for tendon length. Using well-defined scanning protocols, a novice and an experienced musculoskeletal sonographer attained high levels of inter-rater agreement, with similarly excellent results for intra-rater and inter-machine reliability. To our knowledge, this study is the first to report inter-machine reliability in the setting of quantitative musculoskeletal ultrasound.
Dichter, Martin Nikolaus; Schwab, Christian G G; Meyer, Gabriele; Bartholomeyczik, Sabine; Dortmann, Olga; Halek, Margareta
2014-05-01
Quality of life (Qol) is an increasingly used outcome measure in dementia research. The QUALIDEM is a dementia-specific and proxy-rated Qol instrument. We aimed to determine the inter-rater and intra-rater reliability in residents with dementia in German nursing homes. The QUALIDEM consists of nine subscales that were applied to a sample of 108 people with mild to severe dementia and six consecutive subscales that were applied to a sample of 53 people with very severe dementia. The proxy raters were 49 registered nurses and nursing assistants. Inter-rater and intra-rater reliability scores were calculated on the subscale and item level. None of the QUALIDEM subscales showed strong inter-rater reliability based on the single-measure Intra-Class Correlation Coefficient (ICC) for absolute agreement ≥ 0.70. Based on the average-measure ICC for four raters, eight subscales for people with mild to severe dementia (care relationship, positive affect, negative affect, restless tense behavior, social relations, social isolation, feeling at home and having something to do) and five subscales for very severe dementia (care relationship, negative affect, restless tense behavior, social relations and social isolation) yielded a strong inter-rater agreement (ICC: 0.72-0.86). All of the QUALIDEM subscales, regardless of dementia severity, showed strong intra-rater agreement. The ICC values ranged between 0.70 and 0.79 for people with mild to severe dementia and between 0.75 and 0.87 for people with very severe dementia. This study demonstrated insufficient inter-rater reliability and sufficient intra-rater reliability for all subscales of both versions of the German QUALIDEM. The degree of inter-rater reliability can be improved by collaborative Qol rating by more than one nurse. The development of a measurement manual with accurate item definitions and a standardized education program for proxy raters is recommended.
Measuring the Impact of Rater Negotiation in Writing Performance Assessment
ERIC Educational Resources Information Center
Trace, Jonathan; Janssen, Gerriet; Meier, Valerie
2017-01-01
Previous research in second language writing has shown that when scoring performance assessments even trained raters can exhibit significant differences in severity. When raters disagree, using discussion to try to reach a consensus is one popular form of score resolution, particularly in contexts with limited resources, as it does not require…
Inter-Rater Variability as Mutual Disagreement: Identifying Raters' Divergent Points of View
ERIC Educational Resources Information Center
Gingerich, Andrea; Ramlo, Susan E.; van der Vleuten, Cees P. M.; Eva, Kevin W.; Regehr, Glenn
2017-01-01
Whenever multiple observers provide ratings, even of the same performance, inter-rater variation is prevalent. The resulting "idiosyncratic rater variance" is considered to be unusable error of measurement in psychometric models and is a threat to the defensibility of our assessments. Prior studies of inter-rater variation in clinical…
Intra and inter-rater reliability study of pelvic floor muscle dynamometric measurements
Martinho, Natalia M.; Marques, Joseane; Silva, Valéria R.; Silva, Silvia L. A.; Carvalho, Leonardo C.; Botelho, Simone
2015-01-01
OBJECTIVE: The aim of this study was to evaluate the intra- and inter-rater reliability of pelvic floor muscle (PFM) dynamometric measurements for maximum and average strengths, as well as endurance. METHOD: A convenience sample of 18 nulliparous women, without any urogynecological complaints, aged between 19 and 31 (mean age of 25.4±3.9) participated in this study. They were evaluated using a pelvic floor dynamometer based on load cell technology. The dynamometric evaluations were repeated in three successive sessions: two on the same day with a rest period of 30 minutes between them, and the third on the following day. All participants were evaluated twice in each session; first by examiner 1 followed by examiner 2. The vaginal dynamometry data were analyzed using three parameters: maximum strength, average strength, and endurance. The Intraclass Correlation Coefficient (ICC) was applied to estimate the PFM dynamometric measurement reliability, considering a good level as being above 0.75. RESULTS: The intra- and inter-rater analyses showed good reliability for maximum strength (ICCintra-rater1=0.96, ICCintra-rater2=0.95, and ICCinter-rater=0.96), average strength (ICCintra-rater1=0.96, ICCintra-rater2=0.94, and ICCinter-rater=0.97), and endurance (ICCintra-rater1=0.88, ICCintra-rater2=0.86, and ICCinter-rater=0.92) dynamometric measurements. CONCLUSIONS: The PFM dynamometric measurements showed good intra- and inter-rater reliability for maximum strength, average strength and endurance, which demonstrates that this is a reliable device that can be used in clinical practice. PMID:25993624
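Intraclass correlation coefficients like those reported above can be computed from two-way ANOVA mean squares. A sketch of ICC(3,1) (two-way mixed, single measures, consistency), with hypothetical data; the study does not specify which ICC form it used, so this is one common choice:

```python
def icc_3_1(data):
    """ICC(3,1): two-way mixed model, single measures, consistency.

    data: list of subjects, each a list of k ratings (one per rater).
    """
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    # Two-way ANOVA decomposition
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Hypothetical dynamometry readings (six subjects, two raters)
ratings = [[4.1, 4.0], [5.3, 5.5], [3.2, 3.1], [6.0, 5.8], [4.7, 4.9], [5.1, 5.0]]
icc = icc_3_1(ratings)
```

Because ICC(3,1) measures consistency, a constant additive offset between raters does not lower it; absolute-agreement forms such as ICC(2,1) would penalize that offset.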
A Simulation Study of Rater Agreement Measures with 2x2 Contingency Tables
ERIC Educational Resources Information Center
Ato, Manuel; Lopez, Juan Jose; Benavente, Ana
2011-01-01
A comparison between six rater agreement measures obtained using three different approaches was achieved by means of a simulation study. Rater coefficients suggested by Bennett's σ (1954), Scott's π (1955), Cohen's κ (1960) and Gwet's γ (2008) were selected to represent the classical, descriptive approach, α agreement…
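The chance-corrected coefficients compared in this study all share the form (pₒ − pₑ)/(1 − pₑ) and differ only in how they estimate the expected chance agreement pₑ. A sketch for a 2×2 contingency table, covering three of the coefficients (Gwet's γ omitted for brevity); the counts are hypothetical:

```python
def agreement_coefficients(table):
    """Chance-corrected agreement for a 2x2 table [[a, b], [c, d]],
    where a = both raters chose category 1, d = both chose category 2.

    Returns (Bennett's sigma, Scott's pi, Cohen's kappa).
    """
    a, b = table[0]
    c, d = table[1]
    n = a + b + c + d
    po = (a + d) / n                      # observed agreement
    p1, p2 = (a + b) / n, (a + c) / n     # each rater's marginal for category 1
    pe_sigma = 0.5                        # uniform chance over 2 categories
    pbar = (p1 + p2) / 2
    pe_pi = pbar ** 2 + (1 - pbar) ** 2   # pooled marginals (Scott)
    pe_kappa = p1 * p2 + (1 - p1) * (1 - p2)  # product of marginals (Cohen)
    correct = lambda pe: (po - pe) / (1 - pe)
    return correct(pe_sigma), correct(pe_pi), correct(pe_kappa)

# Hypothetical counts: both "yes" = 40, rater1-only = 10, rater2-only = 5, both "no" = 45
sigma, pi, kappa = agreement_coefficients([[40, 10], [5, 45]])
```

With nearly balanced marginals the three coefficients agree closely; they diverge sharply when category prevalence is skewed, which is exactly the behavior simulation studies like this one examine.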
Approximate measurement invariance in cross-classified rater-mediated assessments.
Kelcey, Ben; McGinn, Dan; Hill, Heather
2014-01-01
An important assumption underlying meaningful comparisons of scores in rater-mediated assessments is that measurement is commensurate across raters. When raters differentially apply the standards established by an instrument, scores from different raters are on fundamentally different scales and no longer preserve a common meaning and basis for comparison. In this study, we developed a method to accommodate measurement noninvariance across raters when measurements are cross-classified within two distinct hierarchical units. We conceptualized random item effects cross-classified graded response models and used random discrimination and threshold effects to test, calibrate, and account for measurement noninvariance among raters. By leveraging empirical estimates of rater-specific deviations in the discrimination and threshold parameters, the proposed method allows us to identify noninvariant items and empirically estimate and directly adjust for this noninvariance within a cross-classified framework. Within the context of teaching evaluations, the results of a case study suggested substantial noninvariance across raters and that establishing an approximately invariant scale through random item effects improves model fit and predictive validity.
Measurement Error. For Good Measure....
ERIC Educational Resources Information Center
Johnson, Stephen; Dulaney, Chuck; Banks, Karen
No test, however well designed, can measure a student's true achievement because numerous factors interfere with the ability to measure achievement. These factors are sources of measurement error, and the goal in creating tests is to have as little measurement error as possible. Error can result from the test design, factors related to individual…
Rotundo, Roberto; Nieri, Michele; Bonaccini, Daniele; Mori, Massimiliano; Lamberti, Elena; Massironi, Domenico; Giachetti, Luca; Franchi, Lorenzo; Venezia, Piero; Cavalcanti, Raffaele; Bondi, Elena; Farneti, Mauro; Pinchi, Vilma; Buti, Jacopo
2015-01-01
To propose a method to measure the esthetics of the smile and to report its validation by means of an intra-rater and inter-rater agreement analysis. Ten variables were chosen as determinants for the esthetics of a smile: smile line and facial midline, tooth alignment, tooth deformity, tooth dischromy, gingival dischromy, gingival recession, gingival excess, gingival scars and diastema/missing papillae. One examiner consecutively selected seventy smile pictures, which were in the frontal view. Ten examiners, with different levels of clinical experience and specialties, applied the proposed assessment method twice on the selected pictures, independently and blindly. Intraclass correlation coefficient (ICC) and Fleiss' kappa statistics were used to analyse the intra-rater and inter-rater agreement. Considering the cumulative assessment of the Smile Esthetic Index (SEI), the ICC value for the inter-rater agreement of the 10 examiners was 0.62 (95% CI: 0.51 to 0.72), representing a substantial agreement. Intra-rater agreement ranged from 0.86 to 0.99. Inter-rater agreement (Fleiss' kappa statistics) calculated for each variable ranged from 0.17 to 0.75. The SEI was a reproducible method for assessing the esthetic component of the smile, useful for the diagnostic phase and for setting appropriate treatment plans.
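Fleiss' kappa, used here for the per-variable agreement among the ten examiners, generalizes Cohen's kappa to any fixed number of raters. A minimal sketch with hypothetical category counts (not data from the study):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa.

    counts[i][j] = number of raters assigning item i to category j;
    every item must be rated by the same number of raters.
    """
    N = len(counts)                 # items
    n = sum(counts[0])              # raters per item
    k = len(counts[0])              # categories
    # Overall proportion of assignments per category
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    # Per-item agreement: proportion of agreeing rater pairs
    p_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    p_bar = sum(p_i) / N
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical: 4 smile variables rated present/absent by 3 raters
counts = [[3, 0], [0, 3], [2, 1], [3, 0]]
kappa = fleiss_kappa(counts)
```

Unlike the ICC, Fleiss' kappa treats categories as nominal, which suits binary or categorical smile-esthetic variables better than the cumulative index score.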
The effect of rater severity on person ability measure: a Rasch model analysis.
Lunz, M E; Stahl, J A
1993-04-01
This paper presents a method for analyzing oral examinations with an extended, many-faceted Rasch model that calibrates medical specialty candidates, protocols, and raters. Significant variance was found among protocol difficulties and rater severities. When candidates' raw scores were compared with calibrated measures corrected for the bias caused by the particular protocols and raters encountered, variation between candidate scores and measures was observed. The data were found to fit the Rasch model well enough to be suitable for making measurement on oral examinations more objective as well as providing specific feedback to oral examination raters. In this example a medical oral examination was used; however, the techniques are applicable to any situation in which trained professionals rate candidate or patient performances. For occupational therapists, potential applications include evaluation of a student's fieldwork performance or observation of a patient's task performance.
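In the many-faceted Rasch model described above, the log-odds of success decompose additively into candidate ability, protocol difficulty, and rater severity. A dichotomous sketch (the oral-examination model is polytomous, so this is a deliberate simplification):

```python
import math

def p_success(theta: float, delta: float, rho: float) -> float:
    """Dichotomous many-faceted Rasch model:
    P(success) = logistic(theta - delta - rho), all parameters in logits,
    where theta = candidate ability, delta = protocol difficulty,
    rho = rater severity."""
    return 1.0 / (1.0 + math.exp(-(theta - delta - rho)))

# Same candidate and protocol, rated by a lenient vs. a severe rater
lenient = p_success(theta=0.5, delta=0.0, rho=-1.0)
severe = p_success(theta=0.5, delta=0.0, rho=1.0)
```

Because severity enters the model as an explicit parameter, candidate measures can be reported net of the particular raters encountered, which is the correction the abstract describes.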
Compact disk error measurements
NASA Technical Reports Server (NTRS)
Howe, D.; Harriman, K.; Tehranchi, B.
1993-01-01
The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
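The error burst and good data gap statistics named in the first objective are simply the run lengths of erroneous and error-free bytes in the read channel's flag stream. A sketch, assuming a 0/1 per-byte error-flag sequence (the project's actual data format is not specified):

```python
from itertools import groupby

def burst_and_gap_lengths(error_flags):
    """Run lengths of error bursts (1s) and good-data gaps (0s)
    in a per-byte error-flag stream."""
    bursts, gaps = [], []
    for flag, run in groupby(error_flags):
        (bursts if flag else gaps).append(sum(1 for _ in run))
    return bursts, gaps

# Hypothetical read-channel flags: 1 = erroneous byte, 0 = good byte
flags = [0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0]
bursts, gaps = burst_and_gap_lengths(flags)
```

Histograms of these run lengths are what an error-rate model for a CIRC decoder would consume, since burst length relative to the interleaver depth determines correctability.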
Montgomery, Robert A; Hresko, M Timothy; Kalish, Leslie A; Gold, Meryl; Li, Ying; Haus, Brian; Glotzbecker, Michael; Berthonnaud, Eric
2013-11-01
Reliability analysis. To determine the intra-rater and inter-rater reliability of common sagittal spinopelvic measurements from Digital Imaging and Communications in Medicine images on a commercial Picture Archiving and Communication system for patients with developmental spondylolisthesis. Computer-aided analysis of digital radiographs has been used in research protocols to define anatomic and positional characteristics of developmental spondylolisthesis. Previous studies have shown poor reliability and weak correlations of manual measurements used in clinical practice with research measurements, which limit the clinical value of prior research. Five raters of varying experience measured lateral spinopelvic images of 30 patients with developmental spondylolisthesis. Measurements were repeated after 1 week. Intra-rater and inter-rater reliabilities for each measurement were determined. Measurements were compared with those obtained from a computer-based image enhancement research system. Continuous variables were assessed by analysis of variance, whereas kappa statistics were determined for categorical variables. Excellent intraclass correlation coefficients (ICCs) were obtained for all radiographic measurements based on linear values (slip ratio and C7 balance) as well as pelvic tilt angle. Angular measurements had good to excellent ICCs but were weaker when the sacral plate was involved. There was poor agreement with classification of sacral doming. Some measurements had reduced reliability in the images with evidence of doming. Excellent ICCs were found with measurements from Digital Imaging and Communications in Medicine images using commercial Picture Archiving and Communication System tools. Sacral doming affected the reliability. A radiographic classification of spondylolisthesis will be most reliable when based on slip ratio, C7 balance, and pelvic tilt. Copyright © 2013 Scoliosis Research Society. Published by Elsevier Inc. All rights reserved.
Inter-rater reliability of measures to characterize the tobacco retail environment in Mexico.
Hall, Marissa G; Kollath-Cattano, Christy; Reynales-Shigematsu, Luz Myriam; Thrasher, James F
2015-01-01
To evaluate the inter-rater reliability of a data collection instrument to assess the tobacco retail environment in Mexico, after major marketing regulations were implemented. In 2013, two data collectors independently evaluated 21 stores in two census tracts, through a data collection instrument that assessed the presence of price promotions, whether single cigarettes were sold, the number of visible advertisements, the presence of signage prohibiting the sale of cigarettes to minors, and characteristics of cigarette pack displays. We evaluated the inter-rater reliability of the collected data, through the calculation of metrics such as intraclass correlation coefficient, percent agreement, Cohen's kappa and Krippendorff's alpha. Most measures demonstrated substantial or perfect inter-rater reliability. Our results indicate the potential utility of the data collection instrument for future point-of-sale research.
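Krippendorff's alpha, one of the reliability metrics listed above, can be computed for nominal two-rater data via a coincidence matrix. A sketch with hypothetical binary store-audit codes, assuming complete data from exactly two raters:

```python
from collections import Counter

def krippendorff_alpha_nominal(r1, r2):
    """Krippendorff's alpha for nominal data, two raters, no missing values."""
    # Coincidence matrix: each unit contributes both ordered value pairs,
    # each weighted 1/(m - 1) = 1 since m = 2 raters per unit.
    pairs = Counter()
    for a, b in zip(r1, r2):
        pairs[(a, b)] += 1
        pairs[(b, a)] += 1
    n_c = Counter()
    for (a, _), cnt in pairs.items():
        n_c[a] += cnt
    n = sum(n_c.values())
    d_obs = sum(cnt for (a, b), cnt in pairs.items() if a != b)
    d_exp = sum(n_c[a] * n_c[b] for a in n_c for b in n_c if a != b) / (n - 1)
    return 1.0 - d_obs / d_exp

# Hypothetical store audits: presence (1) / absence (0) of price promotions
alpha = krippendorff_alpha_nominal([1, 0, 1, 1, 0, 0], [1, 0, 1, 0, 0, 0])
```

Unlike Cohen's kappa, alpha generalizes naturally to more raters, missing data, and other metrics (ordinal, interval), which is why point-of-sale studies often report it alongside kappa.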
Analysis of Rater Severity on Written Expression Exam Using Many Faceted Rasch Measurement
ERIC Educational Resources Information Center
Prieto, Gerardo; Nieto, Eloísa
2014-01-01
This paper describes how a Many Faceted Rasch Measurement (MFRM) approach can be applied to performance assessment focusing on rater analysis. The article provides an introduction to MFRM, a description of MFRM analysis procedures, and an example to illustrate how to examine the effects of various sources of variability on test takers' performance…
Resheff, Yehezkel S; Rotics, Shay; Harel, Roi; Spiegel, Orr; Nathan, Ran
2014-01-01
The study of animal movement has progressed rapidly in recent years, driven by technological advancement. Biologgers with acceleration (ACC) recordings are becoming increasingly popular in the fields of animal behavior and movement ecology for estimating energy expenditure and identifying behavior, with prospects for other potential uses as well. Supervised learning of behavioral modes from acceleration data has shown promising results in many species, and for a diverse range of behaviors. However, broad implementation of this technique in movement ecology research has been limited due to technical difficulties and complicated analysis, deterring many practitioners from applying this approach. This highlights the need to develop a broadly applicable tool for classifying behavior from acceleration data. Here we present a free-access, Python-based web application called AcceleRater for rapidly training, visualizing and using models for supervised learning of behavioral modes from ACC measurements. We introduce AcceleRater and illustrate its successful application for classifying vulture behavioral modes from acceleration data obtained from free-ranging vultures. The seven models offered in the AcceleRater application achieved overall accuracy of between 77.68% (Decision Tree) and 84.84% (Artificial Neural Network), with a mean overall accuracy of 81.51% and standard deviation of 3.95%. Notably, variation in performance was larger between behavioral modes than between models. AcceleRater provides the means to identify animal behavior, offering a user-friendly tool for ACC-based behavioral annotation, which will be dynamically upgraded and maintained.
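Supervised classification of behavioral modes from ACC data typically summarizes each acceleration window into features and trains a classifier on labelled windows. The sketch below uses a deliberately simple nearest-centroid classifier on two features (window mean and standard deviation) with invented toy data; AcceleRater's actual models (e.g., decision trees, neural networks) are more sophisticated:

```python
import math
from statistics import mean, stdev

def features(window):
    """Summary statistics of one acceleration window (a list of samples)."""
    return (mean(window), stdev(window))

def centroids(labelled):
    """Mean feature vector per behavioral mode."""
    out = {}
    for label, windows in labelled.items():
        feats = [features(w) for w in windows]
        out[label] = tuple(mean(f[i] for f in feats) for i in range(2))
    return out

def classify(window, cents):
    """Assign the window to the mode with the nearest centroid."""
    f = features(window)
    return min(cents, key=lambda lab: math.dist(f, cents[lab]))

# Toy training data (hypothetical ACC magnitudes): resting is low-variance,
# flapping flight is high-variance.
train = {
    "rest":   [[1.0, 1.1, 0.9, 1.0], [1.0, 0.95, 1.05, 1.0]],
    "flight": [[0.2, 1.8, 0.1, 1.9], [0.3, 1.7, 0.2, 1.8]],
}
cents = centroids(train)
print(classify([1.0, 1.05, 0.95, 1.0], cents))  # rest
```

Real pipelines use many more features (dynamic body acceleration, per-axis statistics, spectral measures) and cross-validated model selection.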
Noninvariant Measurement in Rater-Mediated Assessments of Teaching Quality
ERIC Educational Resources Information Center
Kelcey, Ben
2014-01-01
Valid and reliable measurement of teaching is essential to evaluating and improving teacher effectiveness and advancing large-scale policy-relevant research in education (Raudenbush & Sadoff, 2008). One increasingly common component of teaching evaluations is the direct observation of teachers in their classrooms. Classroom observations have…
ERIC Educational Resources Information Center
Johnson, David; VanBrackle, Lewis
2012-01-01
Raters of Georgia's (USA) state-mandated college-level writing exam, which is intended to ensure a minimal university-level writing competency, are trained to grade holistically when assessing these exams. A guiding principle in holistic grading is to not focus exclusively on any one aspect of writing but rather to give equal weight to style,…
Bedekar, Nilima; Suryawanshi, Mayuri; Rairikar, Savita; Sancheti, Parag; Shyam, Ashok
2014-01-01
Evaluation of range of motion (ROM) is an integral part of assessment of the musculoskeletal system. It is required in health, fitness, and pathological conditions, and is also used as an objective outcome measure. Several methods are described for checking spinal flexion range of motion, each with its own advantages and disadvantages. Hence, a new device using the dual-inclinometer method was introduced in this study to measure lumbar spine flexion ROM. The objective was to determine the intra- and inter-rater reliability of a mobile device goniometer in measuring lumbar flexion range of motion. An iPod mobile device with goniometer software was used. The part being measured, i.e., the back of the subject, was suitably exposed. The subject stood with feet shoulder-width apart. The spinous processes of the second sacral vertebra (S2) and T12 were located and used as reference points, and readings were taken. Three readings were taken each for inter-rater and intra-rater reliability, with sufficient rest between flexion movements. Intra-rater reliability (ICC) was 0.920 and inter-rater reliability was 0.812 (95% CI). Validity was r=0.95. The mobile device goniometer has high intra-rater reliability; inter-rater reliability was moderate. This device can be used to assess range of motion of spinal flexion, representing uni-planar movement.
Wendelken, Martin E; Berg, William T; Lichtenstein, Philip; Markowitz, Lee; Comfort, Christopher; Alvarez, Oscar M
2011-09-01
Traditional wound tracing technique consists of tracing the perimeter of the wound on clear acetate with a fine-tip marker, then placing the tracing on graph paper and counting the grids to calculate the surface area. Standard wound measurement technique for calculating wound surface area (wound tracing) was compared to a new wound measurement method using digital photo-planimetry software ([DPPS], PictZar® Digital Planimetry). Two hundred wounds of varying etiologies were measured and traced by experienced examiners (raters). Simultaneously, digital photographs were also taken of each wound. The digital photographs were downloaded onto a PC, and using DPPS software, the wounds were measured and traced by the same examiners. Accuracy and intra- and inter-rater reliability of wound measurements obtained from tracings and from DPPS were studied and compared. Both accuracy and rater variability were directly related to wound size when wounds were measured and traced in the traditional manner. In small (< 4 cm²), regularly shaped (round or oval) wounds, accuracy and rater reliability were 98% and 95%, respectively. However, in larger, irregularly shaped wounds or wounds with epithelial islands, DPPS was more accurate than traditional measuring (3.9% vs. 16.2% [average error]). The mean inter-rater reliability score was 94% for DPPS and 84% for traditional measuring. The mean intra-rater reliability score was 98.3% for DPPS and 89.3% for traditional measuring. In contrast to traditional measurements, DPPS may provide a more objective assessment since it can be done by a technician who is blinded to the treatment plan. Planimetry of digital photographs allows for a closer examination (zoom) of the wound and better visibility of advancing epithelium. Measurements of wounds performed on digital photographs using planimetry software were simple and convenient. It was more accurate, more objective, and resulted in better correlation within and
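Digital planimetry ultimately reduces to computing the area enclosed by a traced perimeter. A minimal sketch using the shoelace formula (an illustration of the general principle, not PictZar's actual implementation):

```python
def polygon_area(points):
    """Surface area enclosed by a traced perimeter (shoelace formula).

    `points` is a list of (x, y) vertices in tracing order; units are
    whatever the image calibration provides (e.g., cm).
    """
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# A 2 cm x 2 cm square traced as four corner points: area 4 cm^2.
print(polygon_area([(0, 0), (2, 0), (2, 2), (0, 2)]))  # 4.0
```

Because the formula handles arbitrary simple polygons, accuracy does not degrade with irregular wound shapes the way manual grid counting does.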
Surface temperature measurement errors
Keltner, N.R.; Beck, J.V.
1983-05-01
Mathematical models are developed for the response of surface mounted thermocouples on a thick wall. These models account for the significant causes of errors in both the transient and steady-state response to changes in the wall temperature. In many cases, closed form analytical expressions are given for the response. The cases for which analytical expressions are not obtained can be easily evaluated on a programmable calculator or a small computer.
Awatani, Takenori; Mori, Seigo; Shinohara, Junji; Koshiba, Hiroya; Nariai, Miki; Tatsumi, Yasutaka; Nagata, Akinori; Morikita, Ikuhiro
2016-03-01
[Purpose] The purpose of the present study was to establish the same-session and between-day intra-rater reliability of measurements of extensor strength in the maximum abducted position (MABP) using a hand-held dynamometer (HHD). [Subjects] Thirteen healthy volunteers (10 male, 3 female; mean ± SD age: 19.8 ± 0.8 y) participated in the study. [Methods] Participants in the prone position with the shoulder in maximum abduction were instructed to hold the contraction against the ground reaction force, and peak isometric force was recorded using the HHD on the floor. Participants performed maximum isometric contractions lasting 3 s, with 3 trials in one session. Between-day measurements were performed in 2 sessions separated by a 1-week interval. Intra-rater reliability was determined using intraclass correlation coefficients (ICCs). Systematic errors were assessed using Bland-Altman analysis for between-day data. [Results] ICC values for same-session and between-day data were found to be "almost perfect". No systematic errors were found; only random error was present. [Conclusion] The measurement method used in this study can easily control for experimental conditions and allows precise measurement because the lack of stabilization and the impact of tester strength are removed. Thus, measurement of extensor strength in the MABP is useful for muscle strength assessment.
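Bland-Altman analysis, as used here for the between-day data, reports the mean difference (bias) and 95% limits of agreement. A minimal sketch with hypothetical force values (not the study's data):

```python
from statistics import mean, stdev

def bland_altman(day1, day2):
    """Bias and 95% limits of agreement between two measurement sessions."""
    diffs = [a - b for a, b in zip(day1, day2)]
    bias = mean(diffs)                 # systematic error (should be near 0)
    sd = stdev(diffs)                  # spread of random error
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical peak isometric forces (N) from two sessions a week apart.
s1 = [112.0, 98.5, 105.2, 120.1, 101.7, 99.0]
s2 = [110.5, 99.2, 104.8, 121.0, 100.9, 98.2]
bias, (lo, hi) = bland_altman(s1, s2)
print(round(bias, 2), round(lo, 2), round(hi, 2))
```

A bias close to zero with narrow limits of agreement supports the abstract's conclusion that only random error is present.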
Measurement error models with interactions.
Midthune, Douglas; Carroll, Raymond J; Freedman, Laurence S; Kipnis, Victor
2016-04-01
An important use of measurement error models is to correct regression models for bias due to covariate measurement error. Most measurement error models assume that the observed error-prone covariate (W) is a linear function of the unobserved true covariate (X) plus other covariates (Z) in the regression model. In this paper, we consider models for W that include interactions between X and Z. We derive the conditional distribution of X given W and Z and use it to extend the method of regression calibration to this class of measurement error models. We apply the model to dietary data and test whether self-reported dietary intake includes an interaction between true intake and body mass index. We also perform simulations to compare the model to simpler approximate calibration models. Published by Oxford University Press 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.
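The attenuation bias that regression calibration corrects can be shown in a small simulation. The sketch below covers only the simplest interaction-free case with the calibration factor lambda assumed known, not the paper's interaction model:

```python
import random
from statistics import mean

random.seed(0)

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

# Simulate true covariate X, error-prone W = X + U, outcome Y = 2X + noise.
n = 20000
x = [random.gauss(0, 1) for _ in range(n)]
w = [xi + random.gauss(0, 1) for xi in x]        # measurement error, var(U) = 1
y = [2 * xi + random.gauss(0, 0.5) for xi in x]

naive = ols_slope(w, y)  # attenuated toward 0
# Regression calibration: replace W by E[X | W] = lambda * W, where
# lambda = var(X) / (var(X) + var(U)) = 0.5 in this simulation.
calibrated = ols_slope([0.5 * wi for wi in w], y)
print(round(naive, 2), round(calibrated, 2))  # roughly 1.0 and 2.0
```

The naive slope is roughly halved by the measurement error; calibrating W recovers the true coefficient of 2.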
Esser, Patrick; Dawes, Helen; Collett, Johnny; Feltham, Max G; Howells, Ken
2012-03-30
Walking models driven by centre of mass (CoM) data obtained from inertial measurement units (IMU) or optical motion capture systems (OMCS) can be used to objectively measure gait. However current models have only been validated within typical developed adults (TDA). The purpose of this study was to compare the projected CoM movement within Parkinson's disease (PD) measured by an IMU with data collected from an OMCS after which spatio-temporal gait measures were derived using an inverted pendulum model. The inter-rater reliability of spatio-temporal parameters was explored between expert researchers and clinicians using the IMU processed data. Participants walked 10 m with an IMU attached over their centre of mass which was simultaneously recorded by an OMCS. Data was collected on two occasions, each by an expert researcher and clinician. Ten people with PD showed no difference (p=0.13) for vertical, translatory acceleration, velocity and relative position of the projected centre of mass between IMU and OMCS data. Furthermore no difference (p=0.18) was found for the derived step time, stride length and walking speed for people with PD. Measurements of step time (p=0.299), stride length (p=0.883) and walking speed (p=0.751) did not differ between experts and clinicians. There was good inter-rater reliability for these parameters (ICC3.1=0.979, ICC3.1=0.958 and ICC3.1=0.978, respectively). The findings are encouraging and support the use of IMUs by clinicians to measure CoM movement in people with PD. Copyright © 2012 Elsevier B.V. All rights reserved.
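Inverted pendulum gait models derive spatio-temporal parameters from the vertical excursion of the CoM. The sketch below uses one common formulation, step length = 2*sqrt(2*l*h - h^2) with l the CoM height and h the vertical excursion; the authors' exact model and parameter choices may differ:

```python
import math

def step_length(com_height_m, com_excursion_m):
    """Step length from vertical CoM excursion under a simple inverted
    pendulum model (one common formulation; illustrative only)."""
    l, h = com_height_m, com_excursion_m
    return 2 * math.sqrt(2 * l * h - h ** 2)

def walking_speed(step_len_m, step_time_s):
    """Average speed as step length over step time."""
    return step_len_m / step_time_s

sl = step_length(0.9, 0.03)   # 0.9 m CoM height, 3 cm vertical excursion
print(round(sl, 3), round(walking_speed(sl, 0.55), 2))
```

The CoM excursion itself is obtained by double-integrating the trunk-mounted IMU's vertical acceleration, which is where the IMU-vs-OMCS comparison in this study matters.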
Measuring the Pain Area: An Intra- and Inter-Rater Reliability Study Using Image Analysis Software.
Dos Reis, Felipe Jose Jandre; de Barros E Silva, Veronica; de Lucena, Raphaela Nunes; Mendes Cardoso, Bruno Alexandre; Nogueira, Leandro Calazans
2016-01-01
Pain drawings have frequently been used for clinical information and research. The aim of this study was to investigate intra- and inter-rater reliability of area measurements performed on pain drawings. Our secondary objective was to verify the reliability when using computers with different screen sizes, both with and without mouse hardware. Pain drawings were completed by patients with chronic neck pain or neck-shoulder-arm pain. Four independent examiners participated in the study. Examiners A and B used the same computer with a 16-inch screen and wired mouse hardware. Examiner C used a notebook with a 16-inch screen and no mouse hardware, and Examiner D used a computer with an 11.6-inch screen and a wireless mouse. Image measurements were obtained using GIMP and NIH ImageJ computer programs. The length of all the images was measured using GIMP software to a set scale in ImageJ. Thus, each marked area was encircled and the total surface area (cm²) was calculated for each pain drawing measurement. A total of 117 areas were identified and 52 pain drawings were analyzed. The intra-rater reliability between all examiners was high (ICC = 0.989). The inter-rater reliability was also high. No significant differences were observed when using different screen sizes or when using or not using the mouse hardware. This suggests that the precision of these measurements is acceptable for the use of this method as a measurement tool in clinical practice and research. © 2014 World Institute of Pain.
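The ICCs reported in such studies can be computed from a subjects-by-raters table of measurements. A minimal one-way random-effects ICC(1,1) sketch with hypothetical area values (the study may have used a different ICC form):

```python
from statistics import mean

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) from an n-subjects x k-raters table."""
    n = len(ratings)
    k = len(ratings[0])
    grand = mean(v for row in ratings for v in row)
    row_means = [mean(row) for row in ratings]
    # Between-subject and within-subject mean squares (one-way ANOVA).
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum(
        (v - m) ** 2 for row, m in zip(ratings, row_means) for v in row
    ) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical pain-area measurements (cm^2) by two raters on four drawings.
areas = [[12.1, 12.4], [30.5, 29.8], [5.2, 5.6], [44.0, 43.1]]
print(round(icc_oneway(areas), 3))
```

The ICC is high here because between-subject variation (drawings of very different sizes) dwarfs the small disagreements between raters, the same pattern reported in the abstract.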
Zapf, Antonia; Castell, Stefanie; Morawietz, Lars; Karch, André
2016-08-05
Reliability of measurements is a prerequisite of medical research. For nominal data, Fleiss' kappa (in the following labelled as Fleiss' K) and Krippendorff's alpha provide the highest flexibility of the available reliability measures with respect to number of raters and categories. Our aim was to investigate which measures and which confidence intervals provide the best statistical properties for the assessment of inter-rater reliability in different situations. We performed a large simulation study to investigate the precision of the estimates for Fleiss' K and Krippendorff's alpha and to determine the empirical coverage probability of the corresponding confidence intervals (asymptotic for Fleiss' K and bootstrap for both measures). Furthermore, we compared measures and confidence intervals in a real world case study. Point estimates of Fleiss' K and Krippendorff's alpha did not differ from each other in all scenarios. In the case of missing data (completely at random), Krippendorff's alpha provided stable estimates, while the complete case analysis approach for Fleiss' K led to biased estimates. For shifted null hypotheses, the coverage probability of the asymptotic confidence interval for Fleiss' K was low, while the bootstrap confidence intervals for both measures provided a coverage probability close to the theoretical one. Fleiss' K and Krippendorff's alpha with bootstrap confidence intervals are equally suitable for the analysis of reliability of complete nominal data. The asymptotic confidence interval for Fleiss' K should not be used. In the case of missing data or data of higher than nominal order, Krippendorff's alpha is recommended. Together with this article, we provide an R-script for calculating Fleiss' K and Krippendorff's alpha and their corresponding bootstrap confidence intervals.
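The article ships an R-script for Fleiss' K with bootstrap confidence intervals; below is an analogous pure-Python sketch for complete nominal data, with hypothetical counts and a percentile bootstrap that resamples subjects:

```python
import random
from statistics import mean

def fleiss_kappa(counts):
    """Fleiss' kappa from an N-subjects x k-categories count matrix,
    each row summing to the number of raters n."""
    N = len(counts)
    n = sum(counts[0])
    k = len(counts[0])
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    p_bar = mean(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts
    )
    p_e = sum(p * p for p in p_j)
    if p_e == 1.0:  # all ratings in one category: perfect agreement
        return 1.0
    return (p_bar - p_e) / (1 - p_e)

def bootstrap_ci(counts, b=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI, resampling subjects (rows) with replacement."""
    rng = random.Random(seed)
    stats = sorted(
        fleiss_kappa([rng.choice(counts) for _ in range(len(counts))])
        for _ in range(b)
    )
    return stats[int(b * alpha / 2)], stats[int(b * (1 - alpha / 2)) - 1]

# Hypothetical ratings: 6 subjects, 3 categories, 4 raters per subject.
counts = [[4, 0, 0], [0, 4, 0], [3, 1, 0], [0, 3, 1], [4, 0, 0], [1, 3, 0]]
k = fleiss_kappa(counts)
lo, hi = bootstrap_ci(counts)
print(round(k, 3), round(lo, 3), round(hi, 3))
```

As the paper notes, the percentile bootstrap is preferable to the asymptotic interval for Fleiss' K; with only six subjects the interval here is (realistically) very wide.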
2013-01-01
Background There is a shortage of agreement studies relevant for measuring changes over time in lumbar intervertebral disc structures. The objectives of this study were: 1) to develop a method for measurement of intervertebral disc height, anterior and posterior disc material and dural sac diameter using MRI, 2) to evaluate intra- and inter-rater agreement and reliability for the measurements included, and 3) to identify factors compromising agreement. Methods Measurements were performed on MRIs from 16 people with and 16 without lumbar disc herniation, purposefully chosen to represent all possible disc contours among participants in a general population study cohort. Using the new method, MRIs were measured twice by one rater and once by a second rater. Agreement on the sagittal start- and end-slice was evaluated using weighted Kappa. Length and volume measurements were conducted on available slices between intervertebral foramens, and cross-sectional areas (CSA) were calculated from length measurements and slice thickness. Results were reported as Bland and Altman’s limits of agreement (LOA) and intraclass correlation coefficients (ICC). Results Weighted Kappa (Kw (95% CI)) for start- and end-slice were: intra-rater: 0.82 (0.60; 0.97) & 0.71 (0.43; 0.93); inter-rater: 0.56 (0.29; 0.78) & 0.60 (0.35; 0.81). For length measurements, LOA ranged from [−1.0;1.0] mm to [−2.0;2.3] mm for intra-; and from [−1.1; 1.4] mm to [−2.6;2.0] mm for inter-rater. For volume measurements, LOA ranged from [−293;199] mm3 to [−582;382] mm3 for intra-, and from [−17;801] mm3 to [−450;713] mm3 for inter-rater. For CSAs, LOA ranged between [−21.3; 18.8] mm2 and [−31.2; 43.7] mm2 for intra-, and between [−10.8; 16.4] mm2 and [−64.6; 27.1] mm2 for inter-rater. In general, LOA as a proportion of mean values gradually decreased with increasing size of the measured structures. Agreement was compromised by difficulties in identifying the vertebral corners, the anterior and
ERIC Educational Resources Information Center
Hurtz, Gregory M.; Jones, J. Patrick
2009-01-01
Standard setting methods such as the Angoff method rely on judgments of item characteristics; item response theory empirically estimates item characteristics and displays them in item characteristic curves (ICCs). This study evaluated several indexes of rater fit to ICCs as a method for judging rater accuracy in their estimates of expected item…
van Trijffel, Emiel; van de Pol, Rachel J; Oostendorp, Rob Ab; Lucas, Cees
2010-01-01
What is the inter-rater reliability for measurements of passive physiological or accessory movements in lower extremity joints? Systematic review of studies of inter-rater reliability. Individuals with and without lower extremity disorders. Range of motion and end-feel using methods feasible in daily practice. 17 studies were included of which 5 demonstrated acceptable inter-rater reliability. Reliability of measurements of physiological range of motion ranged from Kappa -0.02 for measuring knee extension using a goniometer to ICC 0.97 for measuring knee flexion using vision. Measuring range of knee flexion consistently yielded acceptable reliability using either vision or instruments. Measurements of end-feel were unreliable for all hip and knee movements. Two studies satisfied all criteria for internal validity while reporting acceptable reliability for measuring physiological range of knee flexion and extension. Overall, however, methodological quality of included studies was poor. Inter-rater reliability of measurement of passive movements in lower extremity joints is generally low. We provide specific recommendations for the conduct and reporting of future research. Awaiting new evidence, clinicians should be cautious when relying on results from measurements of passive movements in joints for making decisions about patients with lower extremity disorders.
Participant, Rater, and Computer Measures of Coherence in Posttraumatic Stress Disorder
Rubin, David C.; Deffler, Samantha A.; Ogle, Christin M.; Dowell, Nia M.; Graesser, Arthur C.; Beckham, Jean C.
2015-01-01
We examined the coherence of trauma memories in a trauma-exposed community sample of 30 adults with and 30 without PTSD. The groups had similar categories of traumas and were matched on multiple factors that could affect the coherence of memories. We compared the transcribed oral trauma memories of participants with their most important and most positive memories. A comprehensive set of 28 measures of coherence including 3 ratings by the participants, 7 ratings by outside raters, and 18 computer-scored measures, provided a variety of approaches to defining and measuring coherence. A MANOVA indicated differences in coherence among the trauma, important, and positive memories, but not between the diagnostic groups or their interaction with these memory types. Most differences were small in magnitude; in some cases, the trauma memories were more, rather than less, coherent than the control memories. Where differences existed, the results agreed with the existing literature, suggesting that factors other than the incoherence of trauma memories are most likely to be central to the maintenance of PTSD and thus its treatment. PMID:26523945
Measurement error in geometric morphometrics.
Fruciano, Carmelo
2016-06-01
Geometric morphometrics, a set of methods for the statistical analysis of shape once hailed as a revolutionary advancement in the analysis of morphology, is now mature and routinely used in ecology and evolution. However, a factor often disregarded in empirical studies is the presence and extent of measurement error. This is potentially a very serious issue because random measurement error can inflate the amount of variance and, since many statistical analyses are based on the amount of "explained" relative to "residual" variance, can result in loss of statistical power. On the other hand, systematic bias can affect statistical analyses by biasing the results (i.e., variation due to bias is incorporated in the analysis and treated as biologically meaningful variation). Here, I briefly review common sources of error in geometric morphometrics. I then review the most commonly used methods to measure and account for both random and non-random measurement error, providing a worked example using a real dataset.
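The variance inflation described here is easy to demonstrate: for independent random error, the observed variance is the sum of the true (biological) variance and the error variance. A small simulation with invented values:

```python
import random
from statistics import variance

random.seed(42)

# True coordinate values (e.g., a landmark position) and their
# error-contaminated measurements (e.g., digitizing error).
true_vals = [random.gauss(10.0, 1.0) for _ in range(50000)]
measured = [t + random.gauss(0.0, 0.5) for t in true_vals]

# Random error adds its variance to the biological signal:
# var(measured) ~= var(true) + var(error) = 1.0 + 0.25.
print(round(variance(true_vals), 2), round(variance(measured), 2))
```

Because the extra variance is pure noise, any analysis that partitions "explained" against "residual" variance loses power in exactly the way the abstract describes.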
Intra- and inter-rater reliability of digital image analysis for skin color measurement
Sommers, Marilyn; Beacham, Barbara; Baker, Rachel; Fargo, Jamison
2013-01-01
Background We determined the intra- and inter-rater reliability of data from digital image color analysis between an expert and novice analyst. Methods Following training, the expert and novice independently analyzed 210 randomly ordered images. Both analysts used Adobe® Photoshop lasso or color sampler tools based on the type of image file. After color correction with Pictocolor® in camera software, they recorded L*a*b* (L*=light/dark; a*=red/green; b*=yellow/blue) color values for all skin sites. We computed intra-rater and inter-rater agreement within anatomical region, color value (L*, a*, b*), and technique (lasso, color sampler) using a series of one-way intra-class correlation coefficients (ICCs). Results Results of ICCs for intra-rater agreement showed high levels of internal consistency reliability within each rater for the lasso technique (ICC ≥ 0.99) and somewhat lower, yet acceptable, level of agreement for the color sampler technique (ICC = 0.91 for expert, ICC = 0.81 for novice). Skin L*, skin b*, and labia L* values reached the highest level of agreement (ICC ≥ 0.92) and skin a*, labia b*, and vaginal wall b* were the lowest (ICC ≥ 0.64). Conclusion Data from novice analysts can achieve high levels of agreement with data from expert analysts with training and the use of a detailed, standard protocol. PMID:23551208
Statistical fusion of surface labels provided by multiple raters
NASA Astrophysics Data System (ADS)
Bogovic, John A.; Landman, Bennett A.; Bazin, Pierre-Louis; Prince, Jerry L.
2010-03-01
Studies of the size and morphology of anatomical structures rely on accurate and reproducible delineation of the structures, obtained either by human raters or automatic segmentation algorithms. Measures of reproducibility and variability are vital aspects of such studies and are usually estimated using repeated scans or repeated delineations (in the case of human raters). Methods exist for simultaneously estimating the true structure and rater performance parameters from multiple segmentations and have been demonstrated on volumetric images. In this work, we extend the applicability of previous methods onto two-dimensional surfaces parameterized as triangle meshes. Label homogeneity is enforced using a Markov random field formulated with an energy that addresses the challenges introduced by the surface parameterization. The method was tested using both simulated raters and cortical gyral labels. Simulated raters are computed using a global error model as well as a novel and more realistic boundary error model. We study the impact of raters and their accuracy based on both models, and show how effectively this method estimates the true segmentation on simulated surfaces. The Markov random field formulation was shown to effectively enforce homogeneity for raters suffering from label noise. We demonstrated that our method provides substantial improvements in accuracy over single-atlas methods for all experimental conditions.
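As a baseline for the statistical fusion methods discussed here, multiple raters' surface labels can be fused by per-vertex majority voting; the paper's approach additionally estimates rater performance parameters and enforces spatial homogeneity with a Markov random field. A minimal sketch with invented labels:

```python
from collections import Counter

def fuse_labels(delineations):
    """Fuse multiple raters' per-vertex labels by majority vote."""
    n_vertices = len(delineations[0])
    fused = []
    for v in range(n_vertices):
        votes = Counter(d[v] for d in delineations)
        fused.append(votes.most_common(1)[0][0])  # most frequent label wins
    return fused

# Three hypothetical raters labeling five mesh vertices with gyral labels.
raters = [
    ["A", "A", "B", "B", "C"],
    ["A", "B", "B", "B", "C"],
    ["A", "A", "A", "B", "B"],
]
print(fuse_labels(raters))  # ['A', 'A', 'B', 'B', 'C']
```

Majority voting weights all raters equally; performance-weighted fusion (as in STAPLE-style methods) instead learns each rater's reliability from the data.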
Measuring Test Measurement Error: A General Approach
ERIC Educational Resources Information Center
Boyd, Donald; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James
2013-01-01
Test-based accountability as well as value-added asessments and much experimental and quasi-experimental research in education rely on achievement tests to measure student skills and knowledge. Yet, we know little regarding fundamental properties of these tests, an important example being the extent of measurement error and its implications for…
ERIC Educational Resources Information Center
Murphy, Daniel L.; Beretvas, S. Natasha
2015-01-01
This study examines the use of cross-classified random effects models (CCrem) and cross-classified multiple membership random effects models (CCMMrem) to model rater bias and estimate teacher effectiveness. Effect estimates are compared using CTT versus item response theory (IRT) scaling methods and three models (i.e., conventional multilevel…
Sršen, Katja Groleger; Vidmar, Gaj; Pikl, Maša; Vrečar, Irena; Burja, Cirila; Krušec, Klavdija
2012-06-01
The Halliwick concept is widely used in different settings to promote joyful movement in water and swimming. To assess the swimming skills and progression of an individual swimmer, a valid and reliable measure should be used. The Halliwick-concept-based Swimming with Independent Measure (SWIM) was introduced for this purpose. We aimed to determine its content validity and inter-rater reliability. Fifty-four healthy children, 3.5-11 years old, from a mainstream swimming program participated in a content validity study. They were evaluated with SWIM and the national evaluation system of swimming abilities (classifying children into seven categories). To study the inter-rater reliability of SWIM, we included 37 children and youth from a Halliwick swimming program, aged 7-22 years, who were evaluated by two Halliwick instructors independently. The average SWIM score differed between national evaluation system categories and followed the expected order (P<0.001), whereby a ceiling effect was observed in the higher categories. High inter-rater reliability was found for all 11 SWIM items. The lowest reliability was observed for item G (sagittal rotation), although the estimates were still above 0.9. As expected, the highest reliability was observed for the total score (intraclass correlation 0.996). The validity of SWIM with respect to the national evaluation system of swimming abilities is high until the point where a swimmer is well adapted to water and already able to learn some swimming techniques. The inter-rater reliability of SWIM is very high; thus, we believe that SWIM can be used in further research and practice to follow the progress of swimmers.
Better Stability with Measurement Errors
NASA Astrophysics Data System (ADS)
Argun, Aykut; Volpe, Giovanni
2016-06-01
Often it is desirable to stabilize a system around an optimal state. This can be effectively accomplished using feedback control, where the system deviation from the desired state is measured in order to determine the magnitude of the restoring force to be applied. Contrary to conventional wisdom, i.e. that a more precise measurement is expected to improve the system stability, here we demonstrate that a certain degree of measurement error can improve the system stability. We exemplify the implications of this finding with numerical examples drawn from various fields, such as the operation of a temperature controller, the confinement of a microscopic particle, the localization of a target by a microswimmer, and the control of a population.
The Effects of Rater Training on Inter-Rater Agreement
ERIC Educational Resources Information Center
Pufpaff, Lisa A.; Clarke, Laura; Jones, Ruth E.
2015-01-01
This paper addresses the effects of rater training on the rubric-based scoring of three preservice teacher candidate performance assessments. This project sought to evaluate the consistency of ratings assigned to student learning outcome measures being used for program accreditation and to explore the need for rater training in order to increase…
Improved Error Thresholds for Measurement-Free Error Correction
NASA Astrophysics Data System (ADS)
Crow, Daniel; Joynt, Robert; Saffman, M.
2016-09-01
Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^-3 to 10^-4, comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.
Measurement Error and Equating Error in Power Analysis
ERIC Educational Resources Information Center
Phillips, Gary W.; Jiang, Tao
2016-01-01
Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…
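The mechanism at work can be illustrated with the standard psychometric attenuation formula: measurement error shrinks a standardized effect by the square root of score reliability, which in turn reduces power. The sketch below uses a normal-approximation power formula for a two-sided, two-sample test at alpha = 0.05; the numbers are illustrative, not from the paper.

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def power_two_sample(d, n_per_group):
    """Approximate power of a two-sided, two-sample z-test at alpha = 0.05
    for standardized effect size d (normal approximation to the t-test)."""
    z_crit = 1.959964                       # two-sided 5% critical value
    nc = d * math.sqrt(n_per_group / 2.0)   # noncentrality parameter
    return phi(nc - z_crit) + phi(-nc - z_crit)

d_true, reliability, n = 0.5, 0.8, 64
d_obs = d_true * math.sqrt(reliability)  # attenuation by score reliability
print(power_two_sample(d_true, n))       # power with error-free scores
print(power_two_sample(d_obs, n))        # power after measurement error
```

Equating error acts the same way: any variance added to the scores deflates the observed effect size and silently lowers the power the study actually has.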
Error analysis of tissue resistivity measurement.
Tsai, Jang-Zern; Will, James A; Hubbard-Van Stelle, Scott; Cao, Hong; Tungjitkusolmun, Supan; Choy, Young Bin; Haemmerich, Dieter; Vorperian, Vicken R; Webster, John G
2002-05-01
We identified the error sources in a system for measuring tissue resistivity at eight frequencies from 1 Hz to 1 MHz using the four-terminal method. We expressed the measured resistivity with an analytical formula containing all error terms and conducted practical error measurements with in-vivo and bench-top experiments, averaging errors at all frequencies over all measurements. The quantization error of the 8-bit digital oscilloscope (with voltage averaging), the nonideality of the circuit, the in-vivo motion artifact, and electrical interference combined to yield an error of ±1.19%. The dimensional error in measuring the syringe tube used to determine the reference saline resistivity added ±1.32%. Estimating the working probe constant by interpolating a set of probe constants measured in reference saline solutions added ±0.48%. The difference between the current magnitudes used during probe calibration and during tissue resistivity measurement caused ±0.14%. Variation of the electrode spacing, alignment, and electrode surface properties due to insertion of the electrodes into the tissue caused ±0.61%. Combining these errors yields an overall standard deviation of the measured tissue resistivity of ±1.96%.
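The abstract does not state how the components were combined, but root-sum-square combination of the independent error components reproduces the reported overall figure to within rounding, suggesting this standard propagation rule:

```python
import math

# Independent error components (standard deviations, in %) as reported above
components = {
    "oscilloscope quantization, circuit, motion artifact, interference": 1.19,
    "syringe-tube dimension (reference saline resistivity)": 1.32,
    "probe-constant interpolation": 0.48,
    "calibration vs. measurement current magnitude": 0.14,
    "electrode spacing/alignment/surface on insertion": 0.61,
}

# Root-sum-square combination of independent errors
overall = math.sqrt(sum(v ** 2 for v in components.values()))
print(f"combined error: +/- {overall:.2f}%")  # prints: combined error: +/- 1.94%
```

The result, ±1.94%, matches the reported ±1.96% to within the rounding of the individual components.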
2010-01-01
Background: The COSMIN checklist is a tool for evaluating the methodological quality of studies on measurement properties of health-related patient-reported outcomes. The aim of this study was to determine the inter-rater agreement and reliability of each item score of the COSMIN checklist (n = 114). Methods: 75 articles evaluating measurement properties were randomly selected from the bibliographic database compiled by the Patient-Reported Outcome Measurement Group, Oxford, UK. Raters were asked to assess the methodological quality of three articles using the COSMIN checklist. In a one-way design, percentage agreement and intraclass kappa coefficients or quadratic-weighted kappa coefficients were calculated for each item. Results: 88 raters participated. Of the 75 selected articles, 26 were rated by four to six participants and 49 by two or three participants. Overall, percentage agreement was acceptable (68% of items had more than 80% agreement), but the kappa coefficients for the COSMIN items were low (61% below 0.40, 6% above 0.75). Reasons for low inter-rater agreement were the need for subjective judgement and raters being accustomed to different standards, terminology, and definitions. Conclusions: The results indicate that raters often chose the same response option, but that at the item level it is difficult to distinguish between articles. When using the COSMIN checklist in a systematic review, we recommend obtaining some training and experience, having the checklist completed by two independent raters, and reaching consensus on one final rating. The instructions for using the checklist have been improved. PMID:20860789
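Quadratic-weighted kappa, one of the two statistics used above, penalizes disagreements by the squared distance between ordinal categories. A minimal sketch (the ratings below are invented; COSMIN's 4-point excellent/good/fair/poor scale is assumed):

```python
from collections import Counter

def quadratic_weighted_kappa(a, b, categories):
    """Quadratic-weighted Cohen's kappa for two raters' ordinal ratings."""
    n, k = len(a), len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    # Quadratic disagreement weights, normalized to [0, 1]
    w = [[((i - j) ** 2) / ((k - 1) ** 2) for j in range(k)] for i in range(k)]
    obs = [[0.0] * k for _ in range(k)]          # observed joint proportions
    for x, y in zip(a, b):
        obs[idx[x]][idx[y]] += 1 / n
    pa, pb = Counter(a), Counter(b)              # marginal counts per rater
    exp = [[pa[categories[i]] / n * pb[categories[j]] / n
            for j in range(k)] for i in range(k)]
    num = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    den = sum(w[i][j] * exp[i][j] for i in range(k) for j in range(k))
    return 1.0 - num / den

# Two raters scoring the same checklist items (illustrative data)
r1 = ["poor", "fair", "good", "excellent", "good", "fair", "good", "excellent"]
r2 = ["poor", "good", "good", "excellent", "fair", "fair", "good", "good"]
scale = ["poor", "fair", "good", "excellent"]
print(round(quadratic_weighted_kappa(r1, r2, scale), 3))
```

With no disagreement the function returns 1.0; chance-level agreement gives values near 0.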
ERIC Educational Resources Information Center
Bock, Douglas G.; And Others
1984-01-01
This study (1) demonstrates the negative impact of profanity in a public speech and (2) sheds light on the conceptualization of the term "rating error." Implications for classroom teaching are discussed. (PD)
Correlated measurement error hampers association network inference.
Kaduk, Mateusz; Hoefsloot, Huub C J; Vis, Daniel J; Reijmers, Theo; van der Greef, Jan; Smilde, Age K; Hendriks, Margriet M W B
2014-09-01
Modern chromatography-based metabolomics measurements generate large amounts of data in the form of abundances of metabolites. An increasingly popular way of representing and analyzing such data is by means of association networks. Ideally, such a network can be interpreted in terms of the underlying biology. A property of chromatography-based metabolomics data is that the measurement error structure is complex: apart from the usual (random) instrumental error there is also correlated measurement error. This is intrinsic to the way the samples are prepared and the analyses are performed and cannot be avoided. The impact of correlated measurement errors on (partial) correlation networks can be large and is not always predictable. The interplay between relative amounts of uncorrelated measurement error, correlated measurement error and biological variation defines this impact. Using chromatography-based time-resolved lipidomics data obtained from a human intervention study we show how partial correlation based association networks are influenced by correlated measurement error. We show how the effect of correlated measurement error on partial correlations is different for direct and indirect associations. For direct associations the correlated measurement error usually has no negative effect on the results, while for indirect associations, depending on the relative size of the correlated measurement error, results can become unreliable. The aim of this paper is to generate awareness of the existence of correlated measurement errors and their influence on association networks. Time series lipidomics data is used for this purpose, as it makes it possible to visually distinguish the correlated measurement error from a biological response. Underestimating the phenomenon of correlated measurement error will result in the suggestion of biologically meaningful results that in reality rest solely on complicated error structures. Using proper experimental designs that allow
Cools, Ann M; De Wilde, Lieven; Van Tongel, Alexander; Ceyssens, Charlotte; Ryckewaert, Robin; Cambier, Dirk C
2014-10-01
Shoulder range of motion (ROM) and strength measurements are imperative in the clinical assessment of the patient's status and progression over time. The method and type of assessment vary among clinicians and institutions. No comprehensive study to date has examined the reliability of a variety of procedures based on different testing equipment and specific patient or shoulder positions. The purpose of this study was to establish absolute and relative reliability for several procedures measuring rotational shoulder ROM and isometric strength in internal (IR) and external (ER) rotation. Thirty healthy individuals (15 male, 15 female), with a mean age of 22.1 ± 1.4 years, were examined by 2 examiners who measured ROM with a goniometer and inclinometer and isometric strength with a hand-held dynamometer (HHD) in different patient and shoulder positions. Relative reliability was determined by intraclass correlation coefficients (ICC). Absolute reliability was quantified by the standard error of measurement (SEM) and minimal detectable change (MDC). Systematic differences across trials or between testers, as well as differences among similar measurements under different testing circumstances, were analyzed with dependent t tests or repeated-measures analysis of variance (for 2 or more than 2 conditions, respectively). Reliability was good to excellent for IR and ER ROM and isometric strength measurements, regardless of patient or shoulder position or equipment used (ICC, 0.85-0.99). For some of the measurements, systematic differences were found across trials or between testers. The patient's position and the equipment used resulted in different outcome measures. All procedures examined showed acceptable reliability for clinical use. However, patient position and equipment might influence the results. Copyright © 2014 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Kim, Young-Suk Grace; Schatschneider, Christopher; Wanzek, Jeanne; Gatlin, Brandy; Al Otaiba, Stephanie
2017-01-01
We examined how raters and tasks influence measurement error in writing evaluation and how many raters and tasks are needed to reach a desirable level of 0.90 and 0.80 reliabilities for children in Grades 3 and 4. A total of 211 children (102 boys) were administered three tasks in narrative and expository genres, respectively, and their written…
Naessens, James M; O'Byrne, Thomas J; Johnson, Matthew G; Vansuch, Monica B; McGlone, Corey M; Huddleston, Jeanne M
2010-08-01
To determine the inter-rater reliability of the Institute for Healthcare Improvement's Global Trigger Tool (GTT) in a practice setting, and to explore the value of individual triggers. Prospective assessment of application of the GTT to monthly random samples of hospitalized patients at four hospitals across three regions in the USA. Mayo Clinic campuses are in Minnesota, Arizona and Florida. A total of 1138 non-pediatric inpatients from all units across the hospital. The GTT was applied to randomly selected medical records, with independent assessments by two registered nurses and a physician review for confirmation. The Cohen kappa coefficient was used as a measure of inter-rater agreement, and the positive predictive value was assessed for individual triggers. Good levels of reliability were obtained between independent nurse reviewers at the case level, both for the occurrence of any trigger and for the identification of an adverse event. Nurse reviewer agreement for individual triggers was much more varied; higher agreement appears to occur for triggers that are objective and consistently recorded in selected portions of the medical record. Individual triggers also varied in their yield of detected adverse events. Cases with adverse events had significantly more triggers identified (mean 4.7) than cases with no adverse events (mean 1.8). The trigger methodology appears to be a promising approach to the measurement of patient safety. However, automated processes could make the identification of adverse events more efficient and have greater potential to improve care delivery and patient outcomes.
Impact of Measurement Error on Synchrophasor Applications
Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.; Zhao, Jiecheng; Tan, Jin; Wu, Ling; Zhan, Lingwei
2015-07-01
Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.
2012-01-01
Background: Assessment of range of motion (ROM) and muscle strength is fundamental in the clinical diagnosis of hip osteoarthritis (OA), but the reproducibility of these measurements has mostly been studied with clinicians from secondary care and has rarely been reported with agreement parameters. Therefore, the primary objective of the study was to determine the inter-rater reproducibility of ROM and muscle strength measurements. Furthermore, the reliability of the overall assessment of clinical hip OA was evaluated. Reporting is in accordance with proposed guidelines for the reporting of reliability and agreement studies (GRRAS). Methods: In a university hospital, four blinded raters independently examined patients with unilateral hip OA; two hospital orthopaedists independently examined 48 patients (24 men) and two primary care chiropractors examined 61 patients (29 men). ROM was measured in degrees (deg.) with a standard two-arm goniometer and muscle strength in Newtons (N) using a hand-held dynamometer. Reproducibility is reported as agreement and reliability between paired raters of the same profession. Agreement is reported as limits of agreement (LoA) and reliability is reported with intraclass correlation coefficients (ICC). Reliability of the overall assessment of clinical OA is reported as weighted kappa. Results: Between orthopaedists, agreement for ROM ranged from LoA [-28 to 12 deg.] for internal rotation to [-8 to 13 deg.] for extension. ICC ranged between 0.53 and 0.73, highest for flexion. For muscle strength between orthopaedists, LoA ranged from [-65 to 47 N] for external rotation to [-10 to 59 N] for flexion. ICC ranged between 0.52 and 0.85, highest for abduction. Between chiropractors, agreement for ROM ranged from LoA [-25 to 30 deg.] for internal rotation to [-13 to 21 deg.] for flexion. ICC ranged between 0.14 and 0.79, highest for flexion. For muscle strength between chiropractors, LoA ranged from [-80 to 20 N] for external rotation to [-146 to 55 N] for abduction. ICC
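The limits of agreement reported above come from the Bland-Altman method: the mean difference between paired raters plus or minus 1.96 times the standard deviation of the differences. A minimal sketch with invented paired ROM readings:

```python
import statistics

def limits_of_agreement(x, y):
    """Bland-Altman 95% limits of agreement between two raters' paired readings."""
    diffs = [a - b for a, b in zip(x, y)]
    bias = statistics.mean(diffs)      # systematic difference between raters
    sd = statistics.stdev(diffs)       # spread of the differences
    return bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired internal-rotation ROM readings (deg.) from two raters
rater1 = [30, 25, 41, 35, 28, 33, 38, 27, 31, 36]
rater2 = [36, 30, 38, 41, 35, 30, 45, 33, 29, 40]
low, high = limits_of_agreement(rater1, rater2)
print(f"LoA: [{low:.0f} to {high:.0f} deg.]")
```

The interval is centered on the bias, so a wide interval signals large rater-to-rater variability even when the bias itself is small.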
Measurement error in air pollution exposure assessment.
Navidi, W; Lurmann, F
1995-01-01
The exposure of an individual to an air pollutant can be assessed indirectly, with a "microenvironmental" approach, or directly with a personal sampler. Both methods of assessment are subject to measurement error, which can cause considerable bias in estimates of health effects. If the exposure estimates are unbiased and the measurement error is nondifferential, the bias in a linear model can be corrected when the variance of the measurement error is known. Unless the measurement error is quite large, estimates of health effects based on individual exposures appear to be more accurate than those based on ambient levels.
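The correction alluded to above is the classical attenuation (regression calibration) adjustment: with nondifferential error of known variance, the naive slope is biased toward zero by the reliability of the error-prone exposure and can be divided by that factor. A simulated sketch (all parameter values invented for illustration):

```python
import random

rng = random.Random(7)
n = 5000
beta_true, sigma_x, sigma_u = 2.0, 1.0, 0.8   # sigma_u: known measurement-error SD

x = [rng.gauss(0, sigma_x) for _ in range(n)]           # true exposure
w = [xi + rng.gauss(0, sigma_u) for xi in x]            # error-prone measurement
y = [beta_true * xi + rng.gauss(0, 1.0) for xi in x]    # health outcome

def ols_slope(u, v):
    mu, mv = sum(u) / n, sum(v) / n
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / \
           sum((a - mu) ** 2 for a in u)

mean_w = sum(w) / n
var_w = sum((a - mean_w) ** 2 for a in w) / (n - 1)
attenuation = (var_w - sigma_u ** 2) / var_w   # reliability of w, from the data
beta_naive = ols_slope(w, y)                   # biased toward zero
beta_corrected = beta_naive / attenuation      # attenuation-corrected slope
print(beta_naive, beta_corrected)
```

Here the naive slope is attenuated to roughly 60% of the true value, and dividing by the estimated reliability recovers it, exactly the linear-model correction the abstract describes.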
Assigning error to an M2 measurement
NASA Astrophysics Data System (ADS)
Ross, T. Sean
2006-02-01
The ISO 11146:1999 standard has been published for 6 years and sets forth the proper way to measure the M2 parameter. In spite of the strong experimental guidance given by this standard and the many commercial devices based upon ISO 11146, it is still the custom to quote M2 measurements without any reference to significant figures or error estimation. To the author's knowledge, no commercial M2 measurement device includes error estimation. There exists, perhaps, a false belief that M2 numbers are high precision and of insignificant error. This paradigm causes program managers and purchasers to over-specify a beam quality parameter, and researchers not to question the accuracy and precision of their M2 measurements. This paper examines the experimental sources of error in an M2 measurement, including discretization error, CCD noise, discrete filter sets, noise-equivalent-aperture estimation, laser fluctuation, and curve-fitting error. These sources of error are explained in their experimental context, and convenient formulas are given to properly estimate the error in a given M2 measurement. This work grew out of the author's inability to find error estimation or disclosure of methods in commercial beam quality measurement devices, and out of the lessons learned and concepts developed while building an ISO 11146-compliant, computer-automated M2 measurement device.
Error measuring system of rotary Inductosyn
NASA Astrophysics Data System (ADS)
Liu, Chengjun; Zou, Jibin; Fu, Xinghe
2008-10-01
The inductosyn is a kind of high-precision angle-position sensor with important applications in servo tables, precision machine tools, and other products. The precision of an inductosyn is characterized by its error, so measuring this error during production and application is an important problem. At present, the error is mainly obtained by manual measurement, with unavoidable disadvantages such as high labour intensity for the operator, easily introduced operator errors, and poor repeatability. To solve these problems, a new automatic measurement method based on a high-precision optical dividing head is put forward in this paper. An error signal is obtained by precisely processing the output signals of the inductosyn and the optical dividing head. While the inductosyn rotates continuously, its zero-position error can be measured dynamically, and zero-error curves can be output automatically. This method overcomes the measurement and calculation errors caused by human factors and makes the measuring process faster, more accurate, and more reliable. Experiment proves that the accuracy of the error measuring system is 1.1 arc-seconds (peak-to-peak).
Influence of measurement error on Maxwell's demon
NASA Astrophysics Data System (ADS)
Sørdal, Vegard; Bergli, Joakim; Galperin, Y. M.
2017-06-01
In any general cycle of measurement, feedback, and erasure, the measurement will reduce the entropy of the system when information about the state is obtained, while erasure, according to Landauer's principle, is accompanied by a corresponding increase in entropy due to the compression of logical and physical phase space. The total process can in principle be fully reversible. A measurement error reduces the information obtained and the entropy decrease in the system. The erasure still gives the same increase in entropy, so the total process is irreversible. Another consequence of measurement error is that incorrect feedback is applied, which further increases the entropy production unless the protocol is adapted to the expected error rate. We consider the effect of measurement error on a realistic single-electron box Szilard engine, and we find the optimal protocol for the cycle as a function of the desired power P and error ε.
Error margin for antenna gain measurements
NASA Technical Reports Server (NTRS)
Cable, V.
2002-01-01
The specification of measured antenna gain is incomplete without knowing the error of the measurement. Also, unless gain is measured many times for a single antenna or over many identical antennas, the uncertainty or error in a single measurement is only an estimate. In this paper, we will examine in detail a typical error budget for common antenna gain measurements. We will also compute the gain uncertainty for a specific UHF horn test that was recently performed on the Jet Propulsion Laboratory (JPL) antenna range. The paper concludes with comments on these results and how they compare with the 'unofficial' JPL range standard of +/- ?.
van de Pol, Rachel J; van Trijffel, Emiel; Lucas, Cees
2010-01-01
Question: What is the inter-rater reliability of measurements of passive physiological or accessory movements in upper extremity joints? Design: Systematic review of studies of inter-rater reliability. Participants: Individuals with and without upper extremity disorders. Outcome measures: Range of motion and end-feel, using methods feasible in clinical practice. Results: Twenty-one studies were included, of which 11 demonstrated acceptable inter-rater reliability. Two studies satisfied all criteria for internal validity while reporting almost perfect reliability. Overall, the methodological quality of the studies was poor. ICC ranged from 0.26 (95% CI -0.01 to 0.69) for measuring the physiological range of shoulder internal rotation by vision to 0.99 (95% CI 0.98 to 1.0) for the physiological range of finger and thumb flexion/extension using a goniometer. Measurements of physiological range of motion using instruments were more reliable than those using vision, and measurements of physiological range of motion were more reliable than measurements of end-feel or of accessory range of motion. Conclusion: Inter-rater reliability for the measurement of passive movements of upper extremity joints varies with the method of measurement. In order to make reliable decisions about joint restrictions in clinical practice, we recommend that clinicians measure passive physiological range of motion using goniometers or inclinometers.
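Several of the studies above report ICC for two raters. A minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, single measures) computed from ANOVA mean squares, with invented goniometer data:

```python
def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.
    `scores` is one row per subject, each row holding one score per rater."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    subj_means = [sum(row) / k for row in scores]
    rater_means = [sum(row[j] for row in scores) / n for j in range(k)]
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_rater = n * sum((m - grand) ** 2 for m in rater_means)
    ss_tot = sum((x - grand) ** 2 for row in scores for x in row)
    ss_err = ss_tot - ss_subj - ss_rater
    msr = ss_subj / (n - 1)            # between-subjects mean square
    msc = ss_rater / (k - 1)           # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1)) # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical goniometer readings (deg.) by two raters for eight subjects
data = [(42, 44), (55, 54), (38, 41), (60, 58), (47, 49), (51, 50), (35, 37), (58, 60)]
print(round(icc_2_1(data), 3))
```

From the ICC, absolute reliability follows as SEM = SD * sqrt(1 - ICC) and MDC95 = 1.96 * sqrt(2) * SEM, the quantities reported in several of the abstracts above.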
Minimizing noise-temperature measurement errors
NASA Technical Reports Server (NTRS)
Stelzried, C. T.
1992-01-01
An analysis of noise-temperature measurement errors of low-noise amplifiers was performed. Results of this analysis can be used to optimize measurement schemes for minimum errors. For the cases evaluated, the effective noise temperature (Te) of a Ka-band maser can be measured most accurately by switching between an ambient and a 2-K cooled load without an isolation attenuator. A measurement accuracy of 0.3 K was obtained for this example.
Jalan, Nikita S; Daftari, Sonam S; Retharekar, Seemi S; Rairikar, Savita A; Shyam, Ashok M; Sancheti, Parag K
2015-01-01
BACKGROUND: Measurement of maximum inspiratory pressure is the most prevalent method used in clinical practice to assess the strength of the inspiratory muscles. Although there are many devices available for the assessment of inspiratory muscle strength, there is a dearth of literature describing the reliability of devices that can be used in clinical patient assessment. The capsule-sensing pressure gauge (CSPG-V) is a new tool that measures the strength of inspiratory muscles; it is easy to use, noninvasive, inexpensive and lightweight. OBJECTIVE: To test the intra- and inter-rater reliability of a CSPG-V device in healthy adults. METHODS: A cross-sectional study involving 80 adult subjects with a mean (± SD) age of 22±3 years was performed. Using simple randomization, 40 individuals (20 male, 20 female) were used for intrarater and 40 (20 male, 20 female) were used for inter-rater reliability testing of the CSPG-V device. The subjects performed three inspiratory efforts, which were sustained for at least 3 s; the best of the three readings was used for intra- and inter-rater comparison. The intra- and inter-rater reliability were calculated using intraclass correlation coefficients (ICCs). RESULTS: The intrarater reliability ICC was 0.962 and the inter-rater reliability ICC was 0.922. CONCLUSION: Results of the present study suggest that maximum inspiratory pressure measured using a CSPG-V device has excellent intra- and inter-rater reliability, and can be used as a diagnostic and prognostic tool in patients with respiratory muscle impairment. PMID:26089737
Error latency measurements in symbolic architectures
NASA Technical Reports Server (NTRS)
Young, L. T.; Iyer, R. K.
1991-01-01
Error latency, the time that elapses between the occurrence of an error and its detection, has a significant effect on reliability. In computer systems, failure rates can be elevated during a burst of system activity due to increased detection of latent errors. A hybrid monitoring environment is developed to measure the error latency distribution of errors occurring in main memory. The objective of this study is to develop a methodology for gauging the dependability of individual data categories within a real-time application. The hybrid monitoring technique is novel in that it selects and categorizes a specific subset of the available blocks of memory to monitor. The precise times of reads and writes are collected, so no actual faults need be injected. Unlike previous monitoring studies that rely on a periodic sampling approach or on statistical approximation, this new approach permits continuous monitoring of referencing activity and precise measurement of error latency.
Prediction with measurement errors in finite populations
Singer, Julio M; Stanek, Edward J; Lencina, Viviana B; González, Luz Mery; Li, Wenjun; Martino, Silvina San
2011-01-01
We address the problem of selecting the best linear unbiased predictor (BLUP) of the latent value (e.g., fasting serum glucose level) of sample subjects with heteroskedastic measurement errors. Using a simple example, we compare the usual mixed model BLUP to a similar predictor based on a mixed model framed in a finite population (FPMM) setup with two sources of variability, the first of which corresponds to simple random sampling and the second to heteroskedastic measurement errors. Under this last approach, we show that when measurement errors are subject-specific, the BLUP shrinkage constants are based on a pooled measurement error variance, as opposed to the individual ones generally considered for the usual mixed model BLUP. In contrast, when the heteroskedastic measurement errors are measurement condition-specific, the FPMM BLUP involves different shrinkage constants. We also show that in this setup, when measurement errors are subject-specific, the usual mixed model predictor is biased but has a smaller mean squared error than the FPMM BLUP, which points to some difficulties in the interpretation of such predictors. PMID:22162621
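The contrast between shrinkage constants can be made concrete: a BLUP shrinks each observation toward the mean by lambda_i = sigma_b^2 / (sigma_b^2 + sigma_e,i^2), and the pooled-variance and subject-specific versions differ only in which error variance enters lambda_i. A toy sketch (all numbers invented; this illustrates the shrinkage mechanics, not the FPMM derivation):

```python
def blup(y, sigma_b2, sigma_e2):
    """Shrinkage predictors of subjects' latent values under a simple
    random-effects model: latent_i ~ (mu, sigma_b2), y_i = latent_i + e_i."""
    mu = sum(y) / len(y)
    return [mu + (sigma_b2 / (sigma_b2 + s2)) * (yi - mu)
            for yi, s2 in zip(y, sigma_e2)]

y = [92.0, 105.0, 88.0, 110.0]           # observed fasting glucose (mg/dL), invented
sigma_b2 = 60.0                           # between-subject variance (assumed known)
individual = [10.0, 40.0, 10.0, 40.0]     # subject-specific error variances
pooled = [25.0] * 4                       # pooled error variance for all subjects

print(blup(y, sigma_b2, individual))      # subject-specific shrinkage constants
print(blup(y, sigma_b2, pooled))          # one common shrinkage constant
```

Subjects measured more precisely are shrunk less under subject-specific constants, whereas the pooled version treats everyone alike, which is exactly the distinction the paper draws between the two BLUP constructions.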
Tuvblad, Catherine; Bezdjian, Serena; Raine, Adrian; Baker, Laura A.
2014-01-01
No study has yet examined the genetic and environmental influences on psychopathic personality across different raters and method of assessment. Participants were part of a community sample of male and female twins born between 1990 and 1995. The Child Psychopathy Scale (CPS) and the Antisocial Process Screening Device (APSD) were administered to the twins and their parents when the twins were 14 to 15 years old. The Psychopathy Checklist: Youth Version (PCL:YV) was administered and scored by trained testers. Results showed that a one-factor common pathway model was the best fit for the data. Genetic influences explained 69% of the variance in the latent psychopathic personality factor, while non-shared environmental influences explained 31%. Measurement-specific genetic effects accounted for between 9% and 35% of the total variance in each of the measures, except for PCL:YV where all genetic influences were in common with the other measures. Measure-specific non-shared environmental influences were found for all measures, explaining between 17% and 56% of the variance. These findings provide further evidence of the heritability in psychopathic personality among adolescents, although these effects vary across the way in which these traits are measured, in terms of both informant and instrument used. PMID:24796343
Measurement Error with Different Computer Vision Techniques
NASA Astrophysics Data System (ADS)
Icasio-Hernández, O.; Curiel-Razo, Y. I.; Almaraz-Cabral, C. C.; Rojas-Ramirez, S. R.; González-Barbosa, J. J.
2017-09-01
The goal of this work is to offer a comparison of measurement error for different computer vision techniques for 3D reconstruction and to allow a metrological discrimination based on our evaluation results. The present work implements four 3D reconstruction techniques: passive stereoscopy, active stereoscopy, shape from contour, and fringe profilometry, to find the measurement error and its uncertainty using different gauges. We measured several known dimensional and geometric standards. We compared the results for the techniques, average errors, standard deviations, and uncertainties, obtaining a guide for identifying the tolerances that each technique can achieve and choosing the best one.
Power Measurement Errors on a Utility Aircraft
NASA Technical Reports Server (NTRS)
Bousman, William G.
2002-01-01
Extensive flight test data obtained from two recent performance tests of a UH-60A aircraft are reviewed. A power difference is calculated from the power balance equation and is used to examine power measurement errors. It is shown that the baseline measurement errors are highly non-Gaussian in their frequency distribution and are therefore influenced by additional, unquantified variables. Linear regression is used to examine the influence of other variables, and it is shown that a substantial portion of the variance depends upon measurements of atmospheric parameters. Correcting for temperature dependence, although reducing the variance in the measurement errors, still leaves unquantified effects. Examination of the power difference over individual test runs indicates significant errors from drift, although it is unclear how these may be corrected. In an idealized case, where the drift is correctable, it is shown that the power measurement errors are significantly reduced and the error distribution is Gaussian. A new flight test program is recommended that will quantify the thermal environment for all torque measurements on the UH-60. Subsequently, the torque measurement systems will be recalibrated based on the measured thermal environment and a new power measurement assessment performed.
ERIC Educational Resources Information Center
Douglas, Scott Roy
2015-01-01
Independent confirmation that vocabulary in use unfolds across levels of performance as expected can contribute to a more complete understanding of validity in standardized English language tests. This study examined the relationship between Lexical Frequency Profiling (LFP) measures and rater judgements of test-takers' overall levels of…
Measuring Cyclic Error in Laser Heterodyne Interferometers
NASA Technical Reports Server (NTRS)
Ryan, Daniel; Abramovici, Alexander; Zhao, Feng; Dekens, Frank; An, Xin; Azizi, Alireza; Chapsky, Jacob; Halverson, Peter
2010-01-01
An improved method and apparatus have been devised for measuring cyclic errors in the readouts of laser heterodyne interferometers that are configured and operated as displacement gauges. The cyclic errors arise as a consequence of mixing of spurious optical and electrical signals in beam launchers that are subsystems of such interferometers. The conventional approach to measurement of cyclic error involves phase measurements and yields values precise to within about 10 pm over air optical paths at laser wavelengths in the visible and near infrared. The present approach, which involves amplitude measurements instead of phase measurements, yields values precise to about 0.1 pm, about 100 times the precision of the conventional approach. In a displacement gauge of the type of interest here, the laser heterodyne interferometer is used to measure any change in distance along an optical axis between two corner-cube retroreflectors. One of the corner-cube retroreflectors is mounted on a piezoelectric transducer (see figure), which is used to introduce a low-frequency periodic displacement that can be measured by the gauges. The transducer is excited at a frequency of 9 Hz by a triangular waveform to generate a 9-Hz triangular-wave displacement having an amplitude of 25 microns. The displacement gives rise to both amplitude and phase modulation of the heterodyne signals in the gauges. The modulation includes cyclic error components, and the magnitude of the cyclic-error component of the phase modulation is what one needs to measure in order to determine the magnitude of the cyclic displacement error. The precision attainable in the conventional (phase measurement) approach to measuring cyclic error is limited because the phase measurements are af-
Gear Transmission Error Measurement System Made Operational
NASA Technical Reports Server (NTRS)
Oswald, Fred B.
2002-01-01
A system directly measuring the transmission error between meshing spur or helical gears was installed at the NASA Glenn Research Center and made operational in August 2001. This system employs light beams directed by lenses and prisms through gratings mounted on the two gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. The device is capable of resolution better than 0.1 μm (one thousandth the thickness of a human hair). The measured transmission error can be displayed in a "map" that shows how the transmission error varies with the gear rotation, or it can be converted to spectra to show the components at the meshing frequencies. Accurate transmission error data will help researchers better understand the mechanisms that cause gear noise and vibration and will lead to improved gear designs. The Design Unit at the University of Newcastle in England specifically designed the new system for NASA. It is the only device in the United States that can measure dynamic transmission error at high rotational speeds. The new system will be used to develop new techniques to reduce dynamic transmission error along with the resulting noise and vibration of aeronautical transmissions.
Errors of measurement by laser goniometer
NASA Astrophysics Data System (ADS)
Agapov, Mikhail Y.; Bournashev, Milhail N.
2000-11-01
The report addresses systematic errors of angle measurement by a dynamic laser goniometer (DLG) based on a ring laser (RL), intended for certification of optical angle encoders (OEs), and develops methods for separating errors of different types and compensating for them algorithmically. The OE was an absolute photoelectric angle encoder with an informational capacity of 14 bits. Kinematic connection with the rotary platform was made through a mechanical connection unit (CU). The systematic error was measured and separated into components by a cross-calibration method, with mutual rotations of the OE relative to the DLG base and of the CU relative to the OE rotor, followed by Fourier analysis of the observed data. Dynamic errors of angle measurement were studied using the dependence, on the angular rate of rotation, of the measured angle between a reference direction assigned by an interference null-indicator (NI) with an 8-faced optical polygon (OP) and the direction defined by the OE. The obtained results allow algorithmic compensation of the systematic error and thereby a considerable reduction of the total measurement error.
Guo, Tong; Xiang, Yu-Tao; Xiao, Le; Hu, Chang-Qing; Chiu, Helen F K; Ungvari, Gabor S; Correll, Christoph U; Lai, Kelly Y C; Feng, Lei; Geng, Ying; Feng, Yuan; Wang, Gang
2015-10-01
The authors compared measurement-based care with standard treatment in major depression. Outpatients with moderate to severe major depression were consecutively randomized to 24 weeks of either measurement-based care (guideline- and rating scale-based decisions; N=61), or standard treatment (clinicians' choice decisions; N=59). Pharmacotherapy was restricted to paroxetine (20-60 mg/day) or mirtazapine (15-45 mg/day) in both groups. Depressive symptoms were measured with the Hamilton Depression Rating Scale (HAM-D) and the Quick Inventory of Depressive Symptomatology-Self-Report (QIDS-SR). Time to response (a decrease of at least 50% in HAM-D score) and remission (a HAM-D score of 7 or less) were the primary endpoints. Outcomes were evaluated by raters blind to study protocol and treatment. Significantly more patients in the measurement-based care group than in the standard treatment group achieved response (86.9% compared with 62.7%) and remission (73.8% compared with 28.8%). Similarly, time to response and remission were significantly shorter with measurement-based care (for response, 5.6 weeks compared with 11.6 weeks, and for remission, 10.2 weeks compared with 19.2 weeks). HAM-D scores decreased significantly in both groups, but the reduction was significantly larger for the measurement-based care group (-17.8 compared with -13.6). The measurement-based care group had significantly more treatment adjustments (44 compared with 23) and higher antidepressant dosages from week 2 to week 24. Rates of study discontinuation, adverse effects, and concomitant medications did not differ between groups. The results demonstrate the feasibility and effectiveness of measurement-based care for outpatients with moderate to severe major depression, suggesting that this approach can be incorporated in the clinical care of patients with major depression.
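The response and remission endpoints quoted above are plain score thresholds, so they reduce to a two-line classifier. A minimal sketch (helper name is mine, not the study's):

```python
def ham_d_outcomes(baseline, current):
    """Classify outcome from HAM-D scores using the abstract's endpoint
    definitions: response is a decrease of at least 50% from baseline;
    remission is a current score of 7 or less."""
    response = (baseline - current) >= 0.5 * baseline
    remission = current <= 7
    return response, remission
```

For example, a patient dropping from 24 to 11 has responded (a 54% decrease) but is not yet in remission.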
Measurement process error determination and control
Everhart, J.
1992-01-01
Traditional production processes have required repeated inspection activities to assure product quality. A typical production process follows this pattern: production makes the product; production inspects the product; Quality Control (QC) inspects the product to ensure production inspected properly; QC then inspects the product on a different gage to verify the performance of the production gage; and QC often inspects on a different day to determine environmental effects. All of these costly inspection activities are due to a lack of confidence in the initial production measurement. The Process Measurement Assurance Program (PMAP) is a method of determining and controlling measurement error in design, development, and production. It is a preventive rather than an appraisal method that determines, improves, and controls the error in the measurement process, including measurement equipment, environment, procedure, and personnel. PMAP expands the concept of the Measurement Assurance Program developed in the 1960s by the National Bureau of Standards (NBS), today known as the National Institute of Standards and Technology (NIST). PMAP bridges the gap between the metrology laboratory and the production environment by introducing standards (or certified parts) into the production process. These certified control standards are then measured as part of the production process. A control system examines the measurement results of the control standards before, during, and after the manufacturing and measuring of the product. The results of the PMAP control charts determine the random uncertainty and the systematic error (bias from the standard) of the measurement process. The combination of these uncertainties determines the margin of error of the measurement process. The total measurement process error is determined by combining the margin of error and the uncertainty in the control standard.
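The abstract does not give PMAP's combination formulas, so the sketch below uses one common convention as an illustration only: random uncertainty expanded by a coverage factor, bias added linearly to form the margin of error, and the control standard's uncertainty combined in quadrature. Function names and the choice of k=2 are assumptions, not from the source.

```python
import math

def margin_of_error(random_u, systematic_bias, k=2):
    """Margin of error of the measurement process: expanded random
    uncertainty (coverage factor k) plus the magnitude of the observed
    bias from the control standard. One common convention; PMAP's exact
    rule is not stated in the abstract."""
    return k * random_u + abs(systematic_bias)

def total_process_error(margin, standard_u, k=2):
    """Combine the margin of error with the control standard's expanded
    uncertainty in quadrature to estimate total measurement process error."""
    return math.sqrt(margin ** 2 + (k * standard_u) ** 2)
```

With a random uncertainty of 0.5 units, an observed bias of 0.2, and a standard uncertainty of 0.45 in the control standard, this gives a margin of error of 1.2 and a total process error of 1.5.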
Measurement error analysis of taxi meter
NASA Astrophysics Data System (ADS)
He, Hong; Li, Dan; Li, Hang; Zhang, Da-Jian; Hou, Ming-Feng; Zhang, Shi-pu
2011-12-01
The error test of the taximeter covers two aspects: (1) the time error of the taximeter and (2) the distance error in use. The paper first describes the working principle of the meter and the principle of the error verification device. Based on JJG 517-2009, "Taximeter Verification Regulation", the paper analyzes the instrument error and test error of the taxi meter, and discusses the detection methods for time error and distance error. Standard uncertainty components are evaluated from repeated measurements under identical conditions (Type A) and under varying conditions (Type B). Comparison and analysis of the results show that the meter conforms to JJG 517-2009, which improves accuracy and efficiency considerably. In practice the meter not only compensates for the lack of accuracy but also keeps the transaction between drivers and passengers fair, adding to the value of the taxi as a means of transportation.
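The Type A evaluation mentioned above is the standard statistical one: the standard uncertainty of the mean of n repeated readings taken under the same conditions is the sample standard deviation divided by the square root of n. A minimal sketch:

```python
import math
import statistics

def type_a_uncertainty(readings):
    """Type A standard uncertainty of the mean from repeated readings
    under the same conditions: s / sqrt(n)."""
    return statistics.stdev(readings) / math.sqrt(len(readings))
```

For example, five repeated timer readings of 10.1, 9.9, 10.0, 10.2, and 9.8 s give a standard uncertainty of about 0.071 s.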
ERIC Educational Resources Information Center
Kahraman, Nilufer; Brown, Crystal B.
2015-01-01
Psychometric models based on structural equation modeling framework are commonly used in many multiple-choice test settings to assess measurement invariance of test items across examinee subpopulations. The premise of the current article is that they may also be useful in the context of performance assessment tests to test measurement invariance…
Intertester agreement in refractive error measurements.
Huang, Jiayan; Maguire, Maureen G; Ciner, Elise; Kulp, Marjean T; Quinn, Graham E; Orel-Bixler, Deborah; Cyert, Lynn A; Moore, Bruce; Ying, Gui-Shuang
2013-10-01
To determine the intertester agreement of refractive error measurements between lay and nurse screeners using the Retinomax Autorefractor and the SureSight Vision Screener. Trained lay and nurse screeners measured refractive error in 1452 preschoolers (3 to 5 years old) using the Retinomax and the SureSight in a random order for screeners and instruments. Intertester agreement between lay and nurse screeners was assessed for sphere, cylinder, and spherical equivalent (SE) using the mean difference and the 95% limits of agreement. The mean intertester difference (lay minus nurse) was compared between groups defined based on the child's age, cycloplegic refractive error, and the reading's confidence number using analysis of variance. The limits of agreement were compared between groups using the Brown-Forsythe test. Intereye correlation was accounted for in all analyses. The mean intertester differences (95% limits of agreement) were -0.04 (-1.63, 1.54) diopter (D) sphere, 0.00 (-0.52, 0.51) D cylinder, and -0.04 (-1.65, 1.56) D SE for the Retinomax and 0.05 (-1.48, 1.58) D sphere, 0.01 (-0.58, 0.60) D cylinder, and 0.06 (-1.45, 1.57) D SE for the SureSight. For either instrument, the mean intertester differences in sphere and SE did not differ by the child's age, cycloplegic refractive error, or the reading's confidence number. However, for both instruments, the limits of agreement were wider when eyes had significant refractive error or the reading's confidence number was below the manufacturer's recommended value. Among Head Start preschool children, trained lay and nurse screeners agree well in measuring refractive error using the Retinomax or the SureSight. Both instruments had similar intertester agreement in refractive error measurements independent of the child's age. Significant refractive error and a reading with a low confidence number were associated with worse intertester agreement.
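The mean difference and 95% limits of agreement reported above are the standard Bland-Altman quantities: the mean of the paired differences, plus and minus 1.96 standard deviations of those differences. A minimal sketch (it does not model the intereye correlation the study accounted for):

```python
import statistics

def bland_altman(tester1, tester2):
    """Mean intertester difference and 95% limits of agreement
    (mean difference +/- 1.96 SD of the differences)."""
    diffs = [a - b for a, b in zip(tester1, tester2)]
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)
    return mean_d, (mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d)
```

Narrow limits around a mean difference near zero, as in the sphere and SE results above, indicate good intertester agreement.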
ERIC Educational Resources Information Center
Breyer, F. Jay; Attali, Yigal; Williamson, David M.; Ridolfi-McCulla, Laura; Ramineni, Chaitanya; Duchnowski, Matthew; Harris, April
2014-01-01
In this research, we investigated the feasibility of implementing the "e-rater"® scoring engine as a check score in place of all-human scoring for the "Graduate Record Examinations"® ("GRE"®) revised General Test (rGRE) Analytical Writing measure. This report provides the scientific basis for the use of e-rater as a…
Technical approaches for measurement of human errors
NASA Technical Reports Server (NTRS)
Clement, W. F.; Heffley, R. K.; Jewell, W. F.; Mcruer, D. T.
1980-01-01
Human error is a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents. The technical details of a variety of proven approaches for the measurement of human errors in the context of the national airspace system are presented. Unobtrusive measurements suitable for cockpit operations and procedures in part- or full-mission simulation are emphasized. Procedure, system performance, and human operator centered measurements are discussed as they apply to the manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations.
Honing in on the Social Phenotype in Williams Syndrome Using Multiple Measures and Multiple Raters
ERIC Educational Resources Information Center
Klein-Tasman, Bonita P.; Li-Barber, Kirsten T.; Magargee, Erin T.
2011-01-01
The behavioral phenotype of Williams syndrome (WS) is characterized by difficulties with establishment and maintenance of friendships despite high levels of interest in social interaction. Here, parents and teachers rated 84 children with WS ages 4-16 years using two commonly-used measures assessing aspects of social functioning: the Social Skills…
Comparison of Models and Indices for Detecting Rater Centrality.
Wolfe, Edward W; Song, Tian
2015-01-01
To date, much of the research concerning rater effects has focused on rater severity/leniency. Consequently, other potentially important rater effects have largely been ignored by those conducting operational scoring projects. This simulation study compares four rater centrality indices (rater fit, residual-expected correlations, rater slope, and rater threshold variance) in terms of their Type I and Type II error rates under varying levels of centrality magnitude, centrality pervasiveness, and rating scale construction when each of four latent trait models is fitted to the simulated data (Rasch rating scale and partial credit models and the generalized rating scale and partial credit models). Results indicate that the residual-expected correlation may be most appropriately sensitive to rater centrality under most conditions.
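Of the four indices compared, the residual-expected correlation is the easiest to state: correlate a rater's residuals (observed minus model-expected scores) with the expected scores. A rater who compresses ratings toward the scale midpoint over-rates low examinees and under-rates high ones, so this correlation turns strongly negative. A minimal sketch (helper names are mine, not the authors'):

```python
def pearson(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def residual_expected_correlation(observed, expected):
    """Centrality index: markedly negative values flag a rater whose
    scores are pulled toward the middle of the scale."""
    residuals = [o - e for o, e in zip(observed, expected)]
    return pearson(residuals, expected)
```

A rater who maps expected scores 1..5 onto 2..4 (halving the spread) produces residuals that decrease perfectly linearly with the expected score, so the index is exactly -1.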
Neutron multiplication error in TRU waste measurements
Veilleux, John; Stanfield, Sean B; Wachter, Joe; Ceo, Bob
2009-01-01
Total Measurement Uncertainty (TMU) in neutron assays of transuranic waste (TRU) are comprised of several components including counting statistics, matrix and source distribution, calibration inaccuracy, background effects, and neutron multiplication error. While a minor component for low plutonium masses, neutron multiplication error is often the major contributor to the TMU for items containing more than 140 g of weapons grade plutonium. Neutron multiplication arises when neutrons from spontaneous fission and other nuclear events induce fissions in other fissile isotopes in the waste, thereby multiplying the overall coincidence neutron response in passive neutron measurements. Since passive neutron counters cannot differentiate between spontaneous and induced fission neutrons, multiplication can lead to positive bias in the measurements. Although neutron multiplication can only result in a positive bias, it has, for the purpose of mathematical simplicity, generally been treated as an error that can lead to either a positive or negative result in the TMU. While the factors that contribute to neutron multiplication include the total mass of fissile nuclides, the presence of moderating material in the matrix, the concentration and geometry of the fissile sources, and other factors; measurement uncertainty is generally determined as a function of the fissile mass in most TMU software calculations because this is the only quantity determined by the passive neutron measurement. Neutron multiplication error has a particularly pernicious consequence for TRU waste analysis because the measured Fissile Gram Equivalent (FGE) plus twice the TMU error must be less than 200 for TRU waste packaged in 55-gal drums and less than 325 for boxed waste. For this reason, large errors due to neutron multiplication can lead to increased rejections of TRU waste containers. This report will attempt to better define the error term due to neutron multiplication and arrive at values that are
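The acceptance criterion described in the abstract is a simple inequality, sketched here (function and parameter names are mine):

```python
def tru_waste_acceptable(fge, tmu, container="drum"):
    """Check the criterion from the abstract: measured Fissile Gram
    Equivalent plus twice the TMU must be less than 200 FGE for TRU waste
    in 55-gal drums, or less than 325 FGE for boxed waste."""
    limit = 200 if container == "drum" else 325
    return fge + 2 * tmu < limit
```

This makes the abstract's point concrete: a drum measuring 140 FGE passes with a TMU of 25 (190 < 200) but fails if neutron multiplication inflates the TMU to 35 (210 > 200).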
Measurement System Characterization in the Presence of Measurement Errors
NASA Technical Reports Server (NTRS)
Commo, Sean A.
2012-01-01
In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.
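The abstract names the modified least squares method but does not give its formulas. The classical estimator built on the same ingredient, a known variance ratio between response error and measurement (factor) error, is Deming regression, sketched below as an illustration of the idea rather than the author's method:

```python
import math

def deming_slope(x, y, lam=1.0):
    """Deming regression slope for errors-in-variables data.

    lam is the variance ratio (response error variance divided by
    measurement error variance), the same quantity the abstract's
    modified least squares incorporates. lam -> infinity recovers
    ordinary least squares; lam = 1 gives orthogonal regression.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    d = syy - lam * sxx
    return (d + math.sqrt(d ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
```

Unlike ordinary least squares, which attributes all scatter to the response, this estimator splits it between factor and response according to lam.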
Honing in on the Social Phenotype in Williams Syndrome Using Multiple Measures and Multiple Raters
Klein-Tasman, Bonita P.; Li-Barber, Kirsten T.; Magargee, Erin T.
2010-01-01
The behavioral phenotype of Williams syndrome (WS) is characterized by difficulties with establishment and maintenance of friendships despite high levels of interest in social interaction. Here, parents and teachers rated 84 children with WS ages 4–16 years using two commonly-used measures assessing aspects of social functioning: the Social Skills Rating System and the Social Responsiveness Scale. Mean prosocial functioning fell in the low average to average range, whereas social reciprocity was perceived to be an area of significant difficulty for many children. Concordance between parent and teacher ratings was high. Patterns of social functioning are discussed. Findings highlight the importance of parsing the construct of social skills to gain a nuanced understanding of the social phenotype in WS. PMID:20614173
Awatani, Takenori; Morikita, Ikuhiro; Shinohara, Junji; Mori, Seigo; Nariai, Miki; Tatsumi, Yasutaka; Nagata, Akinori; Koshiba, Hiroya
2016-11-01
[Purpose] The purpose of the present study was to establish the intra- and inter-rater reliability of measurement of extensor strength in the maximum shoulder abducted position and internal rotator strength in the 90° abducted and 90° externally rotated position using a hand-held dynamometer. [Subjects and Methods] Twelve healthy volunteers (12 male; mean ± SD: age 19.0 ± 1.1 years) participated in the study. The examiners were two students who had no clinical experience with hand-held dynamometer measurement. The examiners and participants were blinded to the measurement results by the recorder. Participants in the prone position were instructed to hold the contraction against the ground reaction force, and peak isometric force was recorded using the hand-held dynamometer on the floor. Reliability was determined using intraclass correlation coefficients. [Results] The intra- and inter-rater reliability data were found to be "almost perfect". [Conclusion] This study investigated intra- and inter-rater reliability and revealed high reliability. Thus, the measurement method used in the present study can evaluate muscle strength with a simple measurement technique.
Multiple Indicators, Multiple Causes Measurement Error Models
Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.; Carroll, Raymond J.
2014-01-01
Multiple Indicators, Multiple Causes Models (MIMIC) are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times however when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this paper are: (1) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model, (2) to develop likelihood based estimation methods for the MIMIC ME model, (3) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. As a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure. PMID:24962535
Multiple indicators, multiple causes measurement error models.
Tekwe, Carmen D; Carter, Randy L; Cullings, Harry M; Carroll, Raymond J
2014-11-10
Multiple indicators, multiple causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this paper are as follows: (i) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model; (ii) to develop likelihood-based estimation methods for the MIMIC ME model; and (iii) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. As a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure. Copyright © 2014 John Wiley & Sons, Ltd.
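The Berkson/classical distinction the MIMIC ME model accommodates is a direction-of-noise distinction, and its best-known consequence is easy to simulate: classical error (observed = true + noise) inflates the variance of what you observe, while Berkson error (true = assigned + noise) leaves the variance of the assigned values untouched. A small illustration on synthetic data (nothing here comes from the survivor cohort):

```python
import random
import statistics

random.seed(0)

# Synthetic "true" doses, arbitrary scale
true_doses = [random.uniform(0.0, 10.0) for _ in range(5000)]

# Classical error: W = X + U, with U independent of the TRUE value X,
# so Var(W) = Var(X) + Var(U) -- the observed variance inflates.
observed = [x + random.gauss(0.0, 1.0) for x in true_doses]

# Berkson error: X = W + U, with U independent of the ASSIGNED value W
# (e.g. a dose assigned from location), so Var(W) is not inflated.
assigned = list(true_doses)
berkson_true = [w + random.gauss(0.0, 1.0) for w in assigned]

var_true = statistics.variance(true_doses)
var_observed = statistics.variance(observed)
```

The two mechanisms bias naive regression estimates differently, which is why a model allowing both, as the MIMIC ME model does, is needed when the dose estimates carry both kinds of error.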
New Gear Transmission Error Measurement System Designed
NASA Technical Reports Server (NTRS)
Oswald, Fred B.
2001-01-01
The prime source of vibration and noise in a gear system is the transmission error between the meshing gears. Transmission error is caused by manufacturing inaccuracy, mounting errors, and elastic deflections under load. Gear designers often attempt to compensate for transmission error by modifying gear teeth. This is done traditionally by a rough "rule of thumb" or more recently under the guidance of an analytical code. In order for a designer to have confidence in a code, the code must be validated through experiment. NASA Glenn Research Center contracted with the Design Unit of the University of Newcastle in England for a system to measure the transmission error of spur and helical test gears in the NASA Gear Noise Rig. The new system measures transmission error optically by means of light beams directed by lenses and prisms through gratings mounted on the gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. A photodetector circuit converts the light to an analog electrical signal. To increase accuracy and reduce "noise" due to transverse vibration, there are parallel light paths at the top and bottom of the gears. The two signals are subtracted via differential amplifiers in the electronics package. The output of the system is 40 mV/μm, giving a resolution in the time domain of better than 0.1 μm, and discrimination in the frequency domain of better than 0.01 μm. The new system will be used to validate gear analytical codes and to investigate mechanisms that produce vibration and noise in parallel axis gears.
Algorithmic Error Correction of Impedance Measuring Sensors
Starostenko, Oleg; Alarcon-Aquino, Vicente; Hernandez, Wilmar; Sergiyenko, Oleg; Tyrsa, Vira
2009-01-01
This paper describes novel design concepts and some advanced techniques proposed for increasing the accuracy of low-cost impedance measuring devices without reduction of operational speed. The proposed structural method for algorithmic error correction and the iterative correction method provide linearization of the transfer functions of the measuring sensor and the signal conditioning converter, which contribute the principal additive and relative measurement errors. Several measuring systems have been implemented in order to estimate the performance of the proposed methods in practice. In particular, a measuring system for analysis of C-V and G-V characteristics has been designed and constructed, and it has been tested during technological process control of charge-coupled device (CCD) manufacturing. The obtained results are discussed in order to define a reasonable range of application for the proposed methods, their utility, and their performance. PMID:22303177
Improving Localization Accuracy: Successive Measurements Error Modeling
Abu Ali, Najah; Abu-Elkheir, Mervat
2015-01-01
Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself, and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling of positioning error. We use the Yule-Walker equations to determine the degree of correlation between a vehicle's future position and its past positions, and then propose a p-order Gauss–Markov model to predict the future position of a vehicle from its past p positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can have a value up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss–Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle's future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter. PMID:26140345
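For the first-order case the paper finds sufficient, the Yule-Walker system collapses to a single equation: the AR(1) coefficient equals the lag-1 autocorrelation, and one-step prediction follows directly. A minimal sketch (function names are mine, not the paper's):

```python
def ar1_coefficient(series):
    """Lag-1 autocorrelation, which by the Yule-Walker equation is the
    coefficient of a first-order (Gauss-Markov) autoregressive model."""
    n = len(series)
    mean = sum(series) / n
    c0 = sum((s - mean) ** 2 for s in series) / n
    c1 = sum((series[t] - mean) * (series[t + 1] - mean)
             for t in range(n - 1)) / n
    return c1 / c0

def predict_next(series):
    """One-step-ahead prediction with a first-order Gauss-Markov model:
    regress the next deviation from the mean on the current one."""
    phi = ar1_coefficient(series)
    mean = sum(series) / len(series)
    return mean + phi * (series[-1] - mean)
```

A general p-order model would solve the full p-by-p Yule-Walker system for the coefficients; the paper's result is that this first-order special case already tracks a vehicle's position well.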
BurnCase 3D software validation study: Burn size measurement accuracy and inter-rater reliability.
Parvizi, Daryousch; Giretzlehner, Michael; Wurzer, Paul; Klein, Limor Dinur; Shoham, Yaron; Bohanon, Fredrick J; Haller, Herbert L; Tuca, Alexandru; Branski, Ludwik K; Lumenta, David B; Herndon, David N; Kamolz, Lars-P
2016-03-01
The aim of this study was to compare the accuracy of burn size estimation using the computer-assisted software BurnCase 3D (RISC Software GmbH, Hagenberg, Austria) with that using a 2D scan, considered to be the actual burn size. Thirty artificial burn areas were preplanned and prepared on three mannequins (one child, one female, and one male). Five trained physicians (raters) were asked to assess the size of all wound areas using BurnCase 3D software. The results were then compared with the real wound areas, as determined by 2D planimetry imaging. To examine inter-rater reliability, we performed an intraclass correlation analysis with a 95% confidence interval. The mean wound area estimations of the five raters using BurnCase 3D were in total 20.7±0.9% for the child, 27.2±1.5% for the female and 16.5±0.1% for the male mannequin. Our analysis showed relative overestimations of 0.4%, 2.8% and 1.5% for the child, female and male mannequins respectively, compared to the 2D scan. The intraclass correlation between the single raters for the mean percentage of the artificial burn areas was 98.6%. There was also a high intraclass correlation between the single raters and the 2D scan. BurnCase 3D is a valid and reliable tool for the determination of total body surface area burned in standard models. Further clinical studies including different pediatric and overweight adult mannequins are warranted. Copyright © 2016 Elsevier Ltd and ISBI. All rights reserved.
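The abstract reports intraclass correlations without stating which form was used. A common choice for a fixed panel of raters all scoring the same targets is the two-way random-effects, absolute-agreement ICC(2,1), computed from the ANOVA mean squares; the sketch below is that standard formula, not necessarily the study's exact variant:

```python
def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: one list per subject, each containing one score per rater.
    """
    n = len(ratings)          # subjects (targets)
    k = len(ratings[0])       # raters
    grand = sum(sum(r) for r in ratings) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    col_means = [sum(r[j] for r in ratings) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # between-subjects mean square
    msc = ss_cols / (k - 1)                 # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Perfect agreement yields 1.0; values near 0.99, as reported above, indicate the raters are nearly interchangeable for burn size estimation.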
Connors, Brenda L.; Rende, Richard; Colton, Timothy J.
2014-01-01
The unique yield of collecting observational data on human movement has received increasing attention in a number of domains, including the study of decision-making style. As such, interest has grown in the nuances of core methodological issues, including the best ways of assessing inter-rater reliability. In this paper we focus on one key topic – the distinction between establishing reliability for the patterning of behaviors as opposed to the computation of raw counts – and suggest that reliability for each be compared empirically rather than determined a priori. We illustrate by assessing inter-rater reliability for key outcome measures derived from movement pattern analysis (MPA), an observational methodology that records body movements as indicators of decision-making style with demonstrated predictive validity. While reliability ranged from moderate to good for raw counts of behaviors reflecting each of two Overall Factors generated within MPA (Assertion and Perspective), inter-rater reliability for patterning (proportional indicators of each factor) was significantly higher and excellent (ICC = 0.89). Furthermore, patterning, as compared to raw counts, provided better prediction of observable decision-making process assessed in the laboratory. These analyses support the utility of an empirical approach to choosing between patterning and discrete behavioral counts when determining the inter-rater reliability of observable behavior. They also speak to the substantial reliability that may be achieved via application of theoretically grounded observational systems such as MPA that reveal thinking and action motivations via visible movement patterns. PMID:24999336
Target detection using image motion error measure
NASA Astrophysics Data System (ADS)
Wayman, James L.; Libert, John M.; Tsao, Thomas R.
1993-09-01
The spatio-temporal constraint equation for computation of the optical flow holds only over local spatio-temporal regions where motion is translational with constant velocity or over non-moving background regions where the image velocity is zero. Where the expression holds, it is possible to estimate the image motion vector of local image regions by minimizing the squared error over the local region. The expression does not hold at the boundaries of objects moving over a stationary background, over regions with multiple moving objects, or over objects not in purely translational motion. Under these conditions, the accurate computation of motion vectors is not possible using this method. However, the squared-error term itself may be used as a moving-target indicator able to segment moving targets from noise and background clutter. This paper proposes and assesses the feasibility of using the error measure to detect moving boundaries in high-noise images. We assess the performance of this error-squared measure in localizing object motion in high-noise environments for two filtering functions G(x,y): the Gaussian function and the Gabor function.
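The least-squares estimate and its squared-error residual described above can be sketched as follows. This is a minimal illustration with synthetic gradients, not the authors' detector; the function name, window size, and test values are invented for the example.

```python
import numpy as np

def local_flow_and_residual(Ix, Iy, It):
    """Least-squares solution of Ix*u + Iy*v + It = 0 over a local window;
    the squared-error residual doubles as a moving-target indicator."""
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)   # (n_pixels, 2)
    b = -It.ravel()
    v, residual, rank, _ = np.linalg.lstsq(A, b, rcond=None)
    err = float(residual[0]) if residual.size else float(np.sum((A @ v - b) ** 2))
    return v, err

# Synthetic 5x5 patch whose gradients are exactly consistent with (u, v) = (1, 2).
rng = np.random.default_rng(1)
Ix = rng.normal(size=(5, 5))
Iy = rng.normal(size=(5, 5))
It = -(1.0 * Ix + 2.0 * Iy)      # brightness-constancy constraint holds exactly
v, err = local_flow_and_residual(Ix, Iy, It)
print(v, err)    # v close to [1, 2]; err near zero where the constraint holds
```

Where the constant-velocity assumption is violated (motion boundaries, multiple movers), no (u, v) satisfies all the pixel constraints and `err` grows, which is exactly the property the paper exploits for detection.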
Application of Uniform Measurement Error Distribution
2016-03-18
Ghazarians, Alan; Jackson, Dennis
Keywords: Probability of False Accept (PFA), Probability of False Reject (PFR). (Report documentation page; abstract not included in this record.)
Schless, Simon-Henri; Desloovere, Kaat; Aertbeliën, Erwin; Molenaers, Guy; Huenaerts, Catherine; Bar-On, Lynn
2015-01-01
Aim Despite the impact of spasticity, there is a lack of objective, clinically reliable and valid tools for its assessment. This study aims to evaluate the reliability of various performance- and spasticity-related parameters collected with a manually controlled instrumented spasticity assessment in four lower limb muscles in children with cerebral palsy (CP). Method The lateral gastrocnemius, medial hamstrings, rectus femoris, and hip adductors of 12 children with spastic CP (mean age 12.8 ± 4.13 years, bilateral/unilateral involvement n=7/5) were passively stretched in the sagittal plane at incremental velocities. Muscle activity, joint motion, and torque were synchronously recorded using electromyography, inertial sensors, and a force/torque load-cell. Reliability was assessed on three levels: (1) intra-rater within session, (2) inter-rater within session, and (3) intra-rater between sessions. Results Parameters were found to be reliable in all three analyses, with 90% of intra-class correlation coefficients >0.6 and 70% of standard error of measurement values <20% of the mean values. The most reliable analysis was intra-rater within session, followed by intra-rater between sessions, and then inter-rater within session. The hip adductors evaluation had a slightly lower level of reliability than that of the other muscles. Conclusions Limited intrinsic/extrinsic errors were introduced by repeated stretch repetitions. The parameters were more reliable when the same rater, rather than different raters, performed the evaluation. Standardisation and training should be further improved to reduce extrinsic error when different raters perform the measurement. Errors were also muscle specific, or related to the measurement set-up. They need to be accounted for, in particular when assessing pre-post interventions or longitudinal follow-up. The parameters of the instrumented spasticity assessment demonstrate a wide range of applications for both research and clinical environments in the
Heid, I M; Küchenhoff, H; Miles, J; Kreienbrock, L; Wichmann, H E
2004-09-01
Measurement error in exposure assessment is unavoidable. Statistical methods to correct for such errors rely upon a valid error model, particularly regarding the classification of classical and Berkson error, the structure and the size of the error. We provide a detailed list of sources of error in residential radon exposure assessment, stressing the importance of (a) the differentiation between classical and Berkson error and (b) the clear definitions of predictors and operationally defined predictors using the example of two German case-control studies on lung cancer and residential radon exposure. We give intuitive measures of error size and present evidence on both the error size and the multiplicative structure of the error from three data sets with repeated measurements of radon concentration. We conclude that modern exposure assessment should not only aim to be as accurate and precise as possible, but should also provide a model of the remaining measurement errors with clear differentiation of classical and Berkson components.
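The classical/Berkson distinction stressed in the abstract has a concrete statistical consequence that a short simulation makes visible: classical error attenuates a regression slope, while Berkson error leaves it roughly unbiased. This is a hedged sketch with assumed parameter values, not data from the radon studies.

```python
import numpy as np

rng = np.random.default_rng(42)
n, beta = 100_000, 2.0

# Classical error: observed = truth + noise. The regression slope is attenuated
# by var(truth) / (var(truth) + var(noise)) = 0.5 with these variances.
x_true = rng.normal(size=n)
y = beta * x_true + rng.normal(scale=0.5, size=n)
x_classical = x_true + rng.normal(scale=1.0, size=n)
slope_classical = np.polyfit(x_classical, y, 1)[0]

# Berkson error: truth = assigned + noise (e.g. an area-average radon value
# assigned to each home). The slope on the assigned value stays ~unbiased.
x_assigned = rng.normal(size=n)
x_true_b = x_assigned + rng.normal(scale=1.0, size=n)
y_b = beta * x_true_b + rng.normal(scale=0.5, size=n)
slope_berkson = np.polyfit(x_assigned, y_b, 1)[0]

print(slope_classical, slope_berkson)   # ~1.0 (attenuated) vs ~2.0 (unbiased)
```

This is why a valid error model must classify each error source before any correction method is applied: the two error types call for different adjustments.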
Sonic Anemometer Vertical Wind Speed Measurement Errors
NASA Astrophysics Data System (ADS)
Kochendorfer, J.; Horst, T. W.; Frank, J. M.; Massman, W. J.; Meyers, T. P.
2014-12-01
In eddy covariance studies, errors in the measured vertical wind speed cause errors of a similar magnitude in the vertical fluxes of energy and mass. Several recent studies on the accuracy of sonic anemometer measurements indicate that non-orthogonal sonic anemometers used in eddy covariance studies underestimate the vertical wind speed. It has been suggested that this underestimation is caused by flow distortion from the interference of the structure of the anemometer itself on the flow. When oriented ideally with respect to the horizontal wind direction, orthogonal sonic anemometers that measure the vertical wind speed with a single vertically-oriented acoustic path may measure the vertical wind speed more accurately in typical surface-layer conditions. For non-orthogonal sonic anemometers, Horst et al. (2014) proposed that transducer shadowing may be a dominant factor in sonic flow distortion. As the ratio of sonic transducer diameter to path length and the zenith angle of the three transducer paths decrease, the effects of transducer shadowing on measurements of vertical velocity will decrease. An overview of this research and some of the methods available to correct historical data will be presented.
Relationships of Measurement Error and Prediction Error in Observed-Score Regression
ERIC Educational Resources Information Center
Moses, Tim
2012-01-01
The focus of this paper is assessing the impact of measurement errors on the prediction error of an observed-score regression. Measures are presented and described for decomposing the linear regression's prediction error variance into parts attributable to the true score variance and the error variances of the dependent variable and the predictor…
Errors Associated With Measurements from Imaging Probes
NASA Astrophysics Data System (ADS)
Heymsfield, A.; Bansemer, A.
2015-12-01
Imaging probes, which collect data on particles from about 20 or 50 microns to several centimeters, have been gathering data on droplet and ice microphysics for more than 40 years. During that period, a number of problems associated with the measurements have been identified, including questions about the depth of field of particles within the probes' sample volume and ice shattering, among others. Many different software packages have been developed to process and interpret the data, leading to differences in the particle size distributions and estimates of the extinction, ice water content, and radar reflectivity obtained from the same data. Given the numerous complications associated with imaging probe data, we have developed an optical array probe simulation package to explore the errors that can be expected with actual data. We simulate full particle size distributions with known properties, and then process the data with the same software that is used to process real-life data. We show that there are significant errors in the retrieved particle size distributions as well as derived parameters such as liquid/ice water content and total number concentration. Furthermore, the nature of these errors changes as a function of the shape of the simulated size distribution and the physical and electronic characteristics of the instrument. We will introduce some methods to improve the retrieval of particle size distributions from real-life data.
Systematic Errors in measurement of b1
NASA Astrophysics Data System (ADS)
Wood, S. A.
2014-10-01
A class of spin observables can be obtained from the relative difference of or asymmetry between cross sections of different spin states of beam or target particles. Such observables have the advantage that the normalization factors needed to calculate absolute cross sections from yields often divide out or cancel to a large degree in constructing asymmetries. However, normalization factors can change with time, giving different normalization factors for different target or beam spin states, leading to systematic errors in asymmetries in addition to those determined from statistics. Rapidly flipping spin orientation, such as what is routinely done with polarized beams, can significantly reduce the impact of these normalization fluctuations and drifts. Target spin orientations typically require minutes to hours to change, versus fractions of a second for beams, making systematic errors for observables based on target spin flips more difficult to control. Such systematic errors from normalization drifts are discussed in the context of the proposed measurement of the deuteron b1 structure function at Jefferson Lab.
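The normalization-cancellation argument in this abstract can be made concrete with toy numbers. All values below (cross sections, normalization, drift) are assumed for illustration and are not from the proposed b1 measurement.

```python
# Toy numbers (assumed): spin-dependent cross sections and a common normalization.
sigma_plus, sigma_minus = 1.10, 0.90
norm = 3.7                                   # unknown common normalization factor

y_plus, y_minus = norm * sigma_plus, norm * sigma_minus
A = (y_plus - y_minus) / (y_plus + y_minus)  # normalization divides out
A_true = (sigma_plus - sigma_minus) / (sigma_plus + sigma_minus)
print(A, A_true)                             # both 0.1: yields give the true asymmetry

# If the normalization drifts by 1% between slow target spin flips,
# the asymmetry acquires a false shift of order drift/2.
y_plus_drift = norm * 1.01 * sigma_plus
A_drift = (y_plus_drift - y_minus) / (y_plus_drift + y_minus)
print(A_drift - A_true)                      # ~0.005 systematic error
```

Rapid beam spin flips keep the normalization effectively identical for both states, so the cancellation is nearly exact; slow target flips leave a window for the drift term above, which is the systematic the abstract is concerned with.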
Intra-rater and inter-rater reliabilities of real-time acceleration gait analysis system.
Osaka, Hiroshi; Shinkoda, Koichi; Watanabe, Susumu; Fujita, Daisuke; Kobara, Kenichi; Yoshimura, Yosuke; Ito, Tomotaka
2016-01-01
The purposes of this study were to construct a real-time acceleration gait analysis system equipped with software to analyse real-time trunk acceleration during walking and to examine the intra-rater and inter-rater reliabilities of this system. The system comprises an accelerometer, an acceleration amplifier, a transmitter, two foot switches, a receiver, and a personal computer installed with the real-time acceleration analysis software. The acceleration signals received were analysed using the real-time acceleration analysis software, and gait parameters were calculated. The subjects were 20 healthy individuals and two raters. The intra-rater and inter-rater reliabilities of the measurement results obtained from this system were examined using intraclass correlation coefficients (ICCs) and Bland-Altman analysis. The intra-rater and inter-rater ICCs ranged from 0.61 to 0.92 across the gait parameters. In the Bland-Altman analysis, neither fixed nor proportional bias was found in any of the gait parameters. These results clearly demonstrate that the intra-rater and inter-rater measurements obtained with this system had good reproducibility. Owing to this system, we can improve the clinical efficiency of gait analysis and gait training for physiotherapy. Implications for Rehabilitation: This study focused on the advantage of a gait analysis method using an accelerometer and constructed a gait analysis system that calculates real-time gait parameters from trunk acceleration measurements during walking. The gait analysis using this system has good intra-rater and inter-rater reliabilities, and using this system can improve the clinical efficiency of gait analysis and gait training.
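Reliability figures like the ICCs reported throughout these abstracts typically come from a two-way ANOVA decomposition. A minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, single measure, in the Shrout & Fleiss taxonomy) follows; the scores are invented, not the study's data, and this need not be the exact ICC form the authors used.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    ratings: array of shape (n_subjects, k_raters)."""
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    MSR = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)   # subjects (rows)
    MSC = n * np.sum((Y.mean(axis=0) - grand) ** 2) / (k - 1)   # raters (columns)
    SSE = np.sum((Y - Y.mean(axis=1, keepdims=True)
                    - Y.mean(axis=0, keepdims=True) + grand) ** 2)
    MSE = SSE / ((n - 1) * (k - 1))
    return (MSR - MSE) / (MSR + (k - 1) * MSE + k * (MSC - MSE) / n)

# Two raters scoring five subjects with near-identical values -> ICC close to 1.
scores = np.array([[10.1, 10.0], [12.0, 12.2], [9.5, 9.4], [14.0, 13.8], [11.2, 11.1]])
icc = icc_2_1(scores)
print(round(icc, 3))   # close to 1
```

The same function applied to ratings with large between-rater disagreements would return a much lower value, which is what distinguishes the "good" (>0.9) from "moderate" (0.6 to 0.9) ranges quoted in these studies.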
Detection system for ocular refractive error measurement.
Ventura, L; de Faria e Sousa, S J; de Castro, J C
1998-05-01
An automatic and objective system for measuring ocular refractive errors (myopia, hyperopia and astigmatism) was developed. The system consists of projecting a light target (a ring), using a diode laser (lambda = 850 nm), at the fundus of the patient's eye. The light beams scattered from the retina are submitted to an optical system and are analysed with regard to their vergence by a CCD detector (matrix). This system uses the same basic principle for the projection of beams into the tested eye as some commercial refractors, but it is innovative regarding the ring-shaped measuring target for the projection system and the detection system where a matrix detector provides a wider range of measurement and a less complex system for the optical alignment. Also a dedicated electronic circuit was not necessary for treating the electronic signals from the detector (as the usual refractors do); instead a commercial frame grabber was used and software based on the heuristic search technique was developed. All the guiding equations that describe the system as well as the image processing procedure are presented in detail. Measurements in model eyes and in human eyes are in good agreement with retinoscopic measurements and they are also as precise as these kinds of measurements require (0.125D and 5 degrees).
[Therapeutic errors and dose measuring devices].
García-Tornel, S; Torrent, M L; Sentís, J; Estella, G; Estruch, M A
1982-06-01
In order to investigate the possibility of therapeutic error in syrup administration, the authors measured the capacity (mean ± SD) of 158 household spoons. They classified the spoons into four groups: group I (table spoons), 49 units (11.65 ± 2.10 cc); group II (tea spoons), 41 units (4.70 ± 1.04 cc); group III (coffee spoons), 41 units (2.60 ± 0.59 cc); and group IV (miscellaneous), 27 units. They compared the first three groups with the theoretical values of 15, 5, and 2.5 cc, respectively, finding statistically significant differences in the first group. They also analyzed the information that paediatricians receive from the drug compendia ("vademecums") they usually consult, studying two points: whether each syrup comes with a measuring device, and whether it indicates the drug concentration. Only 18% of the syrups have a measuring device, and about 88% of the drugs indicate their concentration (mg/cc). They conclude that to prevent dosage errors, the pharmaceutical industry must include measuring devices with its products. When these are absent, the safest option is to use syringes.
ERIC Educational Resources Information Center
Schuster, Christof
2004-01-01
This article presents a formula for weighted kappa in terms of rater means, rater variances, and the rater covariance that is particularly helpful in emphasizing that weighted kappa is an absolute agreement measure, in the sense that it is sensitive to differences in the raters' marginal distributions. Specifically, rater mean differences will decrease…
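For quadratic weights, the moment form of weighted kappa reduces to a compact expression in exactly the quantities the abstract names: rater means, rater variances, and the rater covariance. The sketch below assumes quadratic weights and population (n-divisor) moments; the function name is ours.

```python
import numpy as np

def quadratic_weighted_kappa(x, y):
    """Weighted kappa with quadratic weights, written via rater moments:
    kappa_w = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2).
    Uses n-divisor (population) variances and covariance."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2.0 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Perfect agreement gives 1; a constant one-category offset between the two
# raters is penalized through the mean-difference term in the denominator.
a = np.array([1, 2, 3, 4, 5])
print(quadratic_weighted_kappa(a, a))        # 1.0
print(quadratic_weighted_kappa(a, a + 1))    # 0.8
```

The second call shows the "absolute agreement" property the article emphasizes: the raters' ratings are perfectly correlated, yet kappa drops below 1 purely because their marginal means differ.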
Yan, Ying; Yi, Grace Y
2016-07-01
Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.
Measuring Systematic Error with Curve Fits
ERIC Educational Resources Information Center
Rupright, Mark E.
2011-01-01
Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…
Observer error in blood pressure measurement.
Neufeld, P D; Johnson, D L
1986-01-01
This paper describes an experiment undertaken to determine observer error in measuring blood pressure by the auscultatory method. A microcomputer was used to display a simulated mercury manometer and play back tape-recorded Korotkoff sounds synchronized with the fall of the mercury column. Each observer's readings were entered into the computer, which displayed a histogram of all readings taken up to that point and thus showed the variation among observers. The procedure, which could easily be adapted for use in teaching, was used to test 311 observers drawn from physicians, nurses, medical students, nursing students and others at nine health care institutions in Ottawa. The results showed a strong bias for even-digit readings and standard deviations of roughly 5 to 6 mm Hg. The standard deviation for the systolic readings was somewhat smaller for the physicians as a group than for the nurses (3.5 v. 5.9 mm Hg). However, the standard deviations for the diastolic readings were roughly equal for these two groups (approximately 5.5 mm Hg). PMID:3756693
Monitoring the Random Errors of Nuclear Material Measurements
1980-06-01
Monitoring and controlling random errors is an important function of a measurement control program. This report describes the principal sources of random error in the common nuclear material measurement processes and the most important elements of a program for monitoring, evaluating and controlling the random error standard deviations of these processes.
Body shape preferences: associations with rater body shape and sociosexuality.
Price, Michael E; Pound, Nicholas; Dunn, James; Hopkins, Sian; Kang, Jinsheng
2013-01-01
There is accumulating evidence of condition-dependent mate choice in many species, that is, individual preferences varying in strength according to the condition of the chooser. In humans, for example, people with more attractive faces/bodies, and who are higher in sociosexuality, exhibit stronger preferences for attractive traits in opposite-sex faces/bodies. However, previous studies have tended to use only relatively simple, isolated measures of rater attractiveness. Here we use 3D body scanning technology to examine associations between strength of rater preferences for attractive traits in opposite-sex bodies, and raters' body shape, self-perceived attractiveness, and sociosexuality. For 118 raters and 80 stimuli models, we used a 3D scanner to extract body measurements associated with attractiveness (male waist-chest ratio [WCR], female waist-hip ratio [WHR], and volume-height index [VHI] in both sexes) and also measured rater self-perceived attractiveness and sociosexuality. As expected, WHR and VHI were important predictors of female body attractiveness, while WCR and VHI were important predictors of male body attractiveness. Results indicated that male rater sociosexuality scores were positively associated with strength of preference for attractive (low) VHI and attractive (low) WHR in female bodies. Moreover, male rater self-perceived attractiveness was positively associated with strength of preference for low VHI in female bodies. The only evidence of condition-dependent preferences in females was a positive association between attractive VHI in female raters and preferences for attractive (low) WCR in male bodies. No other significant associations were observed in either sex between aspects of rater body shape and strength of preferences for attractive opposite-sex body traits. These results suggest that among male raters, rater self-perceived attractiveness and sociosexuality are important predictors of preference strength for attractive opposite
Measuring errors and adverse events in health care.
Thomas, Eric J; Petersen, Laura A
2003-01-01
In this paper, we identify 8 methods used to measure errors and adverse events in health care and discuss their strengths and weaknesses. We focus on the reliability and validity of each, as well as the ability to detect latent errors (or system errors) versus active errors and adverse events. We propose a general framework to help health care providers, researchers, and administrators choose the most appropriate methods to meet their patient safety measurement goals.
Rapid mapping of volumetric machine errors using distance measurements
Krulewich, D.A.
1998-04-01
This paper describes a relatively inexpensive, fast, and easy-to-execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between volumetric error and the current state of the machine; (2) acquiring error data based on distance measurements throughout the work volume; and (3) fitting the error model using the nonlinear equation for the distance. The error model is formulated from the kinematic relationship among the six degrees of freedom of error on each moving axis. Each parametric error is expressed as a function of position, and the terms are combined to predict the error between the functional point and the workpiece, also as a function of position. A series of distances between several fixed base locations and various functional points in the work volume is measured using a Laser Ball Bar (LBB). Each measured distance is a nonlinear function of the commanded location of the machine, the machine error, and the base locations. Using the error model, the nonlinear equation is solved, producing a fit for the error model. Note that, given approximate distances between each pair of base locations, the exact base locations in the machine coordinate system are determined during the nonlinear fitting procedure. Furthermore, with the use of more than three base locations, bias error in the measuring instrument can be removed. The volumetric errors of a three-axis commercial machining center have been mapped using this procedure. In this study, only errors associated with the nominal position of the machine were considered. Other errors, such as thermally induced and load-induced errors, were not considered, although the mathematical model has the ability to account for them. Due to the proprietary nature of the projects we are
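The nonlinear distance-fitting step can be illustrated with a toy version of the same idea: recovering an unknown location from measured distances by iterating a linearized least-squares solve (Gauss-Newton). This is a simplified 2-D sketch with invented coordinates, not the LBB error model itself; in the paper the unknowns are the error-model parameters and the base locations jointly.

```python
import numpy as np

def fit_base(points, dists, x0, iters=25):
    """Gauss-Newton fit of an unknown 2-D base location from measured
    base-to-point distances (the nonlinear distance-fitting step)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - points                    # (n, 2) vectors to each point
        r = np.linalg.norm(diff, axis=1)     # predicted distances
        J = diff / r[:, None]                # Jacobian of distance w.r.t. x
        res = r - dists                      # distance residuals
        x = x - np.linalg.solve(J.T @ J, J.T @ res)
    return x

pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
base_true = np.array([3.0, 4.0])
d = np.linalg.norm(base_true - pts, axis=1)  # noise-free "measurements"
base_est = fit_base(pts, d, x0=[5.0, 5.0])
print(base_est)                              # converges toward [3, 4]
```

With more measurements than unknowns, the same machinery absorbs redundancy; that overdetermination is what lets the full method also estimate instrument bias when more than three base locations are used.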
Oh, Jung Sook; Chae, Moungae; Jung, Jae Yeon; Bae, Sung Suk
2007-01-01
We tried to develop itemized evaluation criteria and a clinical rater qualification system through rating training for inter-rater consistency among experienced clinical dental hygienists and dental hygiene clinical educators. A total of 15 clinical dental hygienists with 1-year careers participated as clinical examination candidates, while 5 dental hygienists with at least 3 years of education and clinical experience participated as clinical raters. They all took the clinical examination as examinees. The results were compared, and the consistency of competence was measured. The comparison of clinical competence between candidates and clinical raters showed that the candidate group's mean clinical competence ranged from 2.96 to 3.55 on a 5-point scale across a total of 3 instruments (probe, explorer, curet), while the clinical rater group's mean clinical competence ranged from 4.05 to 4.29. Inter-rater consistency was higher after rater education in the following 4 items: probe, explorer, curet, and insertion on the distal surface. The mean score distribution of the clinical raters ranged from 75% to 100%, which was more uniform in the competence to detect an artificial calculus than that of the candidates (25% to 100%). These results indicate the need to operate a clinical rater qualification system for comprehensive dental hygiene clinicians. Furthermore, in order to implement the clinical rater qualification system, it will be necessary to continue conducting studies on educational content, time, frequency, and educator level. PMID:19224006
MEASURING LOCAL GRADIENT AND SKEW QUADRUPOLE ERRORS IN RHIC IRS.
CARDONA,J.; PEGGS,S.; PILAT,R.; PTITSYN,V.
2004-07-05
The measurement of local linear errors at RHIC interaction regions using an "action and phase" analysis of difference orbits has already been presented. This paper evaluates the accuracy of this technique using difference orbits that were taken when known gradient errors and skew quadrupole errors were intentionally introduced. It also presents the action and phase analysis of simulated orbits when controlled errors are intentionally placed in a RHIC simulation model.
Filtered kriging for spatial data with heterogeneous measurement error variances.
Christensen, William F
2011-09-01
When predicting values for the measurement-error-free component of an observed spatial process, it is generally assumed that the process has a common measurement error variance. However, it is often the case that each measurement in a spatial data set has a known, site-specific measurement error variance, rendering the observed process nonstationary. We present a simple approach for estimating the semivariogram of the unobservable measurement-error-free process using a bias adjustment of the classical semivariogram formula. We then develop a new kriging predictor that filters the measurement errors. For scenarios where each site's measurement error variance is a function of the process of interest, we recommend an approach that also uses a variance-stabilizing transformation. The properties of the heterogeneous variance measurement-error-filtered kriging (HFK) predictor and variance-stabilized HFK predictor, and the improvement of these approaches over standard measurement-error-filtered kriging are demonstrated using simulation. The approach is illustrated with climate model output from the Hudson Strait area in northern Canada. In the illustration, locations with high or low measurement error variances are appropriately down- or upweighted in the prediction of the underlying process, yielding a realistically smooth picture of the phenomenon of interest. © 2011, The International Biometric Society.
Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?
NASA Technical Reports Server (NTRS)
Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan
2013-01-01
The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and a lack of prediction capability. Therefore, the multiplicative error model is the better choice.
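The multiplicative model's advantage comes from being linear in log space, which cleanly separates the systematic part (a scale and an exponent) from the random part. Below is a hedged simulation with assumed parameter values, not the letter's satellite data; it only demonstrates the separation property.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000
truth = rng.gamma(shape=2.0, scale=5.0, size=n)           # "true" daily totals
alpha, beta, sigma = 0.8, 1.1, 0.3                        # assumed error parameters
measured = alpha * truth**beta * np.exp(rng.normal(0.0, sigma, size=n))

# The multiplicative model is linear in log space:
#   log(measured) = log(alpha) + beta*log(truth) + eps,  eps ~ N(0, sigma^2),
# so the systematic (alpha, beta) and random (sigma) parts separate cleanly.
beta_hat, log_alpha_hat = np.polyfit(np.log(truth), np.log(measured), 1)
sigma_hat = np.std(np.log(measured) - (log_alpha_hat + beta_hat * np.log(truth)))
print(np.exp(log_alpha_hat), beta_hat, sigma_hat)   # ~0.8, ~1.1, ~0.3
```

An additive fit to the same data would fold part of the systematic distortion into its residual, producing the non-constant variance the letter identifies as a weakness of the additive model.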
Reverse attenuation in interaction terms due to covariate measurement error.
Muff, Stefanie; Keller, Lukas F
2015-11-01
Covariate measurement error may bias the estimated regression coefficients in generalized linear models. The influence of measurement error on interaction parameters has, however, only rarely been investigated in depth, and where it has, attenuation effects were reported. In this paper, we show that reverse attenuation of interaction effects may also emerge, namely when heteroscedastic measurement error or sampling variances of a mismeasured covariate are present, which are not unrealistic scenarios in practice. Theoretical findings are illustrated with simulations. A Bayesian approach employing integrated nested Laplace approximations is suggested to model the heteroscedastic measurement error and covariate variances, and an application shows that the method is able to recover approximately correct parameter estimates.
BAHRAMI, Fariba; NOORIZADEH DEHKORDI, Shohreh; DADGOO, Mehdi
2017-01-01
Objective We aimed to investigate the intra-rater and inter-rater reliability of the 10-meter walk test (10 MWT) in adults with spastic cerebral palsy (CP). Materials & Methods Thirty ambulatory adults with spastic CP participated in the summer of 2014 (19 men, 11 women; mean age 28 ± 7 yr, range 18-46 yr). Individuals were non-randomly selected by convenience sampling from the Ra’ad Rehabilitation Goodwill Complex in Tehran, Iran. They had GMFCS levels below IV (I, II, and III). The retest interval for the inter-rater study was one week. During the tests, participants walked at their maximum speed. Reliability was estimated using intraclass correlation coefficients (ICCs). Results The 10 MWT intra-rater ICC was 0.98 (95% confidence interval (CI) 0.96-0.99) for participants, and >0.89 in GMFCS subgroups (95% CI lower bound >0.67). The 10 MWT inter-rater ICC was 0.998 (95% CI 0.996-0.999), and >0.993 in GMFCS subgroups (95% CI lower bound >0.977). Standard error of measurement (SEM) values for both studies were small (0.02 < SEM < 0.07). Conclusion The excellent intra-rater and inter-rater reliability of the 10 MWT in adults with CP, especially in those with moderate motor impairments (GMFCS level III), indicates that this tool can be used in clinics to assess the results of interventions. PMID:28277557
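The SEM and MDC values that studies like this report typically follow the standard formulas SEM = SD·sqrt(1 − ICC) and MDC95 = 1.96·sqrt(2)·SEM. A small sketch with illustrative numbers (assumed, not the study's data):

```python
import math

def sem(sd, icc):
    """Standard error of measurement from the sample SD and reliability (ICC)."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sem_value):
    """Minimal detectable change at 95% confidence, based on two measurements
    (hence the extra sqrt(2) on the SEM)."""
    return 1.96 * math.sqrt(2.0) * sem_value

# Illustrative inputs (assumed): gait speed SD 0.25 m/s, intra-rater ICC 0.98.
s = sem(0.25, 0.98)
print(round(s, 4), round(mdc95(s), 4))   # 0.0354 and 0.098
```

In words: even with an excellent ICC of 0.98, a change smaller than about 0.1 m/s in this toy example could not be distinguished from measurement noise, which is why such papers report MDC alongside the ICC.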
Error analysis for a laser differential confocal radius measurement system.
Wang, Xu; Qiu, Lirong; Zhao, Weiqian; Xiao, Yang; Wang, Zhongyu
2015-02-10
In order to further improve the measurement accuracy of the laser differential confocal radius measurement system (DCRMS) developed previously, a DCRMS error compensation model is established for the main error sources: laser source offset, test sphere position adjustment offset, test sphere figure, and motion error. The model is based on an analysis of the influence of these errors on the measurement accuracy of the radius of curvature. Theoretical analyses and experiments indicate that the expanded uncertainty of the DCRMS is reduced to U = 0.13 μm + 0.9 ppm·R (k = 2) through the error compensation model. The error analysis and compensation model established in this study can provide a theoretical foundation for improving the measurement accuracy of the DCRMS.
Assessment of relative error sources in IR DIAL measurement accuracy
NASA Technical Reports Server (NTRS)
Menyuk, N.; Killinger, D. K.
1983-01-01
An assessment is made of the role the various error sources play in limiting the accuracy of infrared differential absorption lidar measurements used for the remote sensing of atmospheric species. An overview is presented of the relative contribution of each error source including the inadequate knowledge of the absorption coefficient, differential spectral reflectance, and background interference as well as measurement errors arising from signal fluctuations.
Thinking Scientifically: Understanding Measurement and Errors
ERIC Educational Resources Information Center
Alagumalai, Sivakumar
2015-01-01
Thinking scientifically consists of systematic observation, experiment, measurement, and the testing and modification of research questions. In effect, science is about measurement and the understanding of causation. Measurement is an integral part of science and engineering, and has pertinent implications for the human sciences. No measurement is…
Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint
Stynes, J. K.; Ihas, B.
2012-04-01
The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work derived the reflector slope errors from the reflected image of the absorber together with an independent measurement of the absorber location, so the accuracy of the slope-error measurement depended on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept-factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.
Measurement of six degree motion error of linear stage
NASA Astrophysics Data System (ADS)
Furutani, Ryoshu
2017-06-01
In this paper, we propose a measurement system for the six degrees of freedom of motion error of a linear stage, based on distance measurement by laser interferometer. The system has six parallel laser beams and six corner-cube mirrors on the linear stage, which reflect the corresponding laser beams. The axial-direction error is measured with the ordinary laser-interferometer distance measurement method. The errors vertical to the axial direction and the roll error are measured by beams tilted using wedge prisms. The pitch and yaw errors are measured from the difference between the distances of two corner-cube mirrors. The experimental layout of the corner-cube mirrors and the other optical devices is shown. As a result, resolutions of 66.5 nm for the axial-direction error, 383 nm for the errors vertical to the axial direction, 1.23 arcsec for the roll error, and 0.429 arcsec for the pitch and yaw errors are obtained with this system.
Deconvolution Estimation in Measurement Error Models: The R Package decon
Wang, Xiao-Feng; Wang, Bin
2011-01-01
Data from many scientific areas often come with measurement error. Density or distribution function estimation from contaminated data and nonparametric regression with errors-in-variables are two important topics in measurement error models. In this paper, we present a new software package decon for R, which contains a collection of functions that use the deconvolution kernel methods to deal with the measurement error problems. The functions allow the errors to be either homoscedastic or heteroscedastic. To make the deconvolution estimators computationally more efficient in R, we adapt the fast Fourier transform algorithm for density estimation with error-free data to the deconvolution kernel estimation. We discuss the practical selection of the smoothing parameter in deconvolution methods and illustrate the use of the package through both simulated and real examples. PMID:21614139
Intraobserver error associated with measurements of the hand.
Weinberg, Seth M; Scott, Nicole M; Neiswanger, Katherine; Marazita, Mary L
2005-01-01
Measurements of the hand are common in studies that use anthropometric data. However, despite widespread usage, relatively few studies have formally assessed the degree of measurement error associated with standard measurements of the hand. This is significant because high amounts of measurement error can invalidate statistical results. In this paper, intraobserver precision estimates for measures of total hand length and total 3rd-digit length were evaluated from repeated measures on 90 subjects (180 separate hands and fingers). From this replicate data, three precision estimates were calculated: the technical error of measurement (TEM), the relative technical error of measurement (rTEM), and the coefficient of reliability (R). For both measurements, all three estimates yielded a very high degree of precision (TEM < 2 mm, rTEM < 1%, and R > or = 0.95). These results suggest that both total hand length and 3rd-digit length are sufficiently precise for anthropometric research applications.
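The three precision estimates named in this abstract (TEM, rTEM, and the reliability coefficient R) have standard closed forms. The sketch below computes them for hypothetical duplicate hand-length data; the measurement values are invented for illustration.

```python
import math
import statistics

# Hypothetical duplicate measurements (mm) of hand length on 8 subjects
first  = [182.0, 175.5, 190.2, 168.0, 177.4, 185.1, 171.3, 179.8]
second = [181.4, 176.1, 189.6, 168.8, 177.0, 184.5, 172.1, 179.2]

n = len(first)
diffs = [a - b for a, b in zip(first, second)]

tem = math.sqrt(sum(d * d for d in diffs) / (2 * n))  # technical error of measurement
grand_mean = statistics.fmean(first + second)
rtem = 100 * tem / grand_mean                         # relative TEM, as a percentage
total_var = statistics.variance(first + second)       # inter-subject variance
r = 1 - tem ** 2 / total_var                          # coefficient of reliability

print(f"TEM={tem:.2f} mm, rTEM={rtem:.2f}%, R={r:.3f}")
```

With repeats this consistent, the three estimates fall inside the precision thresholds the abstract reports (TEM < 2 mm, rTEM < 1%, R ≥ 0.95).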
Pressure Change Measurement Leak Testing Errors
Pryor, Jeff M; Walker, William C
2014-01-01
A pressure change test is a common leak testing method used in construction and Non-Destructive Examination (NDE). The test is known as a fast, simple, and easy-to-apply evaluation method. While this method may be fairly quick to conduct and require simple instrumentation, the engineering behind this type of test is more complex than is apparent on the surface. This paper discusses some of the more common errors made during the application of a pressure change test and gives the test engineer insight into how to correctly compensate for these factors. The principles discussed here apply to ideal gases such as air or other monatomic or diatomic gases; however, these same principles can be applied to polyatomic gases or liquid flow rate, using the same methodology with formulas altered for those types of tests.
The Impact of Covariate Measurement Error on Risk Prediction
Khudyakov, Polyna; Gorfine, Malka; Zucker, David; Spiegelman, Donna
2015-01-01
In the development of risk prediction models, predictors are often measured with error. In this paper, we investigate the impact of covariate measurement error on risk prediction. We compare the prediction performance using a costly variable measured without error, along with error-free covariates, to that of a model based on an inexpensive surrogate along with the error-free covariates. We consider continuous error-prone covariates with homoscedastic and heteroscedastic errors, and also a discrete misclassified covariate. Prediction performance is evaluated by the area under the receiver operating characteristic curve (AUC), the Brier score (BS), and the ratio of the observed to the expected number of events (calibration). In an extensive numerical study, we show that (i) the prediction model with the error-prone covariate is very well calibrated, even when it is mis-specified; (ii) using the error-prone covariate instead of the true covariate can reduce the AUC and increase the BS dramatically; (iii) adding an auxiliary variable, which is correlated with the error-prone covariate but conditionally independent of the outcome given all covariates in the true model, can improve the AUC and BS substantially. We conclude that reducing measurement error in covariates will improve the ensuing risk prediction, unless the association between the error-free and error-prone covariates is very high. Finally, we demonstrate how a validation study can be used to assess the effect of mismeasured covariates on risk prediction. These concepts are illustrated in a breast cancer risk prediction model developed in the Nurses’ Health Study. PMID:25865315
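The drop in discrimination the authors report when a surrogate replaces the true covariate can be reproduced in miniature. The logistic model, effect size, and error variance below are illustrative assumptions, and the AUC is computed with the rank (Mann-Whitney) formula rather than any particular library.

```python
import math
import random

random.seed(3)
n, beta, su = 4000, 1.5, 1.0

x = [random.gauss(0, 1) for _ in range(n)]  # true covariate
w = [xi + random.gauss(0, su) for xi in x]  # inexpensive error-prone surrogate
y = [1 if random.random() < 1 / (1 + math.exp(-beta * xi)) else 0 for xi in x]

def auc(score, label):
    """Rank-based (Mann-Whitney) AUC; ties receive average ranks."""
    order = sorted(range(len(score)), key=lambda i: score[i])
    ranks = [0.0] * len(score)
    j = 0
    while j < len(order):
        k = j
        while k < len(order) and score[order[k]] == score[order[j]]:
            k += 1
        avg = (j + 1 + k) / 2  # average of 1-based ranks j+1 .. k
        for idx in range(j, k):
            ranks[order[idx]] = avg
        j = k
    n1 = sum(label)
    n0 = len(label) - n1
    rank_sum_pos = sum(r for r, lab in zip(ranks, label) if lab == 1)
    return (rank_sum_pos - n1 * (n1 + 1) / 2) / (n1 * n0)

a_x = auc(x, y)  # discrimination with the error-free covariate
a_w = auc(w, y)  # discrimination with the error-prone surrogate
print(round(a_x, 3), round(a_w, 3))
```

The AUC computed from the surrogate is noticeably lower than the AUC from the true covariate, matching the qualitative finding (ii) of the abstract.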
Measures of Linguistic Accuracy in Second Language Writing Research.
ERIC Educational Resources Information Center
Polio, Charlene G.
1997-01-01
Investigates the reliability of measures of linguistic accuracy in second language writing. The study uses a holistic scale, error-free T-units, and an error classification system on the essays of English-as-a-Second-Language students and discusses why disagreements arise within a rater and between raters. (24 references) (Author/CK)
Temperature error in radiation thermometry caused by emissivity and reflectance measurement error.
Corwin, R R; Rodenburghii, A
1994-04-01
A general expression for the temperature error caused by emissivity uncertainty is developed, and it is concluded that lower-wavelength systems provide significantly less temperature error. A technique to measure the normal emissivity is proposed that uses a normally incident light beam and an aperture to collect a portion of the energy reflected from the surface and to measure essentially both the specular component and the biangular reflectance at the edge of the aperture. The theoretical results show that the aperture size need not be substantial to provide reasonably low temperature errors for a broad class of materials and surface reflectance conditions.
Brooks, Michael C; Wise, William R
2005-09-15
During moment-based analyses of partitioning tracer tests, systematic errors in volume and concentration measurements propagate to yield errors in the saturation and volume estimates for nonaqueous phase liquid (NAPL). Derived expressions could be applied to help practitioners bracket their estimates of NAPL saturation and volume obtained from such tests. In practice, many of these effects may be overshadowed by other complications experienced in the field. Errors are propagated for systematic constant (offset) volume, proportional volume, and constant (offset) concentration errors. Previous efforts to quantify the impact of these errors were predicated upon the specific assumption that nonpartitioning and partitioning masses were equal. The current work relaxes that assumption and is therefore more general in scope. Through the use of nondimensional concentration, systematic proportional concentration errors do not affect the accuracy of the method. Specific consideration needs to be given to accurate flow measurements and minimizing baseline concentration errors when performing partitioning tracer tests in order to prevent the propagation of systematic errors.
A Comparison of Assessment Methods and Raters in Product Creativity
ERIC Educational Resources Information Center
Lu, Chia-Chen; Luh, Ding-Bang
2012-01-01
Although previous studies have attempted to use different experiences of raters to rate product creativity by adopting the Consensus Assessment Method (CAT) approach, the validity of replacing CAT with another measurement tool has not been adequately tested. This study aimed to compare raters with different levels of experience (expert vs.…
Twins and the Study of Rater (Dis)agreement
ERIC Educational Resources Information Center
Bartels, Meike; Boomsma, Dorret I.; Hudziak, James J.; van Beijsterveldt, Toos C. E. M.; van den Oord, Edwin J. C. G.
2007-01-01
Genetically informative data can be used to address fundamental questions concerning the measurement of behavior in children. The authors illustrate this with longitudinal multiple-rater data on internalizing problems in twins. Valid information on the behavior of a child is obtained for behavior that multiple raters agree upon and for…
Unit of Measurement Used and Parent Medication Dosing Errors
Dreyer, Benard P.; Ugboaja, Donna C.; Sanchez, Dayana C.; Paul, Ian M.; Moreira, Hannah A.; Rodriguez, Luis; Mendelsohn, Alan L.
2014-01-01
BACKGROUND AND OBJECTIVES: Adopting the milliliter as the preferred unit of measurement has been suggested as a strategy to improve the clarity of medication instructions; teaspoon and tablespoon units may inadvertently endorse nonstandard kitchen spoon use. We examined the association between unit used and parent medication errors and whether nonstandard instruments mediate this relationship. METHODS: Cross-sectional analysis of baseline data from a larger study of provider communication and medication errors. English- or Spanish-speaking parents (n = 287) whose children were prescribed liquid medications in 2 emergency departments were enrolled. Medication error defined as: error in knowledge of prescribed dose, error in observed dose measurement (compared to intended or prescribed dose); >20% deviation threshold for error. Multiple logistic regression performed adjusting for parent age, language, country, race/ethnicity, socioeconomic status, education, health literacy (Short Test of Functional Health Literacy in Adults); child age, chronic disease; site. RESULTS: Medication errors were common: 39.4% of parents made an error in measurement of the intended dose, 41.1% made an error in the prescribed dose. Furthermore, 16.7% used a nonstandard instrument. Compared with parents who used milliliter-only, parents who used teaspoon or tablespoon units had twice the odds of making an error with the intended (42.5% vs 27.6%, P = .02; adjusted odds ratio=2.3; 95% confidence interval, 1.2–4.4) and prescribed (45.1% vs 31.4%, P = .04; adjusted odds ratio=1.9; 95% confidence interval, 1.03–3.5) dose; associations greater for parents with low health literacy and non–English speakers. Nonstandard instrument use partially mediated teaspoon and tablespoon–associated measurement errors. CONCLUSIONS: Findings support a milliliter-only standard to reduce medication errors. PMID:25022742
Unit of measurement used and parent medication dosing errors.
Yin, H Shonna; Dreyer, Benard P; Ugboaja, Donna C; Sanchez, Dayana C; Paul, Ian M; Moreira, Hannah A; Rodriguez, Luis; Mendelsohn, Alan L
2014-08-01
Adopting the milliliter as the preferred unit of measurement has been suggested as a strategy to improve the clarity of medication instructions; teaspoon and tablespoon units may inadvertently endorse nonstandard kitchen spoon use. We examined the association between unit used and parent medication errors and whether nonstandard instruments mediate this relationship. Cross-sectional analysis of baseline data from a larger study of provider communication and medication errors. English- or Spanish-speaking parents (n = 287) whose children were prescribed liquid medications in 2 emergency departments were enrolled. Medication error defined as: error in knowledge of prescribed dose, error in observed dose measurement (compared to intended or prescribed dose); >20% deviation threshold for error. Multiple logistic regression performed adjusting for parent age, language, country, race/ethnicity, socioeconomic status, education, health literacy (Short Test of Functional Health Literacy in Adults); child age, chronic disease; site. Medication errors were common: 39.4% of parents made an error in measurement of the intended dose, 41.1% made an error in the prescribed dose. Furthermore, 16.7% used a nonstandard instrument. Compared with parents who used milliliter-only, parents who used teaspoon or tablespoon units had twice the odds of making an error with the intended (42.5% vs 27.6%, P = .02; adjusted odds ratio=2.3; 95% confidence interval, 1.2-4.4) and prescribed (45.1% vs 31.4%, P = .04; adjusted odds ratio=1.9; 95% confidence interval, 1.03-3.5) dose; associations greater for parents with low health literacy and non-English speakers. Nonstandard instrument use partially mediated teaspoon and tablespoon-associated measurement errors. Findings support a milliliter-only standard to reduce medication errors. Copyright © 2014 by the American Academy of Pediatrics.
System Measures Errors Between Time-Code Signals
NASA Technical Reports Server (NTRS)
Cree, David; Venkatesh, C. N.
1993-01-01
System measures timing errors between signals produced by three asynchronous time-code generators. Errors between 1-second clock pulses resolved to 2 microseconds. Basic principle of computation of timing errors as follows: central processing unit in microcontroller constantly monitors time data received from time-code generators for changes in 1-second time-code intervals. In response to any such change, microprocessor buffers count of 16-bit internal timer.
Cui, Cunxing; Feng, Qibo; Zhang, Bin
2015-04-10
The straightness measurement systematic errors induced by error crosstalk, fabrication and installation deviations of optical elements, measurement sensitivity variation, and the Abbe error in a six-degree-of-freedom simultaneous measurement system are analyzed in detail in this paper. Models for compensating these systematic errors were established and verified through a series of comparison experiments with the Automated Precision Inc. (API) 5D measurement system. The experimental results showed that the maximum deviation in straightness error measurement could be reduced from 6.4 to 0.9 μm in the x-direction, and from 8.8 to 0.8 μm in the y-direction, after compensation.
ERIC Educational Resources Information Center
Srsen, Katja Groleger; Vidmar, Gaj; Pikl, Masa; Vrecar, Irena; Burja, Cirila; Krusec, Klavdija
2012-01-01
The Halliwick concept is widely used in different settings to promote joyful movement in water and swimming. To assess the swimming skills and progression of an individual swimmer, a valid and reliable measure should be used. The Halliwick-concept-based Swimming with Independent Measure (SWIM) was introduced for this purpose. We aimed to determine…
Kreiter, Clarence D.; Wilson, Adam B.; Humbert, Aloysius J.; Wade, Patricia A.
2016-01-01
Background When ratings of student performance within the clerkship consist of a variable number of ratings per clinical teacher (rater), an important measurement question arises regarding how to combine such ratings to accurately summarize performance. As previous G studies have not estimated the independent influence of occasion and rater facets in observational ratings within the clinic, this study was designed to provide estimates of these two sources of error. Method During 2 years of an emergency medicine clerkship at a large midwestern university, 592 students were evaluated an average of 15.9 times. Ratings were performed at the end of clinical shifts, and students often received multiple ratings from the same rater. A completely nested G study model (occasion: rater: person) was used to analyze sampled rating data. Results The variance component (VC) related to occasion was small relative to the VC associated with rater. The D study clearly demonstrates that having a preceptor rate a student on multiple occasions does not substantially enhance the reliability of a clerkship performance summary score. Conclusions Although further research is needed, it is clear that case-specific factors do not explain the low correlation between ratings and that having one or two raters repeatedly rate a student on different occasions/cases is unlikely to yield a reliable mean score. This research suggests that it may be more efficient to have a preceptor rate a student just once. However, when multiple ratings from a single preceptor are available for a student, it is recommended that a mean of the preceptor's ratings be used to calculate the student's overall mean performance score. PMID:26925540
Incorporating measurement error in n = 1 psychological autoregressive modeling
Schuurman, Noémi K.; Houtveen, Jan H.; Hamaker, Ellen L.
2015-01-01
Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30–50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters. PMID:26283988
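The bias described here is easy to reproduce: the sketch below simulates an AR(1) process, adds white measurement noise, and compares the lag-1 autocorrelation estimates. The autoregressive parameter, noise variances, and series length are illustrative, not values from the study.

```python
import random
import statistics

random.seed(11)
phi, n = 0.7, 20000

# Latent AR(1) process with unit innovation variance, plus burn-in
x, prev = [], 0.0
for _ in range(n + 500):
    prev = phi * prev + random.gauss(0, 1)
    x.append(prev)
x = x[500:]
y = [xi + random.gauss(0, 1.0) for xi in x]  # observed series with white measurement noise

def lag1_autocorr(series):
    m = statistics.fmean(series)
    num = sum((a - m) * (b - m) for a, b in zip(series[:-1], series[1:]))
    den = sum((a - m) ** 2 for a in series)
    return num / den

r_x = lag1_autocorr(x)  # close to phi
r_y = lag1_autocorr(y)  # shrunk toward zero by the measurement noise
print(round(r_x, 3), round(r_y, 3))
```

The naive estimate from the noisy series is biased toward zero, which is the underestimation of autoregressive parameters the abstract warns about when measurement error is disregarded.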
Use of units of measurement error in anthropometric comparisons.
Lucas, Teghan; Henneberg, Maciej
2017-09-01
Anthropometrists attempt to minimise measurement errors; however, errors cannot be eliminated entirely, and currently they are simply reported. Measurement errors should instead be incorporated into analyses of anthropometric data. This study proposes a method that incorporates measurement errors into reported values, replacing metric units with 'units of technical error of measurement (TEM)', and applies it to forensics, industrial anthropometry and biological variation. The USA armed forces anthropometric survey (ANSUR) contains 132 anthropometric dimensions of 3982 individuals. Concepts of duplication and Euclidean distance calculations were applied to the forensic-style identification of individuals in this survey. The National Size and Shape Survey of Australia contains 65 anthropometric measurements of 1265 women. This sample was used to show how a woman's body measurements expressed in TEM could be 'matched' to standard clothing sizes. Euclidean distances show that two sets of repeated anthropometric measurements of the same person cannot be matched (distance > 0) when measurements are expressed in millimetres, but can be matched (distance = 0) in units of TEM. Only 81 women could be fitted to any standard clothing size when matched using centimetres; with units of TEM, 1944 women fit. The proposed method can be applied to all fields that use anthropometry. Units of TEM are considered a more reliable unit of measurement for comparisons.
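A minimal sketch of the matching idea, with invented numbers: two visits' measurements of the same person differ when expressed in millimetres, but agree once each value is expressed in rounded units of TEM. The TEM value and all measurements below are hypothetical, and rounding to integer TEM units is one plausible reading of the method, not the paper's exact procedure.

```python
import math

tem = 2.0  # hypothetical technical error of measurement, mm

# Two repeated measurement sessions on the same person (mm, invented values)
visit1 = [180.4, 70.6, 188.4]
visit2 = [180.9, 70.2, 188.9]

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def in_tem_units(values):
    # Express each dimension as a whole number of TEM units
    return [round(v / tem) for v in values]

print(euclid(visit1, visit2))                              # > 0 in millimetres: no exact match
print(euclid(in_tem_units(visit1), in_tem_units(visit2)))  # 0 in TEM units: the repeats match
```

Because the between-visit differences are smaller than the measurement error itself, they vanish once the data are coarsened to the TEM scale, which is the mechanism behind the matching results in the abstract.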
Conditional Standard Errors of Measurement for Composite Scores Using IRT
ERIC Educational Resources Information Center
Kolen, Michael J.; Wang, Tianyou; Lee, Won-Chan
2012-01-01
Composite scores are often formed from test scores on educational achievement test batteries to provide a single index of achievement over two or more content areas or two or more item types on that test. Composite scores are subject to measurement error, and as with scores on individual tests, the amount of error variability typically depends on…
Laser Doppler anemometer measurements using nonorthogonal velocity components: error estimates.
Orloff, K L; Snyder, P K
1982-01-15
Laser Doppler anemometers (LDAs) that are arranged to measure nonorthogonal velocity components (from which orthogonal components are computed through transformation equations) are more susceptible to calibration and sampling errors than are systems with uncoupled channels. In this paper uncertainty methods and estimation theory are used to evaluate, respectively, the systematic and statistical errors that are present when such devices are applied to the measurement of mean velocities in turbulent flows. Statistical errors are estimated for two-channel LDA data that are either correlated or uncorrelated. For uncorrelated data the directional uncertainty of the measured velocity vector is considered for applications where mean streamline patterns are desired.
Hypothesis testing in an errors-in-variables model with heteroscedastic measurement errors.
de Castro, Mário; Galea, Manuel; Bolfarine, Heleno
2008-11-10
In many epidemiological studies it is common to resort to regression models relating incidence of a disease and its risk factors. The main goal of this paper is to consider inference on such models with error-prone observations and variances of the measurement errors changing across observations. We suppose that the observations follow a bivariate normal distribution and the measurement errors are normally distributed. Aggregate data allow the estimation of the error variances. Maximum likelihood estimates are computed numerically via the EM algorithm. Consistent estimation of the asymptotic variance of the maximum likelihood estimators is also discussed. Test statistics are proposed for testing hypotheses of interest. Further, we implement a simple graphical device that enables an assessment of the model's goodness of fit. Results of simulations concerning the properties of the test statistics are reported. The approach is illustrated with data from the WHO MONICA Project on cardiovascular disease.
Non-Gaussian Error Distributions of LMC Distance Moduli Measurements
NASA Astrophysics Data System (ADS)
Crandall, Sara; Ratra, Bharat
2015-12-01
We construct error distributions for a compilation of 232 Large Magellanic Cloud (LMC) distance moduli values from de Grijs et al. that give an LMC distance modulus of (m - M)0 = 18.49 ± 0.13 mag (median and 1σ symmetrized error). Central estimates found from weighted mean and median statistics are used to construct the error distributions. The weighted mean error distribution is non-Gaussian, flatter and broader than Gaussian, with more (less) probability in the tails (center) than is predicted by a Gaussian distribution; this could be the consequence of unaccounted-for systematic uncertainties. The median statistics error distribution, which does not make use of the individual measurement errors, is also non-Gaussian, more peaked than Gaussian, with less (more) probability in the tails (center) than is predicted by a Gaussian distribution; this could be the consequence of publication bias and/or the non-independence of the measurements. We also construct the error distributions of 247 SMC distance moduli values from de Grijs & Bono. We find a central estimate of (m - M)0 = 18.94 ± 0.14 mag (median and 1σ symmetrized error), and similar probabilities for the error distributions.
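The two central estimates used in this abstract have simple definitions; this sketch computes an inverse-variance weighted mean with its formal standard error, a median-statistics central value, and the standardized residuals whose distribution the authors test for Gaussianity. The distance moduli and errors below are a small invented set, not the de Grijs et al. compilation.

```python
import math
import statistics

# Hypothetical distance-modulus measurements and their 1-sigma errors (mag)
mu  = [18.45, 18.52, 18.38, 18.60, 18.49, 18.55, 18.41]
sig = [0.10, 0.08, 0.15, 0.20, 0.05, 0.12, 0.09]

w = [1 / s ** 2 for s in sig]
wmean = sum(wi * m for wi, m in zip(w, mu)) / sum(w)  # inverse-variance weighted mean
wmean_err = 1 / math.sqrt(sum(w))                     # its formal standard error

med = statistics.median(mu)  # median statistics make no use of the individual errors

# Standardized residuals; their empirical distribution is what gets
# compared against a Gaussian in this kind of analysis
n_sigma = [(m - wmean) / s for m, s in zip(mu, sig)]

print(f"weighted mean = {wmean:.3f} ± {wmean_err:.3f} mag, median = {med:.2f} mag")
```

Heavier-than-Gaussian tails in the `n_sigma` values would signal unaccounted-for systematics, and a too-peaked distribution would suggest correlated or selection-biased measurements, as the abstract describes.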
Backward-gazing method for measuring solar concentrators shape errors.
Coquand, Mathieu; Henault, François; Caliot, Cyril
2017-03-01
This paper describes a backward-gazing method for measuring the optomechanical errors of solar concentrating surfaces. It makes use of four cameras placed near the solar receiver that simultaneously record images of the sun reflected by the optical surfaces. Simple data processing then allows reconstruction of the slope and shape errors of the surfaces. The originality of the method lies in the use of generalized quad-cell formulas and approximate mathematical relations between the slope errors of the mirrors and their reflected wavefront in the case of sun-tracking heliostats at high incidence angles. Numerical simulations demonstrate that the measurement accuracy is compliant with standard requirements of solar concentrating optics in the presence of noise or calibration errors. The method is suited to fine characterization of the optical and mechanical errors of heliostats and their facets, or to providing better control for real-time sun tracking.
Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes
ERIC Educational Resources Information Center
Zavorsky, Gerald S.
2010-01-01
Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…
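The repeatability statistic described here (2.77 × within-subject SD, where 2.77 ≈ 1.96 × √2) can be computed directly from duplicate measurements. A minimal sketch; the function names and example data are illustrative, not from the article:

```python
import math

def within_subject_sd(pairs):
    """Within-subject SD from duplicate measurements on each subject:
    sqrt(sum(d_i^2) / (2 n)) for the paired differences d_i."""
    diffs = [a - b for a, b in pairs]
    return math.sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))

def repeatability(pairs):
    """Repeatability coefficient: 2.77 * within-subject SD. About 95% of
    differences between two measurements on the same subject are expected
    to fall below this value."""
    return 2.77 * within_subject_sd(pairs)

# Hypothetical duplicate measurements (two trials per subject)
pairs = [(12.1, 12.4), (10.8, 10.5), (15.0, 15.3), (9.9, 10.2)]
r = repeatability(pairs)
```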
Reducing measurement errors during functional capacity tests in elders.
da Silva, Mariane Eichendorf; Orssatto, Lucas Bet da Rosa; Bezerra, Ewertton de Souza; Silva, Diego Augusto Santos; Moura, Bruno Monteiro de; Diefenthaeler, Fernando; Freitas, Cíntia de la Rocha
2017-08-23
Accuracy is essential to the validity of functional capacity measurements. The aim was to evaluate the measurement error of functional capacity tests for elders and to suggest the use of the technical error of measurement and the credibility coefficient. Twenty elders (65.8 ± 4.5 years) completed six functional capacity tests that were simultaneously filmed and timed by four evaluators using a chronometer. A fifth evaluator timed the tests by analyzing the videos (reference data). The means of most evaluators for most tests differed from the reference (p < 0.05), except for two evaluators in two different tests. The technical error of measurement differed between tests and evaluators. The Bland-Altman test showed differences in the concordance of results between methods. Short-duration tests showed a higher technical error of measurement than longer tests. In summary, tests timed by a chronometer underestimate the real results of functional capacity. Differences in the evaluators' reaction time and in their perception of the start and end of each test would explain the measurement errors. Calculating the technical error of measurement or using the camera can increase data validity.
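The two statistics the authors recommend follow standard definitions: the technical error of measurement (TEM) between two evaluators, and a reliability (credibility) coefficient R = 1 − TEM²/SD². A minimal sketch under those definitions; names and data are ours:

```python
import math

def technical_error_of_measurement(x1, x2):
    """Absolute TEM for two evaluators measuring the same subjects:
    sqrt(sum((x1_i - x2_i)^2) / (2 n))."""
    n = len(x1)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x1, x2)) / (2 * n))

def coefficient_of_reliability(x1, x2):
    """Reliability (credibility) coefficient R = 1 - TEM^2 / SD^2,
    where SD^2 is the sample variance of all measurements pooled."""
    tem = technical_error_of_measurement(x1, x2)
    allv = list(x1) + list(x2)
    mean = sum(allv) / len(allv)
    var = sum((v - mean) ** 2 for v in allv) / (len(allv) - 1)
    return 1.0 - tem ** 2 / var

# Hypothetical times (s) recorded by two evaluators for three subjects
eval1 = [10.0, 12.0, 14.0]
eval2 = [10.2, 11.8, 14.1]
tem = technical_error_of_measurement(eval1, eval2)
```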
Temperature measurement error simulation of the pure rotational Raman lidar
NASA Astrophysics Data System (ADS)
Jia, Jingyu; Huang, Yong; Wang, Zhirui; Yi, Fan; Shen, Jianglin; Jia, Xiaoxing; Chen, Huabin; Yang, Chuan; Zhang, Mingyang
2015-11-01
Temperature represents the atmospheric thermodynamic state. Measuring the atmospheric temperature accurately and precisely is very important for understanding the physics of atmospheric processes, and lidar has some advantages for atmospheric temperature measurement. Based on the lidar equation and the theory of pure rotational Raman (PRR) scattering, we simulated the temperature measurement errors of a double-grating-polychromator (DGP) based PRR lidar. First, without considering the attenuation terms of the atmospheric transmittance and the range in the lidar equation, we simulated the temperature measurement errors influenced by the beam splitting system parameters, such as the center wavelength, the receiving bandwidth and the atmospheric temperature, and analyzed three types of temperature measurement error in theory. We propose several design methods for the beam splitting system to reduce the temperature measurement errors. Second, we simulated the temperature measurement error profiles using the lidar equation. Since the lidar power-aperture product is fixed, the main target of our lidar system is to reduce the statistical and leakage errors.
Measuring worst-case errors in a robot workcell
Simon, R.W.; Brost, R.C.; Kholwadwala, D.K.
1997-10-01
Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors.
Identification and Minimization of Errors in Doppler Global Velocimetry Measurements
NASA Technical Reports Server (NTRS)
Meyers, James F.; Lee, Joseph W.
2000-01-01
A systematic laboratory investigation was conducted to identify potential measurement error sources in Doppler Global Velocimetry technology. Once identified, methods were developed to eliminate or at least minimize the effects of these errors. The areas considered included the Iodine vapor cell, optical alignment, scattered light characteristics, noise sources, and the laser. Upon completion the demonstrated measurement uncertainty was reduced to 0.5 m/sec.
Space acceleration measurement system triaxial sensor head error budget
NASA Technical Reports Server (NTRS)
Thomas, John E.; Peters, Rex B.; Finley, Brian D.
1992-01-01
The objective of the Space Acceleration Measurement System (SAMS) is to measure and record the microgravity environment for a given experiment aboard the Space Shuttle. To accomplish this, SAMS uses remote triaxial sensor heads (TSH) that can be mounted directly on or near an experiment. The errors of the TSH are reduced by calibrating it before and after each flight. The associated error budget for the calibration procedure is discussed here.
Measuring Pixel-Position Errors On CCD Imagers
NASA Technical Reports Server (NTRS)
Shaklan, Stuart B.; Shao, Mike; Gursel, Yekta; Yu, Jeffrey
1995-01-01
Deviations of positions of pixels on charge-coupled-device (CCD) image detector from nominal rectangular grid pattern measured by method in which coherent-light interference fringes used as reference pattern. Conceived for use in determining pixel-position errors in astrometric cameras flown aboard spacecraft. Also applied to determination of similar errors in (and calibration of) terrestrial CCD cameras used as position sensors; for example, position-measuring cameras that are parts of robotic systems.
Aerial measurement error with a dot planimeter: Some experimental estimates
NASA Technical Reports Server (NTRS)
Yuill, R. S.
1971-01-01
A shape analysis is presented which utilizes a computer to simulate a multiplicity of dot grids mathematically. Results indicate that accuracy of measurement correlates entirely with the number of dots placed over the area to be measured, the indices of shape being of little significance. Equations and graphs are provided from which the average expected error, and the maximum range of error, for various numbers of dot points can be read.
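The dot-planimeter procedure being simulated works like this: a regular grid of dots is overlaid on the figure and the area is estimated as (dots inside) × (dot spacing)². A toy Monte Carlo sketch under assumed conventions (the shape, spacing, and random offsets are our illustration, not the study's grids):

```python
import random

def dot_grid_area(in_shape, spacing, xmax, ymax, offset=(0.0, 0.0)):
    """Estimate an area by counting grid dots that fall inside a shape;
    each dot represents spacing**2 units of area."""
    ox, oy = offset
    count, x = 0, ox
    while x < xmax:
        y = oy
        while y < ymax:
            if in_shape(x, y):
                count += 1
            y += spacing
        x += spacing
    return count * spacing ** 2

# Unit circle (true area = pi) measured with 20 randomly offset grids
random.seed(1)
circle = lambda x, y: (x - 2.0) ** 2 + (y - 2.0) ** 2 <= 1.0
estimates = [dot_grid_area(circle, 0.05, 4.0, 4.0,
                           (0.05 * random.random(), 0.05 * random.random()))
             for _ in range(20)]
mean_estimate = sum(estimates) / len(estimates)
```

Varying the grid offset and spacing in such a simulation reproduces the study's central finding: error depends on the number of dots covering the figure, not on its shape.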
Langarika-Rocafort, Argia; Emparanza, José Ignacio; Aramendi, José F; Castellano, Julen; Calleja-González, Julio
2017-01-01
To examine the intra-observer reliability and agreement between five methods of measurement of dorsiflexion during the Weight Bearing Dorsiflexion Lunge Test, and to assess the degree of agreement between three methods, in female athletes. Repeated measurements study design. Volleyball club. Twenty-five volleyball players. Dorsiflexion was evaluated using five methods: heel-wall distance, first toe-wall distance, inclinometer at the tibia, inclinometer at the Achilles tendon, and the dorsiflexion angle obtained by a simple trigonometric function. For the statistical analysis, agreement was studied using the Bland-Altman method, the Standard Error of Measurement and the Minimum Detectable Change. Reliability analysis was performed using the Intraclass Correlation Coefficient (ICC). Measurement methods using the inclinometer had more than 6° of measurement error; the angle calculated by the trigonometric function had 3.28° of error. The inclinometer-based methods had ICC values < 0.90, while the distance-based methods and the trigonometric angle measurement had ICC values > 0.90. Concerning the agreement between methods, bias ranged from 1.93° to 14.42°, and random error from 4.24° to 7.96°. To assess the DF angle in the WBLT, the angle calculated by a trigonometric function is the most repeatable method. The methods of measurement cannot be used interchangeably. Copyright © 2016 Elsevier Ltd. All rights reserved.
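The responsiveness statistics used in records like this one follow standard definitions: SEM = SD·√(1 − ICC), and the minimal detectable change at 95% confidence MDC95 = 1.96·√2·SEM. A minimal sketch; the function names and example numbers are ours, not the study's data:

```python
import math

def sem_from_icc(sd, icc):
    """Standard error of measurement: SEM = SD * sqrt(1 - ICC)."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sem):
    """Minimal detectable change at 95% confidence: 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2.0) * sem

# Example: between-subject SD of 2.0 mm and ICC of 0.96
sem = sem_from_icc(2.0, 0.96)   # 0.4 mm
mdc = mdc95(sem)                # about 1.11 mm
```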
Measurement error caused by spatial misalignment in environmental epidemiology.
Gryparis, Alexandros; Paciorek, Christopher J; Zeka, Ariana; Schwartz, Joel; Coull, Brent A
2009-04-01
In many environmental epidemiology studies, the locations and/or times of exposure measurements and health assessments do not match. In such settings, health effects analyses often use the predictions from an exposure model as a covariate in a regression model. Such exposure predictions contain some measurement error as the predicted values do not equal the true exposures. We provide a framework for spatial measurement error modeling, showing that smoothing induces a Berkson-type measurement error with nondiagonal error structure. From this viewpoint, we review the existing approaches to estimation in a linear regression health model, including direct use of the spatial predictions and exposure simulation, and explore some modified approaches, including Bayesian models and out-of-sample regression calibration, motivated by measurement error principles. We then extend this work to the generalized linear model framework for health outcomes. Based on analytical considerations and simulation results, we compare the performance of all these approaches under several spatial models for exposure. Our comparisons underscore several important points. First, exposure simulation can perform very poorly under certain realistic scenarios. Second, the relative performance of the different methods depends on the nature of the underlying exposure surface. Third, traditional measurement error concepts can help to explain the relative practical performance of the different methods. We apply the methods to data on the association between levels of particulate matter and birth weight in the greater Boston area.
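The key distinction this work builds on can be demonstrated with a small simulation: classical measurement error (noise added to the observed exposure) attenuates a regression slope toward zero, while Berkson-type error (true exposure scattered around an assigned value) leaves it unbiased. This is entirely our construction for illustration, not the paper's code:

```python
import random

random.seed(0)

def fit_slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

n, beta = 20000, 2.0

# Classical error: we observe w = x + u; the slope of y on w is attenuated
# by var(x) / (var(x) + var(u)) -- here 0.5, so the fit gives roughly 1.0.
x = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [beta * xi + random.gauss(0.0, 0.5) for xi in x]
w = [xi + random.gauss(0.0, 1.0) for xi in x]
slope_classical = fit_slope(w, y)

# Berkson error: the true exposure scatters around the assigned value z
# (x = z + u); the slope of y on z remains approximately unbiased (~2.0).
z = [random.gauss(0.0, 1.0) for _ in range(n)]
xb = [zi + random.gauss(0.0, 1.0) for zi in z]
yb = [beta * xi + random.gauss(0.0, 0.5) for xi in xb]
slope_berkson = fit_slope(z, yb)
```

The paper's observation that spatial smoothing induces Berkson-type error (with a nondiagonal error structure) explains why naive use of smoothed exposure predictions can behave very differently from the classical-error intuition above.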
Methods to Assess Measurement Error in Questionnaires of Sedentary Behavior
Sampson, Joshua N; Matthews, Charles E; Freedman, Laurence; Carroll, Raymond J.; Kipnis, Victor
2015-01-01
Sedentary behavior has already been associated with mortality, cardiovascular disease, and cancer. Questionnaires are an affordable tool for measuring sedentary behavior in large epidemiological studies. Here, we introduce and evaluate two statistical methods for quantifying measurement error in questionnaires. Accurate estimates are needed for assessing questionnaire quality. The two methods would be applied to validation studies that measure a sedentary behavior by both questionnaire and accelerometer on multiple days. The first method fits a reduced model by assuming the accelerometer is without error, while the second method fits a more complete model that allows both measures to have error. Because accelerometers tend to be highly accurate, we show that ignoring the accelerometer's measurement error can result in more accurate estimates of measurement error in some scenarios. In this manuscript, we derive asymptotic approximations for the mean-squared error of the estimated parameters from both methods, evaluate their dependence on study design and behavior characteristics, and offer an R package so investigators can make an informed choice between the two methods. We demonstrate the difference between the two methods in a recent validation study comparing Previous Day Recalls (PDR) to an accelerometer-based ActivPal. PMID:27340315
INTERVAL SAMPLING METHODS AND MEASUREMENT ERROR: A COMPUTER SIMULATION
Wirth, Oliver; Slaven, James; Taylor, Matthew A.
2015-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method’s inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. PMID:24127380
Interval sampling methods and measurement error: a computer simulation.
Wirth, Oliver; Slaven, James; Taylor, Matthew A
2014-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
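A toy version of such a simulation (our construction; all parameters are illustrative) reproduces the characteristic biases these methods are known for: partial-interval recording (PIR) overestimates the fraction of time occupied by events, while momentary time sampling (MTS) is roughly unbiased:

```python
import random

def simulate_once(rng, obs_len=600.0, interval=10.0, event_dur=3.0, n_events=30):
    """One observation period: random event onsets, scored by momentary
    time sampling (MTS) and partial-interval recording (PIR)."""
    events = [(s, s + event_dur)
              for s in (rng.uniform(0.0, obs_len - event_dur)
                        for _ in range(n_events))]

    def occupied(t):                       # is any event ongoing at instant t?
        return any(s <= t < e for s, e in events)

    def touches(a, b):                     # does any event overlap [a, b)?
        return any(s < b and e > a for s, e in events)

    n_int = int(obs_len / interval)
    mts = sum(occupied((i + 1) * interval) for i in range(n_int)) / n_int
    pir = sum(touches(i * interval, (i + 1) * interval) for i in range(n_int)) / n_int
    step = 0.5                             # fine sampling for the "true" fraction
    n_step = int(obs_len / step)
    true = sum(occupied(k * step) for k in range(n_step)) / n_step
    return true, mts, pir

rng = random.Random(7)
runs = [simulate_once(rng) for _ in range(50)]
true_mean = sum(r[0] for r in runs) / len(runs)
mts_mean = sum(r[1] for r in runs) / len(runs)
pir_mean = sum(r[2] for r in runs) / len(runs)
```

Sweeping interval duration, event duration, and event rate in such a loop is how error tables like those reported in the study are built.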
Error-tradeoff and error-disturbance relations for incompatible quantum measurements
Branciard, Cyril
2013-01-01
Heisenberg’s uncertainty principle is one of the main tenets of quantum theory. Nevertheless, and despite its fundamental importance for our understanding of quantum foundations, there has been some confusion in its interpretation: Although Heisenberg’s first argument was that the measurement of one observable on a quantum state necessarily disturbs another incompatible observable, standard uncertainty relations typically bound the indeterminacy of the outcomes when either one or the other observable is measured. In this paper, we quantify precisely Heisenberg’s intuition. Even if two incompatible observables cannot be measured together, one can still approximate their joint measurement, at the price of introducing some errors with respect to the ideal measurement of each of them. We present a tight relation characterizing the optimal tradeoff between the error on one observable vs. the error on the other. As a particular case, our approach allows us to characterize the disturbance of an observable induced by the approximate measurement of another one; we also derive a stronger error-disturbance relation for this scenario. PMID:23564344
Error-tradeoff and error-disturbance relations for incompatible quantum measurements.
Branciard, Cyril
2013-04-23
Heisenberg's uncertainty principle is one of the main tenets of quantum theory. Nevertheless, and despite its fundamental importance for our understanding of quantum foundations, there has been some confusion in its interpretation: Although Heisenberg's first argument was that the measurement of one observable on a quantum state necessarily disturbs another incompatible observable, standard uncertainty relations typically bound the indeterminacy of the outcomes when either one or the other observable is measured. In this paper, we quantify precisely Heisenberg's intuition. Even if two incompatible observables cannot be measured together, one can still approximate their joint measurement, at the price of introducing some errors with respect to the ideal measurement of each of them. We present a tight relation characterizing the optimal tradeoff between the error on one observable vs. the error on the other. As a particular case, our approach allows us to characterize the disturbance of an observable induced by the approximate measurement of another one; we also derive a stronger error-disturbance relation for this scenario.
Errors Associated with the Direct Measurement of Radionuclides in Wounds
Hickman, D P
2006-03-02
Work in radiation areas can occasionally result in accidental wounds containing radioactive materials. When a wound is incurred within a radiological area, the presence of radioactivity in the wound needs to be confirmed to determine if additional remedial action needs to be taken. Commonly used radiation area monitoring equipment is poorly suited for measurement of radioactive material buried within the tissue of the wound. The Lawrence Livermore National Laboratory (LLNL) In Vivo Measurement Facility has constructed a portable wound counter that provides sufficient detection of radioactivity in wounds as shown in Fig. 1. The LLNL wound measurement system is specifically designed to measure low energy photons that are emitted from uranium and transuranium radionuclides. The portable wound counting system uses a 2.5 cm diameter by 1 mm thick NaI(Tl) detector. The detector is connected to a Canberra NaI InSpector™. The InSpector interfaces with an IBM ThinkPad laptop computer, which operates under Genie 2000 software. The wound counting system is maintained and used at the LLNL In Vivo Measurement Facility. The hardware is designed to be portable and is occasionally deployed to respond to the LLNL Health Services facility or local hospitals for examination of personnel that may have radioactive materials within a wound. The typical detection levels in using the LLNL portable wound counter in a low background area is 0.4 nCi to 0.6 nCi assuming a near zero mass source. This paper documents the systematic errors associated with in vivo measurement of radioactive materials buried within wounds using the LLNL portable wound measurement system. These errors are divided into two basic categories, calibration errors and in vivo wound measurement errors. Within these categories, there are errors associated with particle self-absorption of photons, overlying tissue thickness, source distribution within the wound, and count errors. These errors have been examined and…
Efficient measurement of quantum gate error by interleaved randomized benchmarking.
Magesan, Easwar; Gambetta, Jay M; Johnson, B R; Ryan, Colm A; Chow, Jerry M; Merkel, Seth T; da Silva, Marcus P; Keefe, George A; Rothwell, Mary B; Ohki, Thomas A; Ketchen, Mark B; Steffen, M
2012-08-24
We describe a scalable experimental protocol for estimating the average error of individual quantum computational gates. This protocol consists of interleaving random Clifford gates between the gate of interest and provides an estimate as well as theoretical bounds for the average error of the gate under test, so long as the average noise variation over all Clifford gates is small. This technique takes into account both state preparation and measurement errors and is scalable in the number of qubits. We apply this protocol to a superconducting qubit system and find a bounded average error of 0.003 [0,0.016] for the single-qubit gates X(π/2) and Y(π/2). These bounded values provide better estimates of the average error than those extracted via quantum process tomography.
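In interleaved randomized benchmarking, the reference and interleaved sequences each yield an exponential decay parameter, and the gate error is estimated as r = (d − 1)(1 − p_gate/p_ref)/d with d = 2^n for n qubits. A sketch of that final step (the decay-curve fitting and the theoretical bounds from the paper are omitted; the numbers are illustrative):

```python
def irb_gate_error(p_ref, p_gate, n_qubits=1):
    """Interleaved randomized-benchmarking estimate of a gate's average
    error: r = (d - 1) * (1 - p_gate / p_ref) / d, with d = 2**n_qubits,
    where p_ref and p_gate are the fitted RB decay parameters of the
    reference and interleaved sequences."""
    d = 2 ** n_qubits
    return (d - 1) * (1.0 - p_gate / p_ref) / d

# Illustrative decay parameters for a single-qubit experiment
r = irb_gate_error(p_ref=0.990, p_gate=0.984)   # about 0.003
```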
Filter induced errors in laser anemometer measurements using counter processors
NASA Technical Reports Server (NTRS)
Oberle, L. G.; Seasholtz, R. G.
1985-01-01
Simulations of laser Doppler anemometer (LDA) systems have focused primarily on noise studies or biasing errors. Another possible source of error is the choice of filter types and filter cutoff frequencies. Before it is applied to the counter portion of the signal processor, a Doppler burst is filtered to remove the pedestal and to reduce noise in the frequency bands outside the region in which the signal occurs. Filtering, however, introduces errors into the measurement of the frequency of the input signal which leads to inaccurate results. Errors caused by signal filtering in an LDA counter-processor data acquisition system are evaluated and filters for a specific application which will reduce these errors are chosen.
Detection and Classification of Measurement Errors in Bioimpedance Spectroscopy
Gil-Pita, Roberto
2016-01-01
Bioimpedance spectroscopy (BIS) measurement errors may be caused by parasitic stray capacitance, impedance mismatch, cross-talk, or, very likely, a combination of these. Accurate detection and identification are of extreme importance for further analysis, because in some cases and for some applications certain measurement artifacts can be corrected, minimized or even avoided. In this paper we present a robust method to detect the presence of measurement artifacts and identify what kind of measurement error is present in BIS measurements. The method is based on supervised machine learning and uses a novel set of generalist features for measurement characterization in different immittance planes. Experimental validation has been carried out using a database of complex-spectra BIS measurements obtained from different BIS applications and containing six different types of errors, as well as error-free measurements. The method obtained a low classification error (0.33%) and has shown good generalization. Since both the features and the classification schema are relatively simple, the implementation of this pre-processing task in the current hardware of bioimpedance spectrometers is possible. PMID:27362862
Cheiloscopy: Lip Print Inter-rater Reliability.
Furnari, Winnie; Janal, Malvin N
2017-05-01
Lip print analysis, or cheiloscopy, has the potential to join fingerprints and retinal scans as an additional method to determine human identification. This preliminary study sought to determine agreement among 20 raters, forensic odontologists, using an often referenced system that categorizes lip prints into six classes related to the dominant pattern of vertical, horizontal, and intersecting lines. Lip prints were taken from 13 individuals, and raters categorized eight distinct regions of each print. In addition to ratings made while viewing the actual prints, the raters repeated the exercise using photographs of the lip prints. Multirater kappa, a chance-corrected measure of agreement, ranged between 0.15 for the actual prints and 0.25 for the photos, indicating only poor to fair levels of inter-rater reliability. While these results fail to support the use of lip prints for human identification, it is possible that more intensive training may yet produce adequate levels of reliability. © 2016 American Academy of Forensic Sciences.
NASA Astrophysics Data System (ADS)
Kowalik, Waldemar W.; Garncarz, Beata E.; Kasprzak, Henryk T.
This work presents the results of computer simulations that define the measurement conditions that must be fulfilled for the measurement results to stay within allowable errors. They specify the allowable measurement errors (in interferogram scanning) and the conditions computer programs should satisfy so that the errors introduced by mathematical operations and by the computer are minimized.
Measurement uncertainty evaluation of conicity error inspected on CMM
NASA Astrophysics Data System (ADS)
Wang, Dongxia; Song, Aiguo; Wen, Xiulan; Xu, Youxiong; Qiao, Guifang
2016-01-01
The cone is widely used in mechanical design for rotation, centering and fixing. Whether the conicity error can be measured and evaluated accurately will directly influence assembly accuracy and working performance. According to the new generation geometrical product specification (GPS), the error and its measurement uncertainty should be evaluated together. The mathematical model of the minimum zone conicity error is established and an improved immune evolutionary algorithm (IIEA) is proposed to search for the conicity error. In the IIEA, initial antibodies are first generated using quasi-random sequences and two kinds of affinities are calculated. Then, antibody clones are generated and self-adaptively mutated so as to maintain diversity. Similar antibodies are suppressed and new random antibodies are generated. Because the mathematical model of conicity error is strongly nonlinear and the input quantities are not independent, it is difficult to use the Guide to the Expression of Uncertainty in Measurement (GUM) method to evaluate measurement uncertainty. An adaptive Monte Carlo method (AMCM) is proposed to estimate measurement uncertainty, in which the number of Monte Carlo trials is selected adaptively and the quality of the numerical results is directly controlled. A cone part was machined on a CK6140 lathe and measured on a Miracle NC 454 coordinate measuring machine (CMM). The experimental results confirm that the proposed method not only searches for the approximate solution of the minimum zone conicity error (MZCE) rapidly and precisely, but also evaluates measurement uncertainty and gives control variables with an expected numerical tolerance. The conicity errors computed by the proposed method are 20%-40% less than those computed by the NC454 CMM software, and the evaluation accuracy improves significantly.
Laser tracker error determination using a network measurement
NASA Astrophysics Data System (ADS)
Hughes, Ben; Forbes, Alistair; Lewis, Andrew; Sun, Wenjuan; Veal, Dan; Nasr, Karim
2011-04-01
We report on a fast, easily implemented method to determine all the geometrical alignment errors of a laser tracker, to high precision. The technique requires no specialist equipment and can be performed in less than an hour. The technique is based on the determination of parameters of a geometric model of the laser tracker, using measurements of a set of fixed target locations, from multiple locations of the tracker. After fitting of the model parameters to the observed data, the model can be used to perform error correction of the raw laser tracker data or to derive correction parameters in the format of the tracker manufacturer's internal error map. In addition to determination of the model parameters, the method also determines the uncertainties and correlations associated with the parameters. We have tested the technique on a commercial laser tracker in the following way. We disabled the tracker's internal error compensation, and used a five-position, fifteen-target network to estimate all the geometric errors of the instrument. Using the error map generated from this network test, the tracker was able to pass a full performance validation test, conducted according to a recognized specification standard (ASME B89.4.19-2006). We conclude that the error correction determined from the network test is as effective as the manufacturer's own error correction methodologies.
Effect of Wavefront Error on 10^-7 Contrast Measurements
Evans, J W; Sommargren, G; Macintosh, B; Severson, S; Dillon, D
2005-10-06
We have measured a contrast of 6.5 × 10^-8 from 10-25 λ/D in visible light on the Extreme Adaptive Optics testbed using a shaped pupil for diffraction suppression. The testbed was designed with a minimal number of high-quality optics to ensure low wavefront error and uses a phase shifting diffraction interferometer for metrology. This level of contrast is within the regime needed for imaging young Jupiter-like planets, a primary application of high-contrast imaging. We have concluded that wavefront error, not pupil quality, is the limiting error source for improved contrast in our system.
Stronger error disturbance relations for incompatible quantum measurements
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Chiranjib; Shukla, Namrata; Pati, Arun Kumar
2016-03-01
We formulate a new error-disturbance relation, which is free from explicit dependence upon variances in observables. This error-disturbance relation shows improvement over the one provided by the Branciard inequality and the Ozawa inequality for some initial states and for a particular class of joint measurements under consideration. We also prove a modified form of Ozawa's error-disturbance relation. The latter relation provides a tighter bound compared to the Ozawa and the Branciard inequalities for a small number of states.
Multiscale measurement error models for aggregated small area health data.
Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin
2016-08-01
Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smoothes the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via a shared random effect have been proposed. One of the main goals in analyzing aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we applied measurement error models in the framework of the multiscale approach. To assess the benefit of using multiscale measurement error models, we compare the performance of multiscale models with and without measurement error in both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates.
Beam induced vacuum measurement error in BEPC II
NASA Astrophysics Data System (ADS)
Huang, Tao; Xiao, Qiong; Peng, XiaoHua; Wang, HaiJing
2011-12-01
When the beam in the BEPCII storage ring aborts suddenly, the measured pressure of cold cathode gauges and ion pumps will drop suddenly and decrease to the base pressure gradually. This shows that there is a beam-induced positive error in the pressure measurement during beam operation. The error is the difference between the measured and real pressures. Right after the beam aborts, the error disappears immediately and the measured pressure then equals the real pressure. For one gauge, we can fit a non-linear pressure-time curve to its measured pressure data from 20 seconds after a sudden beam abort. From this negative-exponential pumping-down curve, the real pressure at the moment the beam started aborting is extrapolated. With data from several sudden beam aborts we obtained the errors of that gauge at different beam currents and found that the error is directly proportional to the beam current, as expected. A linear fit then gives the proportionality coefficient of the equation, which we derived to evaluate the real pressure at all times when the beam, with varying currents, is on.
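The extrapolation step described above can be sketched as follows, under the simplifying assumption that the base pressure is known from late-time data, so the exponential decay log-linearizes. The model form and all names and values are our illustration, not the BEPCII analysis:

```python
import math

def fit_pumpdown(times, pressures, p_base):
    """Fit p(t) = p_base + A * exp(-t / tau) by log-linear least squares,
    assuming the base pressure p_base is known. Returns (A, tau); the
    extrapolated real pressure at the abort (t = 0) is p_base + A."""
    ys = [math.log(p - p_base) for p in pressures]
    n = len(times)
    mx = sum(times) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    intercept = my - slope * mx
    return math.exp(intercept), -1.0 / slope

# Synthetic noise-free pump-down data: base 5e-8, amplitude 1.5e-7, tau 30 s
p_base, A_true, tau_true = 5e-8, 1.5e-7, 30.0
ts = [20.0 + 5.0 * i for i in range(12)]
ps = [p_base + A_true * math.exp(-t / tau_true) for t in ts]
A, tau = fit_pumpdown(ts, ps, p_base)
p_real_at_abort = p_base + A   # extrapolated pressure when the beam aborted
```

Repeating this fit for aborts at several beam currents, then fitting the (current, error) pairs with a straight line, yields the proportionality coefficient the abstract describes.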
Accounting for measurement error: a critical but often overlooked process.
Harris, Edward F; Smith, Richard N
2009-12-01
Due to instrument imprecision and human inconsistencies, measurements are not free of error. Technical error of measurement (TEM) is the variability encountered between dimensions when the same specimens are measured at multiple sessions. A goal of a data collection regimen is to minimise TEM. The few studies that actually quantify TEM, regardless of discipline, report that it is substantial and can affect results and inferences. This paper reviews some statistical approaches for identifying and controlling TEM. Statistically, TEM is part of the residual ('unexplained') variance in a statistical test, so accounting for TEM, which requires repeated measurements, enhances the chances of finding a statistically significant difference if one exists. The aim of this paper was to review and discuss common statistical designs relating to types of error and statistical approaches to error accountability. This paper addresses issues of landmark location, validity, technical and systematic error, analysis of variance, scaled measures and correlation coefficients in order to guide the reader towards correct identification of true experimental differences. Researchers commonly infer characteristics about populations from comparatively restricted study samples. Most inferences are statistical and, aside from concerns about adequate accounting for known sources of variation with the research design, an important source of variability is measurement error. Variability in locating landmarks that define variables is obvious in odontometrics, cephalometrics and anthropometry, but the same concerns about measurement accuracy and precision extend to all disciplines. With increasing accessibility to computer-assisted methods of data collection, the ease of incorporating repeated measures into statistical designs has improved. Accounting for this technical source of variation increases the chance of finding biologically true differences when they exist.
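The TEM statistic discussed above is commonly computed with Dahlberg's formula for duplicate sessions, TEM = sqrt(Σdᵢ² / 2N), where dᵢ is the between-session difference for specimen i. A minimal sketch follows; the measurement values are hypothetical.

```python
import numpy as np

# Dahlberg's technical error of measurement for two measurement sessions.
# %TEM expresses the error relative to the grand mean of all measurements.
session1 = np.array([10.2, 11.5, 9.8, 12.1, 10.9])   # hypothetical dimensions (mm)
session2 = np.array([10.4, 11.3, 9.9, 12.0, 11.1])   # same specimens, remeasured

d = session1 - session2
tem = np.sqrt(np.sum(d**2) / (2 * len(d)))
rel_tem = 100 * tem / np.mean(np.concatenate([session1, session2]))
print(f"TEM = {tem:.3f} mm, %TEM = {rel_tem:.2f}%")
```

A small %TEM indicates that remeasurement variability is minor relative to the dimensions being studied, which is the goal the abstract describes.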
Measurement Error Calibration in Mixed-Mode Sample Surveys
ERIC Educational Resources Information Center
Buelens, Bart; van den Brakel, Jan A.
2015-01-01
Mixed-mode surveys are known to be susceptible to mode-dependent selection and measurement effects, collectively referred to as mode effects. The use of different data collection modes within the same survey may reduce selectivity of the overall response but is characterized by measurement errors differing across modes. Inference in sample surveys…
Spatial regression with covariate measurement error: A semiparametric approach.
Huque, Md Hamidul; Bondell, Howard D; Carroll, Raymond J; Ryan, Louise M
2016-09-01
Spatial data have become increasingly common in epidemiology and public health research thanks to advances in GIS (Geographic Information Systems) technology. In health research, for example, it is common for epidemiologists to incorporate geographically indexed data into their studies. In practice, however, the spatially defined covariates are often measured with error. Naive estimators of regression coefficients are attenuated if measurement error is ignored. Moreover, the classical measurement error theory is inapplicable in the context of spatial modeling because of the presence of spatial correlation among the observations. We propose a semiparametric regression approach to obtain bias-corrected estimates of regression parameters and derive their large sample properties. We evaluate the performance of the proposed method through simulation studies and illustrate it using data on Ischemic Heart Disease (IHD). Both the simulations and the practical application demonstrate that the proposed method can be effective in practice.
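The attenuation the abstract mentions is easy to demonstrate by simulation. This sketch is not the authors' semiparametric method; it only shows the classical (non-spatial) attenuation effect, where the naive slope shrinks by the factor σx²/(σx² + σu²).

```python
import numpy as np

# Simulate a covariate measured with classical error and regress the outcome
# on the error-prone version.  With sigma_x = sigma_u, the attenuation factor
# is 0.5, so the naive slope targets beta / 2.
rng = np.random.default_rng(1)
n, beta, sigma_x, sigma_u = 100_000, 2.0, 1.0, 1.0
x = rng.normal(0, sigma_x, n)            # true covariate (unobserved)
w = x + rng.normal(0, sigma_u, n)        # error-prone observed covariate
y = beta * x + rng.normal(0, 0.5, n)

naive_slope = np.cov(w, y)[0, 1] / np.var(w)
print(f"naive slope: {naive_slope:.3f} "
      f"(true beta = {beta}, attenuated target = {beta * 0.5})")
```

Spatial correlation among observations complicates this picture, which is why the paper develops a correction beyond the classical theory.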
Error Evaluation of Methyl Bromide Aerodynamic Flux Measurements
Majewski, M.S.
1997-01-01
Methyl bromide volatilization fluxes were calculated for a tarped and a nontarped field using 2 and 4 hour sampling periods. These field measurements were averaged in 8, 12, and 24 hour increments to simulate longer sampling periods. The daily flux profiles were progressively smoothed, and the cumulative volatility losses increased by 20 to 30% with each longer sampling period. Error associated with the original flux measurements was determined from linear regressions of measured wind speed and air concentration as a function of height, and averaged approximately 50%. The high errors resulted from long application times, which produced a nonuniform source strength, and from variable tarp permeability, which is influenced by temperature, moisture, and thickness. The increases in cumulative volatilization losses that resulted from longer sampling periods were within the experimental error of the flux determination method.
Monte Carlo methods for nonparametric regression with heteroscedastic measurement error.
McIntyre, Julie; Johnson, Brent A; Rappaport, Stephen M
2017-09-15
Nonparametric regression is a fundamental problem in statistics but challenging when the independent variable is measured with error. Among the first approaches was an extension of deconvoluting kernel density estimators for homoscedastic measurement error. The main contribution of this article is to propose a new simulation-based nonparametric regression estimator for the heteroscedastic measurement error case. Similar to some earlier proposals, our estimator is built on principles underlying deconvoluting kernel density estimators. However, the proposed estimation procedure uses Monte Carlo methods for estimating nonlinear functions of a normal mean, which is different from any previous estimator. We show that the estimator has desirable operating characteristics in both large and small samples and apply the method to a study of benzene exposure in Chinese factory workers. © 2017, The International Biometric Society.
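The phrase "estimating nonlinear functions of a normal mean" can be illustrated with a much simpler toy than the paper's estimator. Assuming W ~ N(μ, σ²) with known σ, the plug-in estimate exp(W) of exp(μ) is biased upward, while exp(W − σ²/2) is unbiased; Monte Carlo makes the bias visible. This is only an illustration of the underlying idea, not the authors' procedure.

```python
import numpy as np

# W ~ N(mu, sigma^2).  E[exp(W)] = exp(mu + sigma^2/2), so the plug-in
# estimator of exp(mu) is biased; subtracting sigma^2/2 in the exponent
# removes the bias when sigma is known.
rng = np.random.default_rng(2)
mu, sigma = 1.0, 0.8
w = rng.normal(mu, sigma, 200_000)

plug_in = np.mean(np.exp(w))                   # biased upward
corrected = np.mean(np.exp(w - sigma**2 / 2))  # unbiased for exp(mu)
print(f"target exp(mu) = {np.exp(mu):.3f}, "
      f"plug-in = {plug_in:.3f}, corrected = {corrected:.3f}")
```

Heteroscedastic measurement error replaces the single known σ with observation-specific variances, which is the harder setting the paper addresses.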
Selected error sources in resistance measurements on superconductors
NASA Astrophysics Data System (ADS)
García-Vázquez, Valentín; Pérez-Amaro, Neftalí; Canizo-Cabrera, A.; Cumplido-Espíndola, B.; Martínez-Hernández, R.; Abarca-Ramírez, M. A.
2001-08-01
In order to investigate the causes of some of the unwanted effects observed in resistance versus temperature profiles, a variety of sources of error in resistance measurements on superconductors, using a standard four-probe configuration, have been studied. A piece of superconducting Y1Ba2Cu3O7-x ceramic material was used as the test sample, and the resulting effects on both the accuracy and precision of its temperature-dependent resistance are reported here. The measurement error sources studied include thermal EMFs, temperature sweep rates, Faraday currents, electrical-contact failures at the sample's surface, thermal contractions at mechanically attached instrument wires, external electromagnetic fields, and slow sampling rates during data acquisition. Details of the experimental setup and its measurement error function are also given.
Rater agreement in lung scintigraphy.
Christiansen, F; Andersson, T; Rydman, H; Qvarner, N; Måre, K
1996-09-01
The PIOPED criteria in their original and revised forms are today's standards in the interpretation of ventilation-perfusion scintigraphy. When the PIOPED criteria are used by experienced raters with training in consensus interpretation, the agreement rates have been demonstrated to be excellent. Our purpose was to investigate the rates of agreement between 2 experienced raters from different hospitals who had no training in consensus interpretation. The 2 raters investigated a population of 195 patients. This group included 72 patients from a previous study who had an intermediate probability of pulmonary embolism and who had also been examined by pulmonary angiography. The results demonstrated moderate agreement rates with a kappa value of 0.54 (0.45-0.63 in a 95% confidence interval), which is similar to the kappa value of the PIOPED study but significantly lower than the kappa values of agreement rates among consensus-trained raters. There was a low consistency in the intermediate probability category, with a proportional agreement rate of 0.39 between the experienced raters. The moderate agreement rates between raters from different hospitals make it difficult to compare study populations of a certain scintigraphic category across hospitals. Further investigations are mandatory for accurate diagnosis when the scintigrams are in the category of intermediate probability of pulmonary embolism.
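A kappa value like the 0.54 reported above is Cohen's chance-corrected agreement, κ = (p_o − p_e)/(1 − p_e), where p_o is the observed agreement and p_e the agreement expected by chance from each rater's marginal frequencies. A minimal sketch with hypothetical PIOPED-style probability categories:

```python
# Cohen's kappa for two raters over the same set of cases.
def cohens_kappa(r1, r2):
    cats = sorted(set(r1) | set(r2))
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n                 # observed agreement
    p_e = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical probability classifications for eight scans:
rater1 = ["low", "low", "intermediate", "high", "intermediate", "low", "high", "low"]
rater2 = ["low", "intermediate", "intermediate", "high", "low", "low", "high", "low"]
kappa = cohens_kappa(rater1, rater2)
print(f"kappa = {kappa:.3f}")
```

Values around 0.4 to 0.6 are conventionally read as "moderate" agreement, which matches how the abstract characterizes its result.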
Analysis and improvement of gas turbine blade temperature measurement error
NASA Astrophysics Data System (ADS)
Gao, Shan; Wang, Lixin; Feng, Chi; Daniel, Ketui
2015-10-01
Gas turbine blade components are easily damaged; they also operate in harsh high-temperature, high-pressure environments over extended durations. Therefore, ensuring that the blade temperature remains within the design limits is very important. In this study, measurement errors in turbine blade temperatures were analyzed, taking into account detector lens contamination, the reflection of environmental energy from the target surface, the effects of the combustion gas, and the emissivity of the blade surface. In this paper, each of the above sources of measurement error is discussed, and an iterative computing method for calculating blade temperature is proposed.
Error-disturbance uncertainty relations in neutron spin measurements
NASA Astrophysics Data System (ADS)
Sponar, Stephan
2016-05-01
Heisenberg’s uncertainty principle, in its formulation in terms of uncertainties intrinsic to any quantum system, is rigorously proven and has been demonstrated in various quantum systems. Nevertheless, Heisenberg’s original formulation of the uncertainty principle was given in terms of a reciprocal relation between the error of a position measurement and the disturbance it thereby induces on a subsequent momentum measurement. However, a naive generalization of a Heisenberg-type error-disturbance relation to arbitrary observables is not valid. An alternative universally valid relation was derived by Ozawa in 2003. Though universally valid, Ozawa’s relation is not optimal. Recently, Branciard derived a tight error-disturbance uncertainty relation (EDUR), describing the optimal trade-off between error and disturbance under certain conditions. Here, we report a neutron-optical experiment that records the error of a spin-component measurement, as well as the disturbance caused on another spin component, to test EDURs. We demonstrate that Heisenberg’s original EDUR is violated, that Ozawa’s and Branciard’s EDURs are valid over a wide range of experimental parameters, and that Branciard’s relation is tight.
A rater training protocol to assess team performance.
Eppich, Walter; Nannicelli, Anna P; Seivert, Nicholas P; Sohn, Min-Woong; Rozenfeld, Ranna; Woods, Donna M; Holl, Jane L
2015-01-01
Simulation-based methodologies are increasingly used to assess teamwork and communication skills and provide team training. Formative feedback regarding team performance is an essential component. While effective use of simulation for assessment or training requires accurate rating of team performance, examples of rater-training programs in health care are scarce. We describe our rater training program and report interrater reliability during phases of training and independent rating. We selected an assessment tool shown to yield valid and reliable results and developed a rater training protocol with an accompanying rater training handbook. The rater training program was modeled after previously described high-stakes assessments in the setting of 3 facilitated training sessions. Adjacent agreement was used to measure interrater reliability between raters. Nine raters with a background in health care and/or patient safety evaluated team performance of 42 in-situ simulations using post-hoc video review. Adjacent agreement increased from the second training session (83.6%) to the third training session (85.6%) when evaluating the same video segments. The rating of overall team performance, added for the third training session, showed an adjacent agreement of 78.3%. Adjacent agreement was 97% 4 weeks post-training and 90.6% at the end of independent rating of all simulation videos. Rater training is an important element in team performance assessment, and providing examples of rater training programs is essential. Articulating key rating anchors promotes adequate interrater reliability. In addition, using adjacent agreement as a measure allows differentiation between high- and low-performing teams on video review. © 2015 The Alliance for Continuing Education in the Health Professions, the Society for Academic Continuing Medical Education, and the Council on Continuing Medical Education, Association for Hospital Medical Education.
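Adjacent agreement, the reliability measure used above, counts two ratings as agreeing when they fall within one scale point of each other. The abstract does not give the rating scale, so this sketch assumes a 5-point scale and a one-point tolerance, with hypothetical ratings.

```python
# Proportion of cases where two raters' ordinal ratings differ by at most
# `tolerance` scale points (exact agreement is tolerance = 0).
def adjacent_agreement(r1, r2, tolerance=1):
    hits = sum(abs(a - b) <= tolerance for a, b in zip(r1, r2))
    return hits / len(r1)

# Hypothetical 5-point team-performance ratings for ten simulation videos:
rater_a = [3, 4, 2, 5, 3, 4, 1, 3, 4, 5]
rater_b = [4, 4, 2, 3, 3, 5, 2, 3, 2, 5]
agreement = adjacent_agreement(rater_a, rater_b)
print(f"adjacent agreement: {agreement:.1%}")
```

Because one-point disagreements still count, adjacent agreement is more forgiving than exact agreement but, as the abstract notes, still separates high- from low-performing teams.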
Error compensation research on the focal plane attitude measurement instrument
NASA Astrophysics Data System (ADS)
Zhou, Hongfei; Zhang, Feifan; Zhai, Chao; Zhou, Zengxiang; Liu, Zhigang; Wang, Jianping
2016-07-01
The surface accuracy of an astronomical telescope's focal plate is a key indicator of precision in stellar observation. Building on the six-DOF parallel focal plane attitude measurement instrument that had already been designed, the space attitude error compensation of the instrument was studied in order to accurately measure the deformation and surface shape of the focal plane in different space attitudes.
Non-Gaussian error distribution of 7Li abundance measurements
NASA Astrophysics Data System (ADS)
Crandall, Sara; Houston, Stephen; Ratra, Bharat
2015-07-01
We construct the error distribution of 7Li abundance measurements for 66 observations (with error bars) used by Spite et al. (2012) that give A(Li) = 2.21 ± 0.065 (median and 1σ symmetrized error). This error distribution is somewhat non-Gaussian, with larger probability in the tails than is predicted by a Gaussian distribution. The 95.4% confidence limits are 3.0σ in terms of the quoted errors. We fit the data to four commonly used distributions: Gaussian, Cauchy, Student’s t and double exponential with the center of the distribution found with both weighted mean and median statistics. It is reasonably well described by a widened n = 8 Student’s t distribution. Assuming Gaussianity, the observed A(Li) is 6.5σ away from that expected from standard Big Bang Nucleosynthesis (BBN) given the Planck observations. Accounting for the non-Gaussianity of the observed A(Li) error distribution reduces the discrepancy to 4.9σ, which is still significant.
Cumulative Measurement Errors for Dynamic Testing of Space Flight Hardware
NASA Technical Reports Server (NTRS)
Winnitoy, Susan
2012-01-01
measurements during hardware motion and contact. While performing dynamic testing of an active docking system, researchers found that the data from the motion platform, test hardware and two external measurement systems exhibited frame offsets and rotational errors. While the errors were relatively small when considering the motion scale overall, they substantially exceeded the individual accuracies for each component. After evaluating both the static and dynamic measurements, researchers found that the static measurements introduced significantly more error into the system than the dynamic measurements even though, in theory, the static measurement errors should be smaller than the dynamic. In several cases, the magnitude of the errors varied widely for the static measurements. Upon further investigation, researchers found the larger errors to be a consequence of hardware alignment issues, frame location and measurement technique whereas the smaller errors were dependent on the number of measurement points. This paper details and quantifies the individual and cumulative errors of the docking system and describes methods for reducing the overall measurement error. The overall quality of the dynamic docking tests for flight hardware verification was improved by implementing these error reductions.
Optimal measurement strategies for effective suppression of drift errors
Yashchuk, Valeriy V.
2009-04-16
Drifting of experimental set-ups with change of temperature or other environmental conditions is the limiting factor of many, if not all, precision measurements. The measurement error due to a drift is, in some sense, in-between random noise and systematic error. In the general case, the error contribution of a drift cannot be averaged out using a number of measurements identically carried out over a reasonable time. In contrast to systematic errors, drifts are usually not stable enough for a precise calibration. Here a rather general method for effective suppression of the spurious effects caused by slow drifts in a large variety of instruments and experimental set-ups is described. An analytical derivation of an identity, describing the optimal measurement strategies suitable for suppressing the contribution of a slow drift described with a certain order polynomial function, is presented. A recursion rule as well as a general mathematical proof of the identity is given. The effectiveness of the discussed method is illustrated with an application of the derived optimal scanning strategies to precise surface slope measurements with a surface profiler.
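As a concrete special case of the polynomial-drift suppression the abstract describes (not the paper's general derivation), an A-B-B-A measurement sequence with equally spaced readings cancels any linear drift when estimating the difference between two quantities. The drift rate and values below are hypothetical.

```python
import numpy as np

# Hypothetical linear instrument drift added to every reading.
def drift(t):
    return 0.7 * t

a_true, b_true = 10.0, 12.0
t = np.array([0.0, 1.0, 2.0, 3.0])            # equally spaced measurement times
readings = np.array([a_true, b_true, b_true, a_true]) + drift(t)

# Naive A - B from the first pair is biased by the drift accrued between them:
naive = readings[0] - readings[1]
# The symmetric ABBA combination cancels any drift that is linear in time:
abba = (readings[0] - readings[1] - readings[2] + readings[3]) / 2
print(f"naive A-B = {naive:.2f}, ABBA A-B = {abba:.2f} "
      f"(true = {a_true - b_true:.2f})")
```

Higher-order polynomial drifts require longer symmetric sequences, which is the direction the paper's recursion rule generalizes.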
The effect of measurement error on surveillance metrics
Weaver, Brian Phillip; Hamada, Michael S.
2012-04-24
The purpose of this manuscript is to describe different simulation studies that CCS-6 has performed to understand the effects of measurement error on surveillance metrics. We assume that the measured items come from a larger population of items. We denote the random variable associated with an item's value of an attribute of interest as X, with X ~ N(μ, σ²). This distribution represents the variability in the population of interest, and we wish to make inference on the parameters μ and σ or on some function of these parameters. When an item X is selected from the larger population, a measurement is made on some attribute of it. This measurement is made with error, and the true value of X is not observed. The rest of this section presents simulation results for the different measurement cases encountered.
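A toy simulation in the spirit described makes the effect concrete: if items X ~ N(μ, σ²) are observed through Y = X + e with independent error e ~ N(0, τ²), the naive standard deviation of the measurements targets sqrt(σ² + τ²), not σ. All parameter values here are hypothetical.

```python
import numpy as np

# Population attribute values and their error-contaminated measurements.
rng = np.random.default_rng(3)
mu, sigma, tau, n = 50.0, 2.0, 1.5, 100_000
x = rng.normal(mu, sigma, n)          # true attribute values (unobserved)
y = x + rng.normal(0, tau, n)         # measured values

print(f"sd of true values:     {x.std(ddof=1):.3f}")
print(f"sd of measured values: {y.std(ddof=1):.3f} "
      f"(targets sqrt(sigma^2 + tau^2) = {np.hypot(sigma, tau):.3f})")
```

Any surveillance metric built on the measured spread therefore overstates the population variability unless the measurement error variance is subtracted out.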
Yohay Carmel; Curtis Flather; Denis Dean
2006-01-01
This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...
GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS
Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...
Stabilizing Conditional Standard Errors of Measurement in Scale Score Transformations
ERIC Educational Resources Information Center
Moses, Tim; Kim, YoungKoung
2017-01-01
The focus of this article is on scale score transformations that can be used to stabilize conditional standard errors of measurement (CSEMs). Three transformations for stabilizing the estimated CSEMs are reviewed, including the traditional arcsine transformation, a recently developed general variance stabilization transformation, and a new method…
Three Approximations of Standard Error of Measurement: An Empirical Approach.
ERIC Educational Resources Information Center
Garvin, Alfred D.
Three successively simpler formulas for approximating the standard error of measurement were derived by applying successively more simplifying assumptions to the standard formula based on the standard deviation and the Kuder-Richardson formula 20 estimate of reliability. The accuracy of each of these three formulas, with respect to the standard…
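The standard formula this record starts from can be sketched directly: SEM = SD × sqrt(1 − r), with r an internal-consistency reliability estimate such as KR-20. The SD and reliability values below are hypothetical.

```python
import math

# Standard error of measurement from a test's score SD and its reliability.
def sem(sd, reliability):
    return sd * math.sqrt(1 - reliability)

# e.g., a test with score SD = 6 and KR-20 reliability = .91:
print(f"SEM = {sem(6.0, 0.91):.2f} score points")
```

The approximations the record refers to replace SD and r in this expression with simpler quantities; how much accuracy each substitution costs is the empirical question it studies.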
Repeated measurement sampling in genetic association analysis with genotyping errors.
Lai, Renzhen; Zhang, Hong; Yang, Yaning
2007-02-01
Genotype misclassification occurs frequently in human genetic association studies. When cases and controls are subject to the same misclassification model, Pearson's chi-square test has the correct type I error but may lose power. Most current methods adjusting for genotyping errors assume that the misclassification model is known a priori or can be assessed by a gold standard instrument. In practical applications, however, the misclassification probabilities may not be completely known, or the gold standard method may be too costly to be available. The repeated measurement design provides an alternative approach for identifying misclassification probabilities. With this design, a proportion of the subjects are measured repeatedly (five or more repeats) for the genotypes when the error model is completely unknown. We investigate the applications of the repeated measurement method in genetic association analysis. A cost-effectiveness study shows that if the phenotyping-to-genotyping cost ratio or the misclassification rates are relatively large, repeated sampling can gain power over the regular case-control design. We also show that the power gain is not sensitive to the genetic model, the genetic relative risk, or the population high-risk allele frequency, all of which are typically important ingredients in association studies. An important implication of this result is that, whatever the genetic factors are, the repeated measurement method can be applied if genotyping errors must be accounted for or the phenotyping cost is high.
Nonparametric Item Response Curve Estimation with Correction for Measurement Error
ERIC Educational Resources Information Center
Guo, Hongwen; Sinharay, Sandip
2011-01-01
Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally.…
Does a Rater's Professional Background Influence Communication Skills Assessment?
Artemiou, Elpida; Hecker, Kent G; Adams, Cindy L; Coe, Jason B
2015-01-01
There is increasing pressure in veterinary education to teach and assess communication skills, with the Objective Structured Clinical Examination (OSCE) being the most common assessment method. Previous research reveals that raters are a large source of variance in OSCEs. This study focused on examining the effect of raters' professional background as a source of variance when assessing students' communication skills. Twenty-three raters were categorized according to their professional background: clinical sciences (n=11), basic sciences (n=4), clinical communication (n=5), or hospital administrator/clinical skills technicians (n=3). Raters from each professional background were assigned to the same station and assessed the same students during two four-station OSCEs. Students were in year 2 of their pre-clinical program. Repeated-measures ANOVA results showed that OSCE scores awarded by the rater groups differed significantly: (F(matched_station_1) [2,91]=6.97, p=.002), (F(matched_station_2) [3,90]=13.95, p=.001), (F(matched_station_3) [3,90]=8.76, p=.001), and (F(matched_station_4) [2,91]=30.60, p=.001). A significant time effect between the two OSCEs was calculated for matched stations 1, 2, and 4, indicating improved student performances. Raters with a clinical communication skills background assigned scores that were significantly lower than those of the other rater groups. Analysis of written feedback provided by the clinical sciences raters showed that they were influenced by the students' clinical knowledge of the case and that they did not rely solely on the communication checklist items. This study shows that it is important to consider rater background in both recruitment and training programs for communication skills assessment.
Error analysis on spinal motion measurement using skin mounted sensors.
Yang, Zhengyi; Ma, Heather Ting; Wang, Deming; Lee, Raymond
2008-01-01
Measurement errors of skin-mounted sensors in measuring the forward bending movement of the lumbar spine are investigated. In this investigation, radiographic images capturing the positions of the entire lumbar spine were acquired and used as a 'gold' standard. Seventeen young male volunteers (21 (SD 1) years old) agreed to participate in the study. Lightweight miniature sensors of an electromagnetic tracking system (Fastrak) were attached to the skin overlying the spinous processes of the lumbar spine. With the sensors attached, the subjects were requested to take lateral radiographs in two postures: neutral upright and full flexion. The ranges of motion of the lumbar spine were calculated from two sets of digitized data, the bony markers of the vertebral bodies and the sensors, and compared. The differences between the two sets of results were then analyzed. The relative movement between sensor and vertebra was decomposed into sensor sliding and tilting, from which a sliding error and a tilting error were defined. The gross motion range of forward bending of the lumbar spine measured from the bony markers of the vertebrae was 67.8 degrees (SD 10.6 degrees) and that from the sensors was 62.8 degrees (SD 12.8 degrees). The error and absolute error for the gross motion range were 5.0 degrees (SD 7.2 degrees) and 7.7 degrees (SD 3.9 degrees). The contributions of the sensors placed on S1 and L1 to the absolute error were 3.9 degrees (SD 2.9 degrees) and 4.4 degrees (SD 2.8 degrees), respectively.
Comparing measurement errors for formants in synthetic and natural vowels
Shadle, Christine H.; Nam, Hosung; Whalen, D. H.
2016-01-01
The measurement of formant frequencies of vowels is among the most common measurements in speech studies, but measurements are known to be biased by the particular fundamental frequency (F0) exciting the formants. Approaches to reducing the errors were assessed in two experiments. In the first, synthetic vowels were constructed with five different first formant (F1) values and nine different F0 values; formant bandwidths, and higher formant frequencies, were constant. Input formant values were compared to manual measurements and automatic measures using the linear prediction coding-Burg algorithm, linear prediction closed-phase covariance, the weighted linear prediction-attenuated main excitation (WLP-AME) algorithm [Alku, Pohjalainen, Vainio, Laukkanen, and Story (2013). J. Acoust. Soc. Am. 134(2), 1295–1313], spectra smoothed cepstrally and by averaging repeated discrete Fourier transforms. Formants were also measured manually from pruned reassigned spectrograms (RSs) [Fulop (2011). Speech Spectrum Analysis (Springer, Berlin)]. All but WLP-AME and RS had large errors in the direction of the strongest harmonic; the smallest errors occur with WLP-AME and RS. In the second experiment, these methods were used on vowels in isolated words spoken by four speakers. Results for the natural speech show that F0 bias affects all automatic methods, including WLP-AME; only the formants measured manually from RS appeared to be accurate. In addition, RS coped better with weaker formants and glottal fry. PMID:26936555
Chen, Benyong; Xu, Bin; Yan, Liping; Zhang, Enzheng; Liu, Yanna
2015-04-06
A laser straightness interferometer system with rotational error compensation and simultaneous measurement of six degrees of freedom (DOF) error parameters is proposed. The optical configuration of the proposed system is designed, and a mathematical model is established for simultaneously measuring six DOF parameters of the measured object: three rotational parameters (the yaw, pitch, and roll errors) and three linear parameters (the horizontal straightness error, the vertical straightness error, and the straightness error's position). To address the influence of the rotational errors produced by the measuring reflector in the laser straightness interferometer, a compensation method for the straightness error and its position is presented. An experimental setup was constructed, and a series of experiments, including separate comparison measurements of each parameter, compensation of the straightness error and its position, and simultaneous measurement of the six DOF parameters of a precision linear stage, were performed to demonstrate the feasibility of the proposed system. Experimental results show that the measurements of the multiple DOF parameters obtained from the proposed system agree with those obtained from the reference instruments, and that the presented compensation method is effective in eliminating the influence of rotational errors on the measurement of the straightness error and its position.
Inter-tester Agreement in Refractive Error Measurements
Huang, Jiayan; Maguire, Maureen G.; Ciner, Elise; Kulp, Marjean T.; Quinn, Graham E.; Orel-Bixler, Deborah; Cyert, Lynn A.; Moore, Bruce; Ying, Gui-Shuang
2014-01-01
Purpose To determine the inter-tester agreement of refractive error measurements between lay and nurse screeners using the Retinomax Autorefractor (Retinomax) and the SureSight Vision Screener (SureSight). Methods Trained lay and nurse screeners measured refractive error in 1452 preschoolers (3- to 5-years old) using the Retinomax and the SureSight in a random order for screeners and instruments. Inter-tester agreement between lay and nurse screeners was assessed for sphere, cylinder and spherical equivalent (SE) using the mean difference and the 95% limits of agreement. The mean inter-tester difference (lay minus nurse) was compared between groups defined based on child’s age, cycloplegic refractive error, and the reading’s confidence number using analysis of variance. The limits of agreement were compared between groups using the Brown-Forsythe test. Inter-eye correlation was accounted for in all analyses. Results The mean inter-tester differences (95% limits of agreement) were −0.04 (−1.63, 1.54) Diopter (D) sphere, 0.00 (−0.52, 0.51) D cylinder, and −0.04 (−1.65, 1.56) D SE for the Retinomax; and 0.05 (−1.48, 1.58) D sphere, 0.01 (−0.58, 0.60) D cylinder, and 0.06 (−1.45, 1.57) D SE for the SureSight. For either instrument, the mean inter-tester differences in sphere and SE did not differ by the child’s age, cycloplegic refractive error, or the reading’s confidence number. However, for both instruments, the limits of agreement were wider when eyes had significant refractive error or the reading’s confidence number was below the manufacturer’s recommended value. Conclusions Among Head Start preschool children, trained lay and nurse screeners agree well in measuring refractive error using the Retinomax or the SureSight. Both instruments had similar inter-tester agreement in refractive error measurements independent of the child’s age. Significant refractive error and a reading with low confidence number were associated with worse inter-tester agreement.
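The agreement statistics used in this study, the mean inter-tester difference and the 95% limits of agreement (mean ± 1.96 × SD of the differences, in the Bland-Altman style), can be sketched as follows. The sphere readings are hypothetical, and the 1.96 multiplier assumes approximately normal differences.

```python
import numpy as np

# Hypothetical sphere readings (diopters) from two testers on the same eyes.
lay   = np.array([1.25, -0.50, 2.00, 0.75, -1.25, 3.00, 0.25, 1.50])
nurse = np.array([1.00, -0.25, 2.25, 0.50, -1.50, 3.25, 0.50, 1.25])

d = lay - nurse
mean_diff = d.mean()                          # systematic bias between testers
sd_diff = d.std(ddof=1)
loa = (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)
print(f"mean difference: {mean_diff:+.2f} D, "
      f"95% limits of agreement: ({loa[0]:+.2f}, {loa[1]:+.2f}) D")
```

A mean difference near zero with narrow limits, as reported above for both instruments, indicates that the two tester groups can be used interchangeably.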
Semiparametric maximum likelihood for nonlinear regression with measurement errors.
Suh, Eun-Young; Schafer, Daniel W
2002-06-01
This article demonstrates semiparametric maximum likelihood estimation of a nonlinear growth model for fish lengths using imprecisely measured ages. Data on the species corvina reina, found in the Gulf of Nicoya, Costa Rica, consist of lengths and imprecise ages for 168 fish and precise ages for a subset of 16 fish. The statistical problem may therefore be classified as nonlinear errors-in-variables regression with internal validation data. Inferential techniques are based on ideas extracted from several previous works on semiparametric maximum likelihood for errors-in-variables problems. The example illustrates practical aspects of the associated computational, inferential, and data-analytic techniques.
Error Correction for Foot Clearance in Real-Time Measurement
NASA Astrophysics Data System (ADS)
Wahab, Y.; Bakar, N. A.; Mazalan, M.
2014-04-01
Mobility performance level, fall-related injuries, undetected disease and aging stage can be detected through examination of the gait pattern. The gait pattern is normally directly related to the performance condition of the lower limbs, in addition to other significant factors. For that reason, the foot is the most important part of an in-situ gait analysis measurement system and directly affects the measured gait pattern. This paper reviews the development of an ultrasonic system with error correction using an inertial measurement unit for real-life measurement of foot clearance in gait analysis. The paper begins with the related literature, where the necessity of the measurement is introduced, followed by the methodology section covering the problem and its solution. Next, it explains the experimental setup for error correction using the proposed instrumentation, together with results and discussion. Finally, it outlines the planned future work.
Effects of measurement errors on microwave antenna holography
NASA Technical Reports Server (NTRS)
Rochblatt, David J.; Rahmat-Samii, Yahya
1991-01-01
The effects of measurement errors appearing during the implementation of the microwave holographic technique are investigated in detail, and many representative results are presented based on computer simulations. The numerical results are tailored for cases applicable to the utilization of the holographic technique for the NASA's Deep Space Network antennas, although the methodology of analysis is applicable to any antenna. Many system measurement topics are presented and summarized.
#2 - An Empirical Assessment of Exposure Measurement Error ...
Background: • Differing degrees of exposure error across pollutants • Previous focus on quantifying and accounting for exposure error in single-pollutant models • Examine exposure errors for multiple pollutants and provide insights on the potential for bias and attenuation of effect estimates in single- and bi-pollutant epidemiological models. The National Exposure Research Laboratory (NERL) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of the EPA mission to protect human health and the environment. The HEASD research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of the EPA strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between and characterize processes that link source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.
Error analysis for NMR polymer microstructure measurement without calibration standards.
Qiu, XiaoHua; Zhou, Zhe; Gobbi, Gian; Redwine, Oscar D
2009-10-15
We report an error analysis method for primary analytical methods in the absence of calibration standards. Quantitative (13)C NMR analysis of ethylene/1-octene (E/O) copolymers is given as an example. Because the method is based on a self-calibration scheme established by counting, it is a measure of accuracy rather than precision. We demonstrate that it is self-consistent and neither underestimates nor excessively overestimates the experimental errors. We also show that the method identified previously unknown systematic biases in an NMR instrument. The method can eliminate unnecessary data averaging to save valuable NMR resources. The accuracy estimate proposed is not unique to (13)C NMR spectroscopy of E/O but should be applicable to all other measurement systems where the accuracy of a subset of the measured responses can be established.
Confounding and exposure measurement error in air pollution epidemiology.
Sheppard, Lianne; Burnett, Richard T; Szpiro, Adam A; Kim, Sun-Young; Jerrett, Michael; Pope, C Arden; Brunekreef, Bert
2012-06-01
Studies in air pollution epidemiology may suffer from some specific forms of confounding and exposure measurement error. This contribution discusses these, mostly in the framework of cohort studies. Evaluation of potential confounding is critical in studies of the health effects of air pollution. The association between long-term exposure to ambient air pollution and mortality has been investigated using cohort studies in which subjects are followed over time with respect to their vital status. In such studies, control for individual-level confounders such as smoking is important, as is control for area-level confounders such as neighborhood socio-economic status. In addition, there may be spatial dependencies in the survival data that need to be addressed. These issues are illustrated using the American Cancer Society Cancer Prevention II cohort. Exposure measurement error is a challenge in epidemiology because inference about health effects can be incorrect when the measured or predicted exposure used in the analysis is different from the underlying true exposure. Air pollution epidemiology rarely if ever uses personal measurements of exposure for reasons of cost and feasibility. Exposure measurement error in air pollution epidemiology comes in various dominant forms, which are different for time-series and cohort studies. The challenges are reviewed and a number of suggested solutions are discussed for both study domains.
Virtual Raters for Reproducible and Objective Assessments in Radiology
NASA Astrophysics Data System (ADS)
Kleesiek, Jens; Petersen, Jens; Döring, Markus; Maier-Hein, Klaus; Köthe, Ullrich; Wick, Wolfgang; Hamprecht, Fred A.; Bendszus, Martin; Biller, Armin
2016-04-01
Volumetric measurements in radiologic images are important for monitoring tumor growth and treatment response. To make these more reproducible and objective we introduce the concept of virtual raters (VRs). A virtual rater is obtained by combining knowledge of machine-learning algorithms trained with past annotations of multiple human raters with the instantaneous rating of one human expert. Thus, it is virtually guided by several experts. To evaluate the approach we perform experiments with multi-channel magnetic resonance imaging (MRI) data sets. In addition to gross tumor volume (GTV) we also investigate subcategories like edema, contrast-enhancing and non-enhancing tumor. The first data set consists of N = 71 longitudinal follow-up scans of 15 patients suffering from glioblastoma (GB). The second data set comprises N = 30 scans of low- and high-grade gliomas. For comparison we computed Pearson Correlation, Intra-class Correlation Coefficient (ICC) and Dice score. Virtual raters always lead to an improvement with respect to inter- and intra-rater agreement. Comparing the 2D Response Assessment in Neuro-Oncology (RANO) measurements to the volumetric measurements of the virtual raters yields a deviating rating in one-third of the cases. Hence, we believe that our approach will have an impact on the evaluation of clinical studies as well as on routine imaging diagnostics.
Error and uncertainty in Raman thermal conductivity measurements
Thomas Edwin Beechem; Yates, Luke; Graham, Samuel
2015-04-22
We investigated error and uncertainty in Raman thermal conductivity measurements via finite element based numerical simulation of two geometries often employed -- Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter -- termed the Raman stress factor -- is derived to identify when stress effects will induce large levels of error. Together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.
Reducing Errors by Use of Redundancy in Gravity Measurements
NASA Technical Reports Server (NTRS)
Kulikov, Igor; Zak, Michail
2004-01-01
A methodology for improving gravity-gradient measurement data exploits the constraints imposed upon the components of the gravity-gradient tensor by the conditions of integrability needed for reconstruction of the gravitational potential. These constraints are derived from the basic equation for the gravitational potential and from mathematical identities that apply to the gravitational potential and its partial derivatives with respect to spatial coordinates. Consider the gravitational potential in a Cartesian coordinate system {x1,x2,x3}. If one measures all the components of the gravity-gradient tensor at all points of interest within a region of space in which one seeks to characterize the gravitational field, one obtains redundant information. One could utilize the constraints to select a minimum (that is, nonredundant) set of measurements from which the gravitational potential could be reconstructed. Alternatively, one could exploit the redundancy to reduce errors from noisy measurements. A convenient example is that of the selection of a minimum set of measurements to characterize the gravitational field at n³ points (where n is an integer) in a cube. Without the benefit of such a selection, it would be necessary to make 9n³ measurements because the gravity-gradient tensor has 9 components at each point. The problem of utilizing the redundancy to reduce errors in noisy measurements is an optimization problem: Given a set of noisy values of the components of the gravity-gradient tensor at the measurement points, one seeks a set of corrected values - a set that is optimum in that it minimizes some measure of error (e.g., the sum of squares of the differences between the corrected and noisy measurement values) while taking account of the fact that the constraints must apply to the exact values. The problem as thus posed leads to a vector equation that can be solved to obtain the corrected values.
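The constraints described above can be made concrete at a single point: in free space the gravity-gradient tensor is symmetric (mixed partial derivatives of the potential commute) and traceless (Laplace's equation), so the least-squares correction of a noisy 3×3 measurement is simply an orthogonal projection onto that constraint subspace. A minimal sketch under those assumptions (the matrix values are made up):

```python
import numpy as np

def correct_gradient(G):
    """Project a noisy 3x3 gravity-gradient measurement onto the
    constraint subspace: symmetric (mixed partials commute) and
    traceless (Laplace's equation in source-free space).  Because the
    subspace is linear, this projection is exactly the least-squares
    correction: it minimizes the sum of squared changes to the nine
    measured components subject to the constraints."""
    S = 0.5 * (G + G.T)                          # enforce symmetry
    return S - (np.trace(S) / 3.0) * np.eye(3)   # enforce zero trace

# Hypothetical noisy measurement (arbitrary units).
G = np.array([[ 1.0,  0.2, -0.1],
              [ 0.3, -0.4,  0.0],
              [-0.2,  0.1, -0.5]])
C = correct_gradient(G)
```

The full method in the abstract additionally couples measurements at neighboring points through the integrability conditions; the point-wise projection above only illustrates the basic idea.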
Body Shape Preferences: Associations with Rater Body Shape and Sociosexuality
Price, Michael E.; Pound, Nicholas; Dunn, James; Hopkins, Sian; Kang, Jinsheng
2013-01-01
There is accumulating evidence of condition-dependent mate choice in many species, that is, individual preferences varying in strength according to the condition of the chooser. In humans, for example, people with more attractive faces/bodies, and who are higher in sociosexuality, exhibit stronger preferences for attractive traits in opposite-sex faces/bodies. However, previous studies have tended to use only relatively simple, isolated measures of rater attractiveness. Here we use 3D body scanning technology to examine associations between strength of rater preferences for attractive traits in opposite-sex bodies, and raters’ body shape, self-perceived attractiveness, and sociosexuality. For 118 raters and 80 stimuli models, we used a 3D scanner to extract body measurements associated with attractiveness (male waist-chest ratio [WCR], female waist-hip ratio [WHR], and volume-height index [VHI] in both sexes) and also measured rater self-perceived attractiveness and sociosexuality. As expected, WHR and VHI were important predictors of female body attractiveness, while WCR and VHI were important predictors of male body attractiveness. Results indicated that male rater sociosexuality scores were positively associated with strength of preference for attractive (low) VHI and attractive (low) WHR in female bodies. Moreover, male rater self-perceived attractiveness was positively associated with strength of preference for low VHI in female bodies. The only evidence of condition-dependent preferences in females was a positive association between attractive VHI in female raters and preferences for attractive (low) WCR in male bodies. No other significant associations were observed in either sex between aspects of rater body shape and strength of preferences for attractive opposite-sex body traits. These results suggest that among male raters, rater self-perceived attractiveness and sociosexuality are important predictors of preference strength for attractive opposite
Measurement Error of Dietary Self-Report in Intervention Trials
Natarajan, Loki; Pu, Minya; Fan, Juanjuan; Levine, Richard A.; Patterson, Ruth E.; Thomson, Cynthia A.; Rock, Cheryl L.; Pierce, John P.
2010-01-01
Dietary intervention trials aim to change dietary patterns of individuals. Participating in such trials could impact dietary self-report in divergent ways: Dietary counseling and training on portion-size estimation could improve self-report accuracy; participant burden could increase systematic error. Such intervention-associated biases could complicate interpretation of trial results. The authors investigated intervention-associated biases in reported total carotenoid intake using data on 3,088 breast cancer survivors recruited between 1995 and 2000 and followed through 2006 in the Women's Healthy Eating and Living Study, a randomized intervention trial. Longitudinal data from 2 self-report methods (24-hour recalls and food frequency questionnaires) and a plasma carotenoid biomarker were collected. A flexible measurement error model was postulated. Parameters were estimated in a Bayesian framework by using Markov chain Monte Carlo methods. Results indicated that the validity (i.e., correlation with “true” intake) of both self-report methods was significantly higher during follow-up for intervention versus nonintervention participants (4-year validity estimates: intervention = 0.57 for food frequency questionnaires and 0.58 for 24-hour recalls; nonintervention = 0.42 for food frequency questionnaires and 0.48 for 24-hour recalls). However, within- and between-instrument error correlations during follow-up were higher among intervention participants, indicating an increase in systematic error. Diet interventions can impact measurement errors of dietary self-report. Appropriate statistical methods should be applied to examine intervention-associated biases when interpreting results of diet trials. PMID:20720101
Error in total ozone measurements arising from aerosol attenuation
NASA Technical Reports Server (NTRS)
Thomas, R. W. L.; Basher, R. E.
1979-01-01
A generalized least squares method for deducing both total ozone and aerosol extinction spectrum parameters from Dobson spectrophotometer measurements was developed. An error analysis applied to this system indicates that there is little advantage to additional measurements once a sufficient number of line pairs have been employed to solve for the selected detail in the attenuation model. It is shown that when there is a predominance of small particles (less than about 0.35 microns in diameter) the total ozone from the standard AD system is too high by about one percent. When larger particles are present the derived total ozone may be an overestimate or an underestimate but serious errors occur only for narrow polydispersions.
Bias and standard error for social reciprocity measurements.
Solanas, Antonio; Leiva, David; Salafranca, Lluís
2010-02-01
The directional consistency and skew-symmetry statistics have been proposed as global measure of social reciprocity. Although both measures can be useful for quantifying social reciprocity, researchers need to know whether these estimators are biased in order properly to assess descriptive results. That is, if estimators are biased, researchers should compare actual values with expected values under the specified null hypothesis. Furthermore, standard errors are needed to enable suitable assessment of discrepancies between actual and expected values. This paper aims to derive some exact and approximate expressions in order to obtain bias and standard error values for both estimators for round-robin designs, although the results can also be extended to other reciprocal designs.
Efficient measurement error correction with spatially misaligned data
Szpiro, Adam A.; Sheppard, Lianne; Lumley, Thomas
2011-01-01
Association studies in environmental statistics often involve exposure and outcome data that are misaligned in space. A common strategy is to employ a spatial model such as universal kriging to predict exposures at locations with outcome data and then estimate a regression parameter of interest using the predicted exposures. This results in measurement error because the predicted exposures do not correspond exactly to the true values. We characterize the measurement error by decomposing it into Berkson-like and classical-like components. One correction approach is the parametric bootstrap, which is effective but computationally intensive since it requires solving a nonlinear optimization problem for the exposure model parameters in each bootstrap sample. We propose a less computationally intensive alternative termed the “parameter bootstrap” that only requires solving one nonlinear optimization problem, and we also compare bootstrap methods to other recently proposed methods. We illustrate our methodology in simulations and with publicly available data from the Environmental Protection Agency. PMID:21252080
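The core problem these corrections address is that regressing an outcome on an error-contaminated exposure biases the estimated coefficient. A self-contained simulation of classical measurement error, with a simple regression-calibration correction standing in for the bootstrap machinery of the paper (all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# True exposure x drives the outcome, but we only observe w = x + u,
# where u is classical measurement error.  The naive slope of y on w
# is attenuated by var(x) / (var(x) + var(u)) -- here a factor of 1/2.
n, b_true, sd_u = 5000, 2.0, 1.0
x = rng.normal(0.0, 1.0, n)
y = b_true * x + rng.normal(0.0, 0.5, n)
w = x + rng.normal(0.0, sd_u, n)

def slope(u, v):
    """OLS slope of v regressed on u."""
    return np.cov(u, v)[0, 1] / np.var(u, ddof=1)

b_naive = slope(w, y)   # biased toward zero, roughly 1.0 here
# Regression-calibration correction when var(u) is known or estimated:
b_corr = b_naive * np.var(w, ddof=1) / (np.var(w, ddof=1) - sd_u**2)
```

The spatially misaligned setting of the paper is harder because the prediction error mixes Berkson-like and classical-like components, which is why the authors resort to bootstrap corrections rather than a closed-form factor.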
NASA Astrophysics Data System (ADS)
Liu, Wenwen; Tao, Tingting; Zeng, Hao
2016-10-01
Error separation is a key technology for online measurement of spindle radial error motion or artifact form error, such as roundness and cylindricity. Three time-domain three-point error separation methods are proposed, based on solving for the minimum-norm solution of a set of linear equations. Three laser displacement sensors collect a set of discrete measurements, from which a group of linear measurement equations is derived according to the criterion of prior separation of form (PSF), prior separation of spindle error motion (PSM), or synchronous separation of both form and spindle error motion (SSFM). The work discusses the correlations between the angles of the three sensors in the measuring system, the rank of the coefficient matrix in the measurement equations, and harmonic distortions in the separation results; it reveals the regularities of the first-order harmonic distortion and recommends the applicable situations for each method. Theoretical research and extensive simulations show that SSFM is the more precise method because of its lower distortion.
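The "minimum-norm solution" the abstract relies on is what standard linear-algebra routines return for rank-deficient or underdetermined systems. A toy sketch (the 2×3 matrix is illustrative, not an actual three-sensor measurement matrix):

```python
import numpy as np

# Minimum-norm solution of an underdetermined linear system A x = b,
# as used (schematically) to separate form error from spindle motion
# when the measurement equations are rank-deficient.  np.linalg.lstsq
# returns the solution of smallest 2-norm in that case; the
# Moore-Penrose pseudoinverse gives the identical answer.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])   # toy 2x3 system, rank 2
b = np.array([2.0, 3.0])

x, *_ = np.linalg.lstsq(A, b, rcond=None)
x_pinv = np.linalg.pinv(A) @ b    # same minimum-norm solution
```

Any other exact solution differs from `x` by a null-space vector of `A` and therefore has strictly larger norm, which is what makes the minimum-norm choice well defined.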
EEDF probe measurements: differentiation methods, noise, and error
NASA Astrophysics Data System (ADS)
Dias, F. M.; Popov, Tsv
2007-04-01
An instrumentation approach to electron energy distribution function measurements using Langmuir probes is presented. The noise and error limitations of the most common differentiation techniques are analysed; it is shown how instrumental accuracy can be improved or how acquisition time can be drastically decreased, and a pertinent performance comparison of the harmonic vs. numerical differentiation schemes is made. In addition, we stress the detrimental effects of pink noise and of coherent noise, and we show how they can be minimised.
Poulos, Natalie S.; Pasch, Keryn E.
2015-01-01
Few studies of the food environment have collected primary data, and even fewer have reported reliability of the tool used. This study focused on the development of an innovative electronic data collection tool used to document outdoor food and beverage (FB) advertising and establishments near 43 middle and high schools in the Outdoor MEDIA Study. Tool development used GIS based mapping, an electronic data collection form on handheld devices, and an easily adaptable interface to efficiently collect primary data within the food environment. For the reliability study, two teams of data collectors documented all FB advertising and establishments within one half-mile of six middle schools. Inter-rater reliability was calculated overall and by advertisement or establishment category using percent agreement. A total of 824 advertisements (n=233), establishment advertisements (n=499), and establishments (n=92) were documented (range=8–229 per school). Overall inter-rater reliability of the developed tool ranged from 69–89% for advertisements and establishments. Results suggest that the developed tool is highly reliable and effective for documenting the outdoor FB environment. PMID:26022774
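Percent agreement, the reliability statistic used in this study, is simply the share of items both teams coded identically. A minimal sketch with hypothetical category codes (not the study's data):

```python
def percent_agreement(rater_a, rater_b):
    """Inter-rater reliability as simple percent agreement: the
    percentage of items the two raters coded identically."""
    if len(rater_a) != len(rater_b):
        raise ValueError("raters must code the same number of items")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * matches / len(rater_a)

# Hypothetical category codes for ten advertisements.
team1 = ["food", "drink", "food", "food", "drink",
         "food", "drink", "food", "food", "food"]
team2 = ["food", "drink", "food", "drink", "drink",
         "food", "drink", "food", "food", "drink"]
pa = percent_agreement(team1, team2)   # 80.0
```

Note that percent agreement does not correct for chance agreement; statistics such as Cohen's kappa are often reported alongside it for that reason.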
Neuroretinal rim measurement error using PC-based stereo software.
Eikelboom, R H; Barry, C J; Jitskaia, L; Voon, A S; Yogesan, K
2000-06-01
The neuroretinal rims of a set of glaucoma patients were measured using digitized stereo photographs, to determine the reproducibility of computerized stereo measurements of the neuroretinal rim. Each rim was measured five times at 18 locations, with measurement error (ME) defined as the mean of the standard deviations of each set of measurements. The following ME were determined: (i) inter-sessional variability (n = 27 right and 24 left eyes, at t1 and t2); (ii) inter-assessor variability (n = 9, 2 assessors); and (iii) variability after colour adjustment algorithms were applied (n = 15). The results were as follows: (i) inter-sessional variability was 3.41 ± 1.08 for t1 and 3.22 ± 0.84 for t2; (ii) there was a significant difference between the two assessors, although the ME was still low; and (iii) there were no significant differences between the ME of unadjusted and adjusted images. With a measurement error of up to 11% of rim width, these results show that low-cost rim measurements can be made using PC-based software.
Detecting correlated errors in state-preparation-and-measurement tomography
NASA Astrophysics Data System (ADS)
Jackson, Christopher; van Enk, S. J.
2015-10-01
Whereas in standard quantum-state tomography one estimates an unknown state by performing various measurements with known devices, and whereas in detector tomography one estimates the positive-operator-valued-measurement elements of a measurement device by subjecting to it various known states, we consider here the case of SPAM (state preparation and measurement) tomography, where neither the states nor the measurement device are assumed known. For d-dimensional systems measured by d-outcome detectors, we find there are at most d²(d² − 1) "gauge" parameters that can never be determined by any such experiment, irrespective of the number of unknown states and unknown devices. For the case d = 2 we find gauge-invariant quantities that can be accessed directly experimentally and that can be used to detect and describe SPAM errors. In particular, we identify conditions whose violations detect the presence of correlations between SPAM errors. From the perspective of SPAM tomography, standard quantum-state tomography and detector tomography are protocols that fix the gauge parameters through the assumption that some set of fiducial measurements is known or that some set of fiducial states is known, respectively.
ERIC Educational Resources Information Center
Battauz, Michela; Bellio, Ruggero
2011-01-01
This paper proposes a structural analysis for generalized linear models when some explanatory variables are measured with error and the measurement error variance is a function of the true variables. The focus is on latent variables investigated on the basis of questionnaires and estimated using item response theory models. Latent variable…
Effects of vibration measurement error on remote sensing image restoration
NASA Astrophysics Data System (ADS)
Sun, Xuan; Wei, Zhang; Zhi, Xiyang
2016-10-01
Satellite vibrations lead to image motion blur. Since vibration isolators cannot fully suppress the influence of vibrations, image restoration methods are usually adopted, and the vibration characteristics of the imaging system are usually required as algorithm inputs for better restoration results, making the vibration measurement error strongly connected to the final outcome. If the measurement error surpasses a certain range, the restoration may not be implemented successfully. Therefore it is important to test the applicable scope of restoration algorithms and control the vibrations within that range; on the other hand, if the algorithm is robust, then the requirements for both the vibration isolator and the vibration detector can be lowered, reducing financial cost. In this paper, vibration-induced degradation is first analyzed, and on that basis the effects of measurement error on image restoration are further analyzed. The vibration-induced degradation is simulated using high-resolution satellite images, and the applicable working conditions of typical restoration algorithms are then tested with simulation experiments accordingly. The research carried out in this paper provides a valuable reference for future satellite designs that plan to implement restoration algorithms.
PROCESSING AND ANALYSIS OF THE MEASURED ALIGNMENT ERRORS FOR RHIC.
PILAT,F.; HEMMER,M.; PTITSIN,V.; TEPIKIAN,S.; TRBOJEVIC,D.
1999-03-29
All elements of the Relativistic Heavy Ion Collider (RHIC) have been installed in ideal survey locations, which are defined as the optimum locations of the fiducials with respect to the positions generated by the design. The alignment process included the presurvey of all elements which could affect the beams. During this procedure, special attention was paid to the precise determination of the quadrupole centers as well as the roll angles of the quadrupoles and dipoles. After installation the machine was surveyed, and the resulting as-built measured positions of the fiducials were stored and structured in the survey database. We describe how the alignment errors, inferred by comparison of ideal and as-built data, have been processed and analyzed by including them in the RHIC modeling software. The RHIC model, which also includes individual measured errors for all magnets in the machine and is automatically generated from databases, allows the study of the impact of the measured alignment errors on the machine.
Geometric error measurement of spiral bevel gears and data processing
NASA Astrophysics Data System (ADS)
Cao, Xue-mei; Cao, Qing-mei; Xu, Hao
2008-12-01
This paper calculates the theoretical tooth surface of a spiral bevel gear and inspects the actual tooth surface using a coordinate measuring machine, which provides an objective and quantitative method for inspecting the tooth surfaces of spiral bevel gears. For many reasons there are deviations between the actual tooth surface and the theoretical tooth surface. Based on differential geometry and space engagement theory, this paper deduces the analytical representation of the theoretical tooth surface through the process of gear generation. After comparing the coordinates of the actual gear tooth surface and the theoretical tooth surface, a high-precision analysis graphic of tooth surface errors can be obtained by processing the measured data. A pair of aviation spiral bevel gears manufactured on a Phoenix 800PG grinding machine were inspected with a Mahr measuring instrument. The comparison of gear surface errors, computed respectively by the method of this paper and by Mahr's software, shows consistent error distributions. The experiment verifies the validity and feasibility of the method presented in this paper.
Rater agreement of visual lameness assessment in horses during lungeing
Hammarberg, M.; Egenvall, A.; Pfau, T.
2015-01-01
Summary Reasons for performing study Lungeing is an important part of lameness examinations as the circular path may accentuate low‐grade lameness. Movement asymmetries related to the circular path, to compensatory movements and to pain make the lameness evaluation complex. Scientific studies have shown high inter‐rater variation when assessing lameness during straight line movement. Objectives The aim was to estimate inter‐ and intra‐rater agreement of equine veterinarians evaluating lameness from videos of sound and lame horses during lungeing and to investigate the influence of veterinarians’ experience and the objective degree of movement asymmetry on rater agreement. Study design Cross‐sectional observational study. Methods Video recordings and quantitative gait analysis with inertial sensors were performed in 23 riding horses of various breeds. The horses were examined at trot on a straight line and during lungeing on soft or hard surfaces in both directions. One video sequence was recorded per condition and the horses were classified as forelimb lame, hindlimb lame or sound from objective straight line symmetry measurements. Equine veterinarians (n = 86), including 43 with >5 years of orthopaedic experience, participated in a web‐based survey and were asked to identify the lamest limb on 60 videos, including 10 repeats. The agreements between (inter‐rater) and within (intra‐rater) veterinarians were analysed with κ statistics (Fleiss, Cohen). Results Inter‐rater agreement κ was 0.31 (0.38/0.25 for experienced/less experienced) and higher for forelimb (0.33) than for hindlimb lameness (0.11) or soundness (0.08) evaluation. Median intra‐rater agreement κ was 0.57. Conclusions Inter‐rater agreement was poor for less experienced raters, and for all raters when evaluating hindlimb lameness. Since identification of the lame limb/limbs is a prerequisite for successful diagnosis, treatment and recovery, the high inter‐rater variation
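Cohen's κ, one of the agreement statistics named above, corrects raw agreement for the agreement expected by chance. A minimal sketch with invented ratings (the limb classes F/H/S echo the forelimb-lame/hindlimb-lame/sound categories; the data are not from the study):

```python
import numpy as np

def cohens_kappa(r1, r2, categories):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = np.mean(r1 == r2)                                   # observed agreement
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)
    return (po - pe) / (1.0 - pe)

# Two hypothetical raters classifying 10 horses:
# F = forelimb lame, H = hindlimb lame, S = sound
rater_a = list("FFHHSSFHSF")
rater_b = list("FFHSSHFHSF")
kappa = cohens_kappa(rater_a, rater_b, ["F", "H", "S"])      # ~0.70
```

Fleiss' κ generalizes the same chance-correction idea to the many-rater case used for the 86 veterinarians in the survey.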
Motion measurement errors and autofocus in bistatic SAR.
Rigling, Brian D; Moses, Randolph L
2006-04-01
This paper discusses the effect of motion measurement errors (MMEs) on measured bistatic synthetic aperture radar (SAR) phase history data that has been motion compensated to the scene origin. We characterize the effect of low-frequency MMEs on bistatic SAR images, and, based on this characterization, we derive limits on the allowable MMEs to be used as system specifications. Finally, we demonstrate that proper orientation of a bistatic SAR image during the image formation process allows application of monostatic SAR autofocus algorithms in postprocessing to mitigate image defocus.
Fairus, Fariza Zainudin; Joseph, Leonard Henry; Omar, Baharudin; Ahmad, Johan; Sulaiman, Riza
2016-01-01
Background The understanding of vertical ground reaction force (VGRF) during walking and half-squatting is necessary and commonly utilised during the rehabilitation period. The purpose of this study was to establish the measurement reproducibility of VGRF, reporting the minimal detectable change (MDC) during walking and half-squatting activity among healthy male adults. Methods Fourteen male adults with an average age of 24.88 (5.24) years were enrolled in this study. The VGRF was assessed using force plates embedded in a customised walking platform. Participants were required to carry out three trials of gait and half-squat. Each participant completed the two measurements within a day, approximately four hours apart. Results Measurements of VGRF between sessions showed excellent reliability for walking (ICC Left = 0.88, ICC Right = 0.89). High reliability of VGRF was also noted during the half-squat activity (ICC Left = 0.95, ICC Right = 0.90). The standard errors of measurement (SEM) of VGRF were less than 8.35 Nm/kg for the gait task and 4.67 Nm/kg for the half-squat task. Conclusion The equipment set-up and measurement procedure used to quantify VGRF during walking and half-squatting among healthy males displayed excellent reliability. Researchers should consider using this method to measure VGRF during functional performance assessment. PMID:27547111
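The SEM and MDC reported here follow the standard formulas SEM = SD·√(1 − ICC) and MDC95 = 1.96·√2·SEM. A small sketch with an invented between-subject SD (the ICC matches the reported level; the SD does not come from the study):

```python
import math

def sem_from_icc(sd, icc):
    """Standard error of measurement: SEM = SD * sqrt(1 - ICC)."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sem):
    """Minimal detectable change (95% confidence): 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2.0) * sem

# Hypothetical between-subject SD of 20 units at the reported ICC of 0.95
sem = sem_from_icc(sd=20.0, icc=0.95)   # ~4.47
mdc = mdc95(sem)                        # ~12.40
```

An observed change smaller than the MDC cannot be distinguished from measurement noise, which is why the study reports both quantities.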
Error reduction techniques for measuring long synchrotron mirrors
Irick, S.
1998-07-01
Many instruments and techniques are used for measuring long mirror surfaces. A Fizeau interferometer may be used to measure mirrors much longer than the interferometer aperture size by using grazing incidence at the mirror surface and analyzing the light reflected from a flat end mirror. Advantages of this technique are data acquisition speed and use of a common instrument. Disadvantages are reduced sampling interval, uncertainty of tangential position, and sagittal/tangential aspect ratio other than unity. Also, deep aspheric surfaces cannot be measured on a Fizeau interferometer without a specially made fringe nulling holographic plate. Other scanning instruments have been developed for measuring height, slope, or curvature profiles of the surface, but lack accuracy for the very long scans required for X-ray synchrotron mirrors. The Long Trace Profiler (LTP) was developed specifically for long X-ray mirror measurement, and still outperforms other instruments, especially for aspheres. Thus, this paper focuses on error reduction techniques for the LTP.
More systematic errors in the measurement of power spectral density
NASA Astrophysics Data System (ADS)
Mack, Chris A.
2015-07-01
Power spectral density (PSD) analysis is an important part of understanding line-edge and linewidth roughness in lithography. But uncertainty in the measured PSD, both random and systematic, complicates interpretation. It is essential to understand and quantify the sources of the measured PSD's uncertainty and to develop mitigation strategies. Both analytical derivations and simulations of rough features are used to evaluate data window functions for reducing spectral leakage and to understand the impact of data detrending on biases in PSD, autocovariance function (ACF), and height-to-height covariance function measurement. A generalized Welch window was found to be best among the windows tested. Linear detrending for line-edge roughness measurement results in underestimation of the low-frequency PSD and errors in the ACF and height-to-height covariance function. Measuring multiple edges per scanning electron microscope image reduces this detrending bias.
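A windowed one-sided PSD estimate of the kind discussed can be sketched as follows; a Hann window stands in for the generalized Welch window the paper recommends, and the synthetic edge trace is white noise rather than measured roughness:

```python
import numpy as np

rng = np.random.default_rng(0)
N, dx = 512, 1.0                  # number of edge points, sampling interval (nm)
edge = rng.normal(0.0, 1.5, N)    # synthetic line-edge roughness trace

window = np.hanning(N)            # stand-in for the generalized Welch window
freqs = np.fft.rfftfreq(N, dx)    # spatial frequency axis (for plotting)
coeffs = np.fft.rfft((edge - edge.mean()) * window)
# One-sided PSD, normalized for the window's power loss
psd = (np.abs(coeffs) ** 2) * dx / np.sum(window ** 2)

# Sanity check: the PSD should integrate back to roughly the trace variance
sigma2_est = 2.0 * np.sum(psd) / (N * dx)
```

Subtracting only the mean (rather than a fitted line) avoids the linear-detrending bias in the low-frequency PSD that the paper warns about.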
Scattering error corrections for in situ absorption and attenuation measurements.
McKee, David; Piskozub, Jacek; Brown, Ian
2008-11-24
Monte Carlo simulations are used to establish a weighting function that describes the collection of angular scattering for the WETLabs AC-9 reflecting tube absorption meter. The equivalent weighting function for the AC-9 attenuation sensor is found to be well approximated by a binary step function with photons scattered between zero and the collection half-width angle contributing to the scattering error and photons scattered at larger angles making zero contribution. A new scattering error correction procedure is developed that accounts for scattering collection artifacts in both absorption and attenuation measurements. The new correction method does not assume zero absorption in the near infrared (NIR), does not assume a wavelength independent scattering phase function, but does require simultaneous measurements of spectrally matched particulate backscattering. The new method is based on an iterative approach that assumes that the scattering phase function can be adequately modeled from estimates of particulate backscattering ratio and Fournier-Forand phase functions. It is applied to sets of in situ data representative of clear ocean water, moderately turbid coastal water and highly turbid coastal water. Initial results suggest significantly higher levels of attenuation and absorption than those obtained using previously published scattering error correction procedures. Scattering signals from each correction procedure have similar magnitudes but significant differences in spectral distribution are observed.
2011-01-01
Background Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Methods Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Results Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For CO modelled error amount, a range of error types were simulated and effects on risk ratio bias and significance were observed. Conclusions For multiplicative error, both the amount and type of measurement error impact health effect estimates in air
Kohler, Friedbert; Connolly, Carol; Sakaria, Aroha; Stendara, Kimberly; Buhagiar, Mark; Mojaddidi, Mohammad
2013-09-01
The categories of the International Classification of Functioning, Disability and Health (ICF) could potentially be used as components of outcome measures. Literature demonstrating the psychometric properties of ICF categories is limited. This study aimed to determine the agreement and reliability of ICF activities of daily living category scores and compare these to the agreement and reliability of the Functional Independence Measure (FIM) item scores. Two investigators independently reviewed the clinical notes of 100 patients to score the ICF activities of daily living categories, using ICF qualifiers with additional scoring guidelines. The percentage agreement, inter-rater and intra-rater reliability were compared with the matched FIM items scored by a separate set of two investigators using the same methodology. The Kappa statistic was calculated using MedCalc. ICF inter-rater reliability, as indicated by Kappa values ranging from 0.42 to 0.81, was moderate or better for the eleven self-care and mobility categories. The language ICF categories and problem solving generally showed fair agreement, with Kappa values ranging from 0.21 for receiving verbal messages to 0.44 for basic social interactions. Absolute agreement was above 72% for all categories. Reliability and agreement of the FIM items were generally lower than for the corresponding ICF categories. The inter-rater and intra-rater reliability and agreement of the ICF activities of daily living categories were comparable to or better than those of the corresponding FIM items. The results of this study provide an indication that the ICF categories could be used as components of rehabilitation outcome measures.
Improving optical bench radius measurements using stage error motion data.
Schmitz, Tony L; Gardner, Neil; Vaughn, Matthew; Medicus, Kate; Davies, Angela
2008-12-20
We describe the application of a vector-based radius approach to optical bench radius measurements in the presence of imperfect stage motions. In this approach, the radius is defined using a vector equation and homogeneous transformation matrix formalism. This is in contrast to the typical technique, where the displacement between the confocal and cat's eye null positions alone is used to determine the test optic radius. An important aspect of the vector-based radius definition is the intrinsic correction for measurement biases, such as straightness errors in the stage motion and cosine misalignment between the stage and displacement gauge axis, which lead to an artificially small radius value if the traditional approach is employed. Measurement techniques and results are provided for the stage error motions, which are then combined with the setup geometry through the analysis to determine the radius of curvature for a spherical artifact. Comparisons are shown between the new vector-based radius calculation, traditional radius computation, and a low uncertainty mechanical measurement. Additionally, the measurement uncertainty for the vector-based approach is determined using Monte Carlo simulation and compared to experimental results.
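The cosine-misalignment bias mentioned above is easy to quantify: the gauge reads the projection of the stage travel onto its own axis, so the apparent radius is R·cos θ, which is artificially small. A sketch with invented numbers:

```python
import math

def apparent_radius(true_radius_mm, misalignment_deg):
    """Cosine error: the gauge reads the stage travel projected onto its
    own axis, so the measured radius shrinks by cos(theta)."""
    return true_radius_mm * math.cos(math.radians(misalignment_deg))

r_true = 100.0                            # hypothetical radius, mm
r_app = apparent_radius(r_true, 0.5)      # 0.5 degree misalignment
bias_um = (r_true - r_app) * 1000.0       # bias in micrometres (~3.8 um)
```

Even a half-degree misalignment costs several micrometres on a 100 mm radius, which is why the vector-based definition folds such stage-error terms into the measurement model instead of ignoring them.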
Quantifying soil CO2 respiration measurement error across instruments
NASA Astrophysics Data System (ADS)
Creelman, C. A.; Nickerson, N. R.; Risk, D. A.
2010-12-01
A variety of instrumental methodologies have been developed in an attempt to accurately measure the rate of soil CO2 respiration. Among the most commonly used are the static and dynamic chamber systems. The degree to which these methods misread or perturb the soil CO2 signal, however, is poorly understood. One source of error in particular is the introduction of lateral diffusion due to the disturbance of the steady-state CO2 concentrations. The addition of soil collars to the chamber system attempts to address this perturbation, but may induce additional errors from the increased physical disturbance. Using a numerical 3D soil-atmosphere diffusion model, we are undertaking a comprehensive comparative study of existing static and dynamic chambers, as well as a solid-state CTFD probe. Specifically, we are examining the 3D diffusion errors associated with each method and opportunities for correction. In this study, the impact of collar length, chamber geometry, chamber mixing and diffusion parameters on the magnitude of lateral diffusion around the instrument are quantified in order to provide insight into obtaining more accurate soil respiration estimates. Results suggest that while each method can approximate the true flux rate under idealized conditions, the associated errors can be of a high magnitude and may vary substantially in their sensitivity to these parameters. In some cases, factors such as the collar length and chamber exchange rate used are coupled in their effect on accuracy. Due to the widespread use of these instruments, it is critical that the nature of their biases and inaccuracies be understood in order to inform future development, ensure the accuracy of current measurements and to facilitate inter-comparison between existing datasets.
Kantonen, Samuel A; Henriksen, Niel M; Gilson, Michael K
2017-02-01
Isothermal titration calorimetry (ITC) is uniquely useful for characterizing binding thermodynamics, because it straightforwardly provides both the binding enthalpy and free energy. However, the precision of the results depends on the experimental setup and how thermodynamic results are obtained from the raw data. Experiments and Monte Carlo analysis are used to study how uncertainties in injection heat and concentration propagate to binding enthalpies in various scenarios. We identify regimes in which it is preferable to fix the stoichiometry parameter, N, and evaluate the reliability of uncertainties provided by the least squares method. The noise in the injection heat is mainly proportional in character, with ~1% and ~3% uncertainty at 27 °C and 65 °C, respectively; concentration errors are ~1%. Simulations of experiments based on these uncertainties delineate how experimental design and curve fitting methods influence the uncertainty in the final results. In most cases, experimental uncertainty is minimized by using more injections and by fixing N at its known value. With appropriate technique, the uncertainty in measured binding enthalpies can be kept below ~2% under many conditions, including low C values. We quantify uncertainties in ITC data due to heat and concentration error, and identify practices to minimize these uncertainties. The resulting guidelines are important when ITC data are used quantitatively, such as to test computer simulations of binding. Reproducibility and further study are supported by free distribution of the new software developed here. Copyright © 2016 Elsevier B.V. All rights reserved.
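The propagation of proportional heat noise and concentration error can be illustrated with a toy Monte Carlo; a naive per-injection-mean estimator stands in for the paper's full curve fit, and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
dH_true = -40.0          # hypothetical binding enthalpy, kJ/mol
n_inj, n_rep = 25, 2000  # injections per experiment, Monte Carlo repeats

heats = np.full(n_inj, dH_true)   # idealized per-mole injection heats
estimates = []
for _ in range(n_rep):
    conc = 1.0 + rng.normal(0.0, 0.01)                     # ~1% concentration error
    noisy = heats * (1.0 + rng.normal(0.0, 0.01, n_inj))   # ~1% proportional heat noise
    estimates.append(noisy.mean() / conc)                  # naive enthalpy estimate
spread_percent = 100.0 * np.std(estimates) / abs(dH_true)
```

Averaging over 25 injections shrinks the heat-noise contribution by a factor of five, so the systematic ~1% concentration error dominates the spread, echoing the paper's point that more injections help but concentration accuracy sets the floor.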
Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements
NASA Technical Reports Server (NTRS)
Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.
2014-01-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde-ozonesonde launches from the Southern Hemisphere subtropics to Northern mid-latitudes are considered, with launches between 2005 - 2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI-Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are greater than ±0.6 hPa in the free troposphere, with nearly a third greater than ±1.0 hPa at 26 km, where the 1.0 hPa error represents about 5 percent of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96 percent of launches lie within ±5 percent O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (about 30 km) can approach greater than ±10 percent (more than 25 percent of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when
Propagation of radiosonde pressure sensor errors to ozonesonde measurements
NASA Astrophysics Data System (ADS)
Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.
2014-01-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.6 hPa in the free troposphere, with nearly a third > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~ 5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~ 30 km), can approach greater than ±10% (> 25% of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when the O3 profile is
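The altitude dependence of the mixing ratio error follows directly from O3MR = pO3/P: a fixed pressure offset is negligible at 700 hPa but approaches 10% near 10 hPa. A sketch of that arithmetic (the 1.0 hPa offset is chosen to match the ±1.0 hPa scale discussed):

```python
def o3mr_error_percent(p_hpa, offset_hpa):
    """Percent error in ozone mixing ratio O3MR = pO3 / P when the
    radiosonde reports P + offset instead of P (pO3 is unaffected)."""
    return (p_hpa / (p_hpa + offset_hpa) - 1.0) * 100.0

err_mid_trop = o3mr_error_percent(700.0, 1.0)   # negligible, ~ -0.14%
err_30km = o3mr_error_percent(10.0, 1.0)        # approaches -9%
```

Because the same relative error enters every level above ~30 km, it also accumulates in the integrated column, consistent with the several-DU column differences reported.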
Inter-rater reliability of select physical examination procedures in patients with neck pain.
Hanney, William J; George, Steven Z; Kolber, Morey J; Young, Ian; Salamh, Paul A; Cleland, Joshua A
2014-07-01
This study evaluated the inter-rater reliability of select examination procedures in patients with neck pain (NP) conducted over a 24- to 48-h period. Twenty-two patients with mechanical NP participated in a standardized examination. One examiner performed standardized examination procedures and a second blinded examiner repeated the procedures 24-48 h later with no treatment administered between examinations. Inter-rater reliability was calculated with the Cohen Kappa and weighted Kappa for ordinal data while continuous level data were calculated using an intraclass correlation coefficient model 2,1 (ICC2,1). Coefficients for categorical variables ranged from poor to moderate agreement (-0.22 to 0.70 Kappa) and coefficients for continuous data ranged from slight to moderate (ICC2,1 0.28-0.74). The standard error of measurement for cervical range of motion ranged from 5.3° to 9.9° while the minimal detectable change ranged from 12.5° to 23.1°. This study is the first to report inter-rater reliability values for select components of the cervical examination in those patients with NP performed 24-48 h after the initial examination. There was considerably less reliability when compared to previous studies, thus clinicians should consider how the passage of time may influence variability in examination findings over a 24- to 48-h period.
Data Reconciliation and Gross Error Detection: A Filtered Measurement Test
Himour, Y.
2008-06-12
Measured process data commonly contain inaccuracies because the measurements are obtained using imperfect instruments. In addition to random errors, one can expect systematic bias caused by miscalibrated instruments, as well as outliers caused by process disturbances such as sudden power fluctuations. Data reconciliation is the adjustment of a set of process data, based on a model of the process, so that the derived estimates conform to natural laws. In this paper we explore a predictor-corrector filter based on data reconciliation; a modified version of the measurement test is then combined with the studied filter to detect probable outliers that can affect process measurements. The strategy presented is tested using dynamic simulation of an inverted pendulum.
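Linear weighted least-squares data reconciliation of the kind described has a closed form: for a constraint Ax = 0 and measurement covariance V, the reconciled estimate is x̂ = x − VAᵀ(AVAᵀ)⁻¹Ax. A minimal sketch for a single mass balance, with invented flows and variances:

```python
import numpy as np

# Mass balance at a mixing node: f1 + f2 - f3 = 0
A = np.array([[1.0, 1.0, -1.0]])
x = np.array([10.2, 5.1, 14.7])          # raw flow measurements (units arbitrary)
V = np.diag([0.1**2, 0.1**2, 0.2**2])    # measurement variances

# Closed-form weighted least-squares reconciliation enforcing A @ x_hat = 0
r = A @ x                                # constraint residual of the raw data
x_hat = x - V @ A.T @ np.linalg.solve(A @ V @ A.T, r)
```

The least-precise measurement (the 0.2-variance outlet flow) absorbs the largest share of the 0.6-unit imbalance, which is exactly the variance weighting a measurement test then exploits to flag gross errors.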
How Do Raters Judge Spoken Vocabulary?
ERIC Educational Resources Information Center
Li, Hui
2016-01-01
The aim of the study was to investigate how raters come to their decisions when judging spoken vocabulary. Segmental rating was introduced to quantify raters' decision-making process. It is hoped that this simulated study brings fresh insight to future methodological considerations with spoken data. Twenty trainee raters assessed five Chinese…
ERIC Educational Resources Information Center
Leckie, George; Baird, Jo-Anne
2011-01-01
This study examined rater effects on essay scoring in an operational monitoring system from England's 2008 national curriculum English writing test for 14-year-olds. We fitted two multilevel models and analyzed: (1) drift in rater severity effects over time; (2) rater central tendency effects; and (3) differences in rater severity and central…
Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario
2016-01-01
The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty measurement results difficult. Therefore, error compensation is not standardized, conversely to other simpler instruments. Detailed coordinate error compensation models are generally based on CMM as a rigid-body and it requires a detailed mapping of the CMM’s behavior. In this paper a new model type of error compensation is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The non-explained variability by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests are presented in Part II, where the experimental endorsement of the model is included. PMID:27690052
Validation and Error Characterization for the Global Precipitation Measurement
NASA Technical Reports Server (NTRS)
Bidwell, Steven W.; Adams, W. J.; Everett, D. F.; Smith, E. A.; Yuter, S. E.
2003-01-01
The Global Precipitation Measurement (GPM) is an international effort to increase scientific knowledge on the global water cycle with specific goals of improving the understanding and the predictions of climate, weather, and hydrology. These goals will be achieved through several satellites specifically dedicated to GPM along with the integration of numerous meteorological satellite data streams from international and domestic partners. The GPM effort is led by the National Aeronautics and Space Administration (NASA) of the United States and the National Space Development Agency (NASDA) of Japan. In addition to the spaceborne assets, international and domestic partners will provide ground-based resources for validating the satellite observations and retrievals. This paper describes the validation effort of Global Precipitation Measurement to provide quantitative estimates on the errors of the GPM satellite retrievals. The GPM validation approach will build upon the research experience of the Tropical Rainfall Measuring Mission (TRMM) retrieval comparisons and its validation program. The GPM ground validation program will employ instrumentation, physical infrastructure, and research capabilities at Supersites located in important meteorological regimes of the globe. NASA will provide two Supersites, one in a tropical oceanic and the other in a mid-latitude continental regime. GPM international partners will provide Supersites for other important regimes. Those objectives or regimes not addressed by Supersites will be covered through focused field experiments. This paper describes the specific errors that GPM ground validation will address, quantify, and relate to the GPM satellite physical retrievals. GPM will attempt to identify the source of errors within retrievals including those of instrument calibration, retrieval physical assumptions, and algorithm applicability. With the identification of error sources, improvements will be made to the respective calibration
Validation and Error Characterization for the Global Precipitation Measurement
NASA Technical Reports Server (NTRS)
Bidwell, Steven W.; Adams, W. J.; Everett, D. F.; Smith, E. A.; Yuter, S. E.
2003-01-01
The Global Precipitation Measurement (GPM) is an international effort to increase scientific knowledge on the global water cycle with specific goals of improving the understanding and the predictions of climate, weather, and hydrology. These goals will be achieved through several satellites specifically dedicated to GPM along with the integration of numerous meteorological satellite data streams from international and domestic partners. The GPM effort is led by the National Aeronautics and Space Administration (NASA) of the United States and the National Space Development Agency (NASDA) of Japan. In addition to the spaceborne assets, international and domestic partners will provide ground-based resources for validating the satellite observations and retrievals. This paper describes the validation effort of Global Precipitation Measurement to provide quantitative estimates on the errors of the GPM satellite retrievals. The GPM validation approach will build upon the research experience of the Tropical Rainfall Measuring Mission (TRMM) retrieval comparisons and its validation program. The GPM ground validation program will employ instrumentation, physical infrastructure, and research capabilities at Supersites located in important meteorological regimes of the globe. NASA will provide two Supersites, one in a tropical oceanic and the other in a mid-latitude continental regime. GPM international partners will provide Supersites for other important regimes. Those objectives or regimes not addressed by Supersites will be covered through focused field experiments. This paper describes the specific errors that GPM ground validation will address, quantify, and relate to the GPM satellite physical retrievals. GPM will attempt to identify the source of errors within retrievals including those of instrument calibration, retrieval physical assumptions, and algorithm applicability. With the identification of error sources, improvements will be made to the respective calibration
Patient motion tracking in the presence of measurement errors.
Haidegger, Tamás; Benyó, Zoltán; Kazanzides, Peter
2009-01-01
The primary aim of computer-integrated surgical systems is to provide physicians with superior surgical tools for better patient outcomes. Robotic technology is capable of both minimally invasive surgery and microsurgery, offering remarkable advantages for the surgeon and the patient. Current systems allow for sub-millimeter intraoperative spatial positioning; however, certain limitations remain. Measurement noise and unintended changes in the operating room environment can result in major errors, and positioning errors are a significant danger to patients in procedures involving robots and other automated devices. We have developed a new robotic system at the Johns Hopkins University to support cranial drilling in neurosurgery procedures. The robot provides advanced visualization and safety features. The generic algorithm described in this paper allows for automated compensation of patient motion through optical tracking and Kalman filtering. When applied to the neurosurgery setup, preliminary results show that it is possible to identify patient motion within 700 ms and apply the appropriate compensation with an average positioning error of 1.24 mm after 2 s of setup time.
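As an illustrative aside (not the authors' implementation), motion compensation of this kind typically rests on a Kalman filter over noisy tracker samples. A minimal one-dimensional constant-velocity sketch, with all parameter values (sampling interval, noise levels, drift) assumed for illustration:

```python
import numpy as np

def kalman_track(measurements, dt=0.05, q=1e-3, r=0.5 ** 2):
    """Track [position, velocity] from noisy position measurements (mm)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity transition
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2],
                      [dt ** 2 / 2, dt]])    # process noise
    R = np.array([[r]])                      # measurement noise variance
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    estimates = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new tracker sample
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0, 0])
    return np.array(estimates)

rng = np.random.default_rng(0)
true_pos = np.linspace(0.0, 2.0, 40)            # patient drifts 2 mm
noisy = true_pos + rng.normal(0.0, 0.5, 40)     # tracker noise, sigma = 0.5 mm
smoothed = kalman_track(noisy)
```

The velocity state lets the filter follow a slow drift without steady-state lag, which is why the smoothed track beats the raw samples even with a small process-noise setting.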
Lidar Uncertainty Measurement Experiment (LUMEX) - Understanding Sampling Errors
NASA Astrophysics Data System (ADS)
Choukulkar, A.; Brewer, W. A.; Banta, R. M.; Hardesty, M.; Pichugina, Y.; Senff, Christoph; Sandberg, S.; Weickmann, A.; Carroll, B.; Delgado, R.; Muschinski, A.
2016-06-01
Coherent Doppler LIDAR (Light Detection and Ranging) has been widely used to provide measurements of several boundary layer parameters such as profiles of wind speed, wind direction, vertical velocity statistics, mixing layer heights and turbulent kinetic energy (TKE). An important aspect of providing this wide range of meteorological data is to properly characterize the uncertainty associated with these measurements. With the above intent in mind, the Lidar Uncertainty Measurement Experiment (LUMEX) was conducted at Erie, Colorado during the period June 23rd to July 13th, 2014. The major goals of this experiment were the following:
This experiment brought together 5 Doppler lidars, both commercial and research grade, for a period of three weeks for a comprehensive intercomparison study. The Doppler lidars were deployed at the Boulder Atmospheric Observatory (BAO) site in Erie, site of a 300 m meteorological tower. This tower was instrumented with six sonic anemometers at levels from 50 m to 300 m with 50 m vertical spacing. A brief overview of the experiment outline and deployment will be presented. Results from the sampling error analysis and its implications on scanning strategy will be discussed.
Propagation of radiosonde pressure sensor errors to ozonesonde measurements
NASA Astrophysics Data System (ADS)
Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.
2013-08-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this, a total of 501 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2006 and 2013 from both historical and campaign-based intensive stations. Three types of electrochemical concentration cell (ECC) ozonesondes from two manufacturers (Science Pump Corporation, SPC; ENSCI/Droplet Measurement Technologies, DMT) and five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80 and RS92) are analyzed to determine the magnitude of the pressure offset and the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.7 hPa in the free troposphere, with nearly a quarter > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (98% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors in the 7-15 hPa layer (29-32 km), a region critical for detection of long-term O3 trends, can exceed ±10% (>25% of launches that reach 30 km exceed this threshold). Comparisons of total column O3 yield average differences of +1.6 DU (-1.1 to +4.9 DU, 10th to 90th percentiles) when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of +0.1 DU (-1.1 to +2.2 DU) when the O3 profile is integrated to 10 hPa with subsequent addition of the O3 climatology above 10 hPa. The RS92 radiosondes are clearly distinguishable
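The leverage of a pressure offset on the mixing ratio follows from O3MR = p_O3 / P: a fixed bias dP in the reported ambient pressure P matters little at depth but grows roughly as 1/P aloft. A minimal sketch with illustrative numbers (consistent with the ~5-10% errors quoted above):

```python
# First-order effect of a radiosonde pressure bias on the derived ozone
# mixing ratio: a bias dP scales the retrieved O3MR by P / (P + dP).

def o3mr_relative_error(pressure_hpa, offset_hpa):
    """Fractional O3MR error caused by a pressure bias (first order)."""
    return pressure_hpa / (pressure_hpa + offset_hpa) - 1.0

# A +1 hPa offset is negligible at 500 hPa but ~9% at the 10 hPa level:
err_low = o3mr_relative_error(500.0, 1.0)    # ~ -0.2%
err_high = o3mr_relative_error(10.0, 1.0)    # ~ -9.1%
```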
Simulation of error in optical radar range measurements.
Der, S; Redman, B; Chellappa, R
1997-09-20
We describe a computer simulation of atmospheric and target effects on the accuracy of range measurements using pulsed laser radars with p-i-n or avalanche photodiodes for direct detection. The computer simulation produces simulated images as a function of a wide variety of atmospheric, target, and sensor parameters for laser radars with range accuracies smaller than the pulse width. The simulation allows arbitrary target geometries and simulates speckle, turbulence, and near-field and far-field effects. We compare simulation results with actual range error data collected in field tests.
Examples of Detecting Measurement Errors with the QCRad VAP
Shi, Yan; Long, Charles N.
2005-07-30
The QCRad VAP is being developed to assess the data quality for the ARM radiation data collected at the Extended and ARCS facilities. In this study, we processed one year of radiation data, chosen at random, for each of the twenty SGP Extended Facilities to aid in determining the user configurable limits for the SGP sites. By examining yearly summary plots of the radiation data and the various test limits, we can show that the QCRad VAP is effective in identifying and detecting many different types of measurement errors. Examples of the analysis results will be shown in this poster presentation.
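A hypothetical sketch of the kind of user-configurable limit test such a QC routine applies: flag samples that fall outside configured physical bounds. Names and limit values are illustrative, not the actual QCRad configuration:

```python
def limit_flags(values, lower, upper):
    """Per-sample QC flags: 0 = pass, 1 = below lower limit, 2 = above upper limit."""
    flags = []
    for v in values:
        if v < lower:
            flags.append(1)
        elif v > upper:
            flags.append(2)
        else:
            flags.append(0)
    return flags

# e.g. global shortwave irradiance in W/m^2 against configurable limits
# (a small negative offset is tolerated for nighttime sensor drift)
flags = limit_flags([-5.0, 450.0, 1500.0], lower=-4.0, upper=1360.0)
```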
Examiner error in curriculum-based measurement of oral reading.
Cummings, Kelli D; Biancarosa, Gina; Schaper, Andrew; Reed, Deborah K
2014-08-01
Although curriculum-based measures of oral reading (CBM-R) have strong technical adequacy, there is still reason to believe that student performance may be influenced by factors of the testing situation, such as errors examiners make in administering and scoring the test. This study examined the construct-irrelevant variance introduced by examiners using a cross-classified multilevel model. We sought to determine the extent of variance in student CBM-R scores attributable to examiners and, if present, the extent to which it was moderated by students' grade level and English learner (EL) status. Fit indices indicated that a cross-classified random effects model (CCREM) best fit the data, with measures nested within students, students nested within schools, and examiners crossing schools. Intraclass correlations of the CCREM revealed that roughly 16% of the variance in student CBM-R scores was attributable to examiners. The remaining variance was associated with the measurement level (3.59%), with students (75.23%), and with schools (5.21%). Results were moderated by grade level but not by EL status. The discussion addresses the implications of this error for low-stakes and high-stakes decisions about students, teacher evaluation systems, and hypothesis testing in reading intervention research.
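The variance decomposition reported above maps directly to intraclass correlations: each level's share is its variance component divided by the total. A sketch using the percentages quoted in the abstract:

```python
# Variance components (as percentages of total variance, per the abstract)
components = {
    "examiner": 16.0,      # rounded examiner share quoted in the abstract
    "measurement": 3.59,   # occasion-level residual
    "student": 75.23,
    "school": 5.21,
}

total = sum(components.values())
# ICC for each level: its variance component over the total variance
icc = {level: var / total for level, var in components.items()}
```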
A Bayesian Measurement Error Model for Misaligned Radiographic Data
Lennox, Kristin P.; Glascoe, Lee G.
2013-09-06
An understanding of the inherent variability in micro-computed tomography (micro-CT) data is essential to tasks such as statistical process control and the validation of radiographic simulation tools. The data present unique challenges to variability analysis due to the relatively low resolution of radiographs, and also due to minor variations from run to run which can result in misalignment or magnification changes between repeated measurements of a sample. Positioning changes artificially inflate the variability of the data in ways that mask true physical phenomena. We present a novel Bayesian nonparametric regression model that incorporates both additive and multiplicative measurement error in addition to heteroscedasticity to address this problem. We also use this model to assess the effects of sample thickness and sample position on measurement variability for an aluminum specimen. Supplementary materials for this article are available online.
Measurements of Aperture Averaging on Bit-Error-Rate
NASA Technical Reports Server (NTRS)
Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.;
2005-01-01
We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.
Optimal condition for measurement observable via error-propagation
NASA Astrophysics Data System (ADS)
Zhong, Wei; Lu, Xiao Ming; Jing, Xiao Xing; Wang, Xiaoguang
2014-09-01
Propagation of error is a widely used estimation tool in experiments where the estimation precision of the parameter depends on the fluctuation of the physical observable. Thus the observable that is chosen will greatly affect the estimation sensitivity. Here we study the optimal observable for the ultimate sensitivity bounded by the quantum Cramér-Rao theorem in parameter estimation. By invoking the Schrödinger-Robertson uncertainty relation, we derive the necessary and sufficient condition for the optimal observables saturating the ultimate sensitivity for single-parameter estimation. By applying this condition to Greenberger-Horne-Zeilinger states, we obtain the general expression of the optimal observable for separable measurements to achieve the Heisenberg-limit precision and show that it is closely related to the parity measurement. However, Jose et al (2013 Phys. Rev. A 87 022330) have claimed that the Heisenberg limit may not be obtained via separable measurements. We show this claim is incorrect.
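For reference, the error-propagation estimator this abstract builds on is the standard one: the parameter uncertainty is the observable's fluctuation divided by the sensitivity of its mean, bounded below by the quantum Cramér-Rao limit (with F_Q the quantum Fisher information):

```latex
% Error propagation for a parameter \theta inferred from an observable A:
\delta\theta \;=\; \frac{\Delta A}{\left|\partial_\theta \langle A \rangle_\theta\right|},
\qquad (\Delta A)^2 = \langle A^2 \rangle - \langle A \rangle^2 ,
% bounded below by the quantum Cramér-Rao limit:
\qquad \delta\theta \;\ge\; \frac{1}{\sqrt{F_Q}} .
```

The "optimal observable" question is precisely when the first expression saturates the second bound.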
Plain film measurement error in acute displaced midshaft clavicle fractures
Archer, Lori Anne; Hunt, Stephen; Squire, Daniel; Moores, Carl; Stone, Craig; O’Dea, Frank; Furey, Andrew
2016-01-01
Background Clavicle fractures are common and optimal treatment remains controversial. Recent literature suggests operative fixation of acute displaced mid-shaft clavicle fractures (DMCFs) shortened more than 2 cm improves outcomes. We aimed to identify correlation between plain film and computed tomography (CT) measurement of displacement and the inter- and intraobserver reliability of repeated radiographic measurements. Methods We obtained radiographs and CT scans of patients with acute DMCFs. Three orthopedic staff and 3 residents measured radiographic displacement at time zero and 2 weeks later. The CT measurements identified absolute shortening in 3 dimensions (by subtracting the length of the fractured from the intact clavicle). We then compared shortening measured on radiographs and shortening measured in 3 dimensions on CT. Interobserver and intraobserver reliability were calculated. Results We reviewed the fractures of 22 patients. Bland–Altman repeatability coefficient calculations indicated that radiograph and CT measurements of shortening could not be correlated owing to an unacceptable amount of measurement error (6 cm). Interobserver reliability for plain radiograph measurements was excellent (Cronbach α = 0.90). Likewise, intraobserver reliabilities for plain radiograph measurements as calculated with paired t tests indicated excellent correlation (p > 0.05 in all but 1 observer [p = 0.04]). Conclusion To establish shortening as an indication for DMCF fixation, reliable measurement tools are required. The low correlation between plain film and CT measurements we observed suggests further research is necessary to establish what imaging modality reliably predicts shortening. Our results indicate weak correlation between radiograph and CT measurement of acute DMCF shortening. PMID:27438054
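For reference, the Bland-Altman quantities used above reduce to the mean paired difference (bias) and bias ± 1.96 × SD of the differences (limits of agreement). A sketch on synthetic paired measurements (the data below are invented, not the study's):

```python
import numpy as np

def bland_altman(a, b):
    """Return (bias, (lower, upper)) limits of agreement for paired data."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diffs = a - b
    bias = diffs.mean()
    sd = diffs.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

rng = np.random.default_rng(1)
true_short = rng.uniform(5.0, 30.0, 22)            # 22 fractures, shortening in mm
xray = true_short + rng.normal(0.0, 2.0, 22)       # noisier plain-film measurement
ct = true_short + rng.normal(0.0, 0.5, 22)         # more precise CT measurement
bias, (lo, hi) = bland_altman(xray, ct)
```

A wide interval (lo, hi) relative to the clinical decision threshold (here, 2 cm) is exactly the kind of result that rules out interchangeable use of the two modalities.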
Dealing with dietary measurement error in nutritional cohort studies.
Freedman, Laurence S; Schatzkin, Arthur; Midthune, Douglas; Kipnis, Victor
2011-07-20
Dietary measurement error creates serious challenges to reliably discovering new diet-disease associations in nutritional cohort studies. Such error causes substantial underestimation of relative risks and reduction of statistical power for detecting associations. On the basis of data from the Observing Protein and Energy Nutrition Study, we recommend the following approaches to deal with these problems. Regarding data analysis of cohort studies using food-frequency questionnaires, we recommend 1) using energy adjustment for relative risk estimation; 2) reporting estimates adjusted for measurement error along with the usual relative risk estimates, whenever possible (this requires data from a relevant, preferably internal, validation study in which participants report intakes using both the main instrument and a more detailed reference instrument such as a 24-hour recall or multiple-day food record); 3) performing statistical adjustment of relative risks, based on such validation data, if they exist, using univariate (only for energy-adjusted intakes such as densities or residuals) or multivariate regression calibration. We note that whereas unadjusted relative risk estimates are biased toward the null value, statistical significance tests of unadjusted relative risk estimates are approximately valid. Regarding study design, we recommend increasing the sample size to remedy loss of power; however, it is important to understand that this will often be an incomplete solution because the attenuated signal may be too small to distinguish from unmeasured confounding in the model relating disease to reported intake. Future work should be devoted to alleviating the problem of signal attenuation, possibly through the use of improved self-report instruments or by combining dietary biomarkers with self-report instruments.
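The attenuation and regression-calibration ideas above can be sketched numerically. This is synthetic data with a known attenuation factor; in practice the factor comes from a validation study, not from the (unobservable) true intakes:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
true_intake = rng.normal(0.0, 1.0, n)
reported = true_intake + rng.normal(0.0, 1.0, n)    # noisy self-report (classical error)
outcome = 0.5 * true_intake + rng.normal(0.0, 1.0, n)

# Naive slope of outcome on the mismeasured exposure is attenuated by
# lambda = var(true) / var(observed) (~0.5 here).
naive_slope = np.cov(reported, outcome)[0, 1] / np.var(reported, ddof=1)
lam = np.var(true_intake, ddof=1) / np.var(reported, ddof=1)

# Regression calibration: divide the naive slope by the attenuation factor.
corrected = naive_slope / lam
```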
Sia, Isaac; Carvajal, Pamela; Carnaby-Mann, Giselle D; Crary, Michael A
2012-06-01
Video fluoroscopy is commonly used in the study of swallowing kinematics. However, various procedures used in linear measurements obtained from video fluoroscopy may contribute to increased variability or measurement error. This study evaluated the influence of calibration referent and image rotation on measurement variability for hyoid and laryngeal displacement during swallowing. Inter- and intrarater reliabilities were also estimated for hyoid and laryngeal displacement measurements across conditions. The use of different calibration referents did not contribute significantly to variability in measures of hyoid and laryngeal displacement but image rotation affected horizontal measures for both structures. Inter- and intrarater reliabilities were high. Using the 95% confidence interval as the error index, measurement error was estimated to range from 2.48 to 3.06 mm. These results address procedural decisions for measuring hyoid and laryngeal displacement in video fluoroscopic swallowing studies.
Anderson, K.K.
1994-05-01
Measurement error modeling is a statistical approach to the estimation of unknown model parameters which takes into account the measurement errors in all of the data. Approaches which ignore the measurement errors in so-called independent variables may yield inferior estimates of unknown model parameters. At the same time, experiment-wide variables (such as physical constants) are often treated as known without error, when in fact they were produced from prior experiments. Realistic assessments of the associated uncertainties in the experiment-wide variables can be utilized to improve the estimation of unknown model parameters. A maximum likelihood approach to incorporate measurements of experiment-wide variables and their associated uncertainties is presented here. An iterative algorithm is presented which yields estimates of unknown model parameters and their estimated covariance matrix. Further, the algorithm can be used to assess the sensitivity of the estimates and their estimated covariance matrix to the given experiment-wide variables and their associated uncertainties.
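A minimal concrete instance of estimation with measurement errors in all variables (an illustrative sketch, not this report's algorithm) is Deming regression: the maximum likelihood line fit when both x and y are noisy with known error-variance ratio delta:

```python
import numpy as np

def deming(x, y, delta=1.0):
    """ML line fit when both x and y carry error; delta = var(y-err)/var(x-err)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4.0 * delta * sxy ** 2)
             ) / (2.0 * sxy)
    return slope, y.mean() - slope * x.mean()

rng = np.random.default_rng(3)
t = rng.uniform(0.0, 10.0, 2000)                  # latent true values
x = t + rng.normal(0.0, 0.8, 2000)                # noisy "independent" variable
y = 2.0 * t + 1.0 + rng.normal(0.0, 0.8, 2000)    # noisy response; delta = 1
slope, intercept = deming(x, y)
```

Ordinary least squares on the same data would be biased toward zero because it treats x as error-free, which is the pitfall the abstract describes.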
Optical refractive synchronization: bit error rate analysis and measurement
NASA Astrophysics Data System (ADS)
Palmer, James R.
1999-11-01
The direction of this paper is to describe the analytical tools and measurement techniques used at SilkRoad to evaluate the optical and electrical signals used in Optical Refractive Synchronization for transporting SONET signals across the transmission fiber. Fundamentally, the direction of this paper is to provide an outline of how SilkRoad, Inc., transports a multiplicity of SONET signals across a distance of fiber > 100 km without amplification or regeneration of the optical signal, i.e., one laser over one fiber. Test and measurement data are presented to reflect how the SilkRoad technique of Optical Refractive Synchronization is employed to provide a zero bit error rate for transmission of multiple OC-12 and OC-48 SONET signals that are sent over a fiber optical cable which is > 100 km. The recovery and transformation modules are described for the modification and transportation of these SONET signals.
Sampling errors in the measurement of rain and hail parameters
NASA Technical Reports Server (NTRS)
Gertzman, H. S.; Atlas, D.
1977-01-01
Attention is given to a general derivation of the fractional standard deviation (FSD) of any integrated property X such that X(D) = cD to the n. This work extends that of Joss and Waldvogel (1969). The equation is applicable to measuring integrated properties of cloud, rain or hail populations (such as water content, precipitation rate, kinetic energy, or radar reflectivity) which are subject to statistical sampling errors due to the Poisson distributed fluctuations of particles sampled in each particle size interval and the weighted sum of the associated variances in proportion to their contribution to the integral parameter to be measured. Universal curves are presented which are applicable to the exponential size distribution permitting FSD estimation of any parameters from n = 0 to n = 6. The equations and curves also permit corrections for finite upper limits in the size spectrum and a realistic fall speed law.
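The Poisson sampling effect can be illustrated by Monte Carlo (illustrative parameters, not the paper's universal curves): the fractional standard deviation of X = Σ Dⁿ over the particles in a sample volume grows with n, because high moments are dominated by rare large particles:

```python
import numpy as np

def fsd_of_moment(n, mean_count=100, d_mean=1.0, trials=20_000, seed=4):
    """FSD of X = sum(D**n) for Poisson counts and exponential sizes."""
    rng = np.random.default_rng(seed)
    x = np.empty(trials)
    for i in range(trials):
        k = rng.poisson(mean_count)        # particles actually sampled
        d = rng.exponential(d_mean, k)     # exponential size spectrum
        x[i] = np.sum(d ** n)
    return x.std(ddof=1) / x.mean()

# n = 0 is just the drop count, so its FSD is ~1/sqrt(mean_count) = 0.10;
# n = 6 (a radar-reflectivity-like moment) is far noisier at the same count.
fsd_counts = fsd_of_moment(0)
fsd_reflectivity = fsd_of_moment(6)
```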
Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements
NASA Technical Reports Server (NTRS)
Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.
2014-01-01
This paper discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 4x2 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from cubesats, unmanned air vehicles (UAV), and commercial aircraft.
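As background (a sketch with assumed QPSK symbols and noise levels, not the measured link), EVM is the RMS error vector between received and ideal constellation points, normalized by the RMS ideal symbol magnitude:

```python
import numpy as np

def evm_percent(received, ideal):
    """Error vector magnitude in percent of RMS ideal symbol magnitude."""
    received, ideal = np.asarray(received), np.asarray(ideal)
    err = np.sqrt(np.mean(np.abs(received - ideal) ** 2))
    ref = np.sqrt(np.mean(np.abs(ideal) ** 2))
    return 100.0 * err / ref

rng = np.random.default_rng(6)
# Unit-energy QPSK symbols plus additive complex Gaussian noise
ideal = (rng.choice([-1, 1], 500) + 1j * rng.choice([-1, 1], 500)) / np.sqrt(2)
received = ideal + rng.normal(0.0, 0.05, 500) + 1j * rng.normal(0.0, 0.05, 500)
evm = evm_percent(received, ideal)    # ~7% for this assumed noise level
```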
Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements
NASA Technical Reports Server (NTRS)
Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.
2014-01-01
This presentation discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 2x4 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from CubeSats, unmanned air vehicles (UAV), and commercial aircraft.
NASA Technical Reports Server (NTRS)
Reinath, M. S.
1985-01-01
A technique for obtaining orthogonal velocity components from nonorthogonal measurements using the NASA Ames Research Center Long-Range Laser Velocimeter (LRLV) is briefly discussed. A description is then presented of the error that occurs when these nonorthogonal measurements are spatially noncoincident because of positioning inaccuracies, and equations are developed for predicting this error. Sample data are presented and a prediction of the expected error for two typical applications is made. To cover other cases in general, a parametric study is conducted and the results are presented in a tabular format.
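The orthogonalization step can be sketched as a 2x2 linear solve: each line-of-sight measurement is the projection of the true velocity onto a beam direction, so inverting the projection matrix recovers the orthogonal components. The beam angles and velocity below are illustrative, not the LRLV's actual geometry:

```python
import numpy as np

def orthogonal_components(m, angles_deg):
    """Recover orthogonal (u, w) from two line-of-sight projections m."""
    E = np.array([[np.cos(np.radians(a)), np.sin(np.radians(a))]
                  for a in angles_deg])
    return np.linalg.solve(E, np.asarray(m, float))

v_true = np.array([10.0, 2.0])     # assumed true (u, w) in m/s
angles = (10.0, 55.0)              # assumed nonorthogonal beam directions
# Simulated line-of-sight measurements: projection of v_true on each beam
m = [np.cos(np.radians(a)) * v_true[0] + np.sin(np.radians(a)) * v_true[1]
     for a in angles]
u, w = orthogonal_components(m, angles)
```

Spatial noncoincidence of the two beams adds an error on top of this ideal inversion, which is the effect the paper quantifies.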
Characterization of Measurement Error Sources in Doppler Global Velocimetry
NASA Technical Reports Server (NTRS)
Meyers, James F.; Lee, Joseph W.; Schwartz, Richard J.
2001-01-01
Doppler global velocimetry uses the absorption characteristics of iodine vapor to provide instantaneous three-component measurements of flow velocity within a plane defined by a laser light sheet. Although the technology is straightforward, its utilization as a flow diagnostics tool requires hardening of the optical system and careful attention to detail during data acquisition and processing if routine use in wind tunnel applications is to be achieved. A development program that reaches these goals is presented. Theoretical and experimental investigations were conducted on each technology element to determine methods that increase measurement accuracy and repeatability. Enhancements resulting from these investigations included methods to ensure iodine vapor calibration stability, single frequency operation of the laser and image alignment to sub-pixel accuracies. Methods were also developed to improve system calibration, and eliminate spatial variations of optical frequency in the laser output, spatial variations in optical transmissivity and perspective and optical distortions in the data images. Each of these enhancements is described and experimental examples given to illustrate the improved measurement performance obtained by the enhancement. The culmination of this investigation was the measured velocity profile of a rotating wheel resulting in a 1.75% error in the mean with a standard deviation of 0.5 m/s. Comparing measurements of a jet flow with corresponding Pitot measurements validated the use of these methods for flow field applications.
Error correction for Moiré based creep measurement system
NASA Astrophysics Data System (ADS)
Liao, Yi; Harding, Kevin G.; Nieters, Edward J.; Tait, Robert W.; Hasz, Wayne C.; Piche, Nicole
2014-05-01
Due to the high temperatures and stresses present in the high-pressure section of a gas turbine, the airfoils experience creep, or radial stretching. Manufacturers are now putting in place condition-based maintenance programs in which the condition of individual components is assessed to determine their remaining lives. Accurately assessing creep, in order to track this effect and predict its impact on part life, has become an important engineering challenge. One approach to measuring creep uses moiré imaging: with pad-print technology, a grating pattern can be printed directly on a turbine bucket and compared against a reference pattern built into the creep measurement system to create a moiré interference pattern. The authors assembled a creep measurement prototype for this application. By measuring the frequency change of the moiré fringes, it is then possible to determine the local creep distribution. However, since the sensitivity requirement for the creep measurement is very stringent (0.1 micron), the measurement result can easily be offset by optical system aberrations, tilt, and magnification. In this paper, a mechanical specimen subjected to a tensile test to induce plastic deformation up to 4% in the gage was used to evaluate the system. The results show some offset compared with the readings from a strain gage and an extensometer. By using a new grating pattern with two subset patterns, it was possible to correct these offset errors.
Bayesian adjustment for covariate measurement errors: a flexible parametric approach.
Hossain, Shahadut; Gustafson, Paul
2009-05-15
In most epidemiological investigations, the study units are people, the outcome variable (or the response) is a health-related event, and the explanatory variables are usually environmental and/or socio-demographic factors. The fundamental task in such investigations is to quantify the association between the explanatory variables (covariates/exposures) and the outcome variable through a suitable regression model. The accuracy of such quantification depends on how precisely the relevant covariates are measured. In many instances, we cannot measure some of the covariates accurately. Rather, we can measure noisy (mismeasured) versions of them. In statistical terminology, mismeasurement in continuous covariates is known as measurement errors or errors-in-variables. Regression analyses based on mismeasured covariates lead to biased inference about the true underlying response-covariate associations. In this paper, we suggest a flexible parametric approach for avoiding this bias when estimating the response-covariate relationship through a logistic regression model. More specifically, we consider the flexible generalized skew-normal and the flexible generalized skew-t distributions for modeling the unobserved true exposure. For inference and computational purposes, we use Bayesian Markov chain Monte Carlo techniques. We investigate the performance of the proposed flexible parametric approach in comparison with a common flexible parametric approach through extensive simulation studies. We also compare the proposed method with the competing flexible parametric method on a real-life data set. Though emphasis is put on the logistic regression model, the proposed method is unified and is applicable to the other generalized linear models, and to other types of non-linear regression models as well. (c) 2009 John Wiley & Sons, Ltd.
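To illustrate the bias this paper addresses (using a hand-rolled Newton logistic fit on synthetic data, not the authors' Bayesian machinery), classical measurement error in a covariate attenuates the estimated logistic slope toward zero:

```python
import numpy as np

def logit_slope(x, y, iters=25):
    """Slope from a plain Newton-Raphson logistic regression fit."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)                       # IRLS weights
        grad = X.T @ (y - p)
        hess = X.T @ (X * W[:, None])
        beta = beta + np.linalg.solve(hess, grad)
    return beta[1]

rng = np.random.default_rng(5)
n = 20_000
true_x = rng.normal(0.0, 1.0, n)
noisy_x = true_x + rng.normal(0.0, 1.0, n)       # classical measurement error
p = 1.0 / (1.0 + np.exp(-true_x))                # true log-odds slope = 1.0
y = rng.binomial(1, p).astype(float)

b_true = logit_slope(true_x, y)      # close to the true slope of 1.0
b_noisy = logit_slope(noisy_x, y)    # attenuated toward zero
```

Correcting this bias by modeling the unobserved true exposure, with flexible distributions for it, is exactly the role of the approach the abstract proposes.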
Kunz, Michael
2015-01-01
In this paper, three analysis procedures for repeated correlated binary data with no a priori ordering of the measurements are described and subsequently investigated. An example of correlated binary data is the set of binary assessments of subjects obtained by several raters in the framework of a clinical trial. This topic is especially relevant when success criteria have to be defined for dedicated imaging trials involving several raters conducted for regulatory purposes. First, an analytical result on the expectation of the 'Majority rater' is presented when only the marginal distributions of the single raters are given. The paper provides a simulation study in which all three analysis procedures are compared for a particular setting. It turns out that in many cases 'Average rater' is associated with a gain in power. Settings were identified in which 'Majority significant' has favorable properties. 'Majority rater' is in many cases difficult to interpret. Copyright © 2014 John Wiley & Sons, Ltd.
Explaining sexual harassment judgments: looking beyond gender of the rater.
O'Connor, Maureen; Gutek, Barbara A; Stockdale, Margaret; Geer, Tracey M; Melançon, Renée
2004-02-01
In two decades of research on sexual harassment, one finding that appears repeatedly is that the gender of the rater influences judgments about sexual harassment, such that women are more likely than men to label behavior as sexual harassment. Yet sexual harassment judgments are complex, particularly in situations that culminate in legal proceedings, and this one variable, gender, may have been overemphasized to the exclusion of other situational and rater-characteristic variables. Moreover, why do gender differences appear? As work by Wiener and his colleagues has done (R. L. Wiener et al., 2002; R. L. Wiener & L. Hurt, 2000; R. L. Wiener, L. Hurt, B. Russell, K. Mannen, & C. Gasper, 1997), this study attempts to look beyond gender to answer this question. In the studies reported here, raters (undergraduates and community adults) either read a written scenario or viewed a videotaped reenactment of a sexual harassment trial. The nature of the work environment was manipulated to see what effect, if any, the context would have on gender effects. Additionally, a number of rater characteristics beyond gender were measured, including raters' ambivalent sexism attitudes, their judgments of complainant credibility, and self-referencing that might help explain rater judgments. Respondent gender, work environment, and community vs. student sample differences produced reliable differences in sexual harassment ratings in both the written and video trial versions of the study. The gender and sample differences in the sexual harassment ratings, however, are explained by a model that incorporates hostile sexism, perceptions of the complainant's credibility, and raters' own ability to put themselves in the complainant's position (self-referencing).
Regional distribution of measurement error in diffusion tensor imaging.
Marenco, Stefano; Rawlings, Robert; Rohde, Gustavo K; Barnett, Alan S; Honea, Robyn A; Pierpaoli, Carlo; Weinberger, Daniel R
2006-06-30
The characterization of measurement error is critical in assessing the significance of diffusion tensor imaging (DTI) findings in longitudinal and cohort studies of psychiatric disorders. We studied 20 healthy volunteers, each one scanned twice (average interval between scans of 51 ± 46.8 days) with a single-shot echo planar DTI technique. Intersession variability for fractional anisotropy (FA) and Trace (D) was represented as absolute variation (standard deviation within subjects: SDw), percent coefficient of variation (CV), and intra-class correlation coefficient (ICC). The values from the two sessions were compared for statistical significance with repeated measures analysis of variance or a non-parametric equivalent of a paired t-test. The results showed good reproducibility for both FA and Trace (CVs below 10% and ICCs at or above 0.70 in most regions of interest) and evidence of systematic global changes in Trace between scans. The regional distribution of reproducibility described here has implications for the interpretation of regional findings and for rigorous pre-processing. The regional distribution of reproducibility measures was different for SDw, CV, and ICC. Each one of these measures reveals complementary information that needs to be taken into consideration when performing statistical operations on groups of DT images.
Defining Uncertainty and Error in Planktic Foraminiferal Oxygen Isotope Measurements
NASA Astrophysics Data System (ADS)
Fraass, A. J.; Lowery, C.
2016-12-01
Foraminifera are the backbone of paleoceanography, and planktic foraminifera are one of the leading tools for reconstructing water column structure. Currently, there are unconstrained variables when dealing with the reproducibility of oxygen isotope measurements. This study presents the first results from a simple model of foraminiferal calcification (Foraminiferal Isotope Reproducibility Model; FIRM), designed to estimate the precision and accuracy of oxygen isotope measurements. FIRM produces synthetic isotope data using parameters including location, depth habitat, season, number of individuals included in measurement, diagenesis, misidentification, size variation, and vital effects. Reproducibility is then tested using Monte Carlo simulations. The results from a series of experiments show that reproducibility is largely controlled by the number of individuals in each measurement, but also strongly a function of local oceanography if the number of individuals is held constant. Parameters like diagenesis or misidentification have an impact on both the precision and the accuracy of the data. Currently FIRM is a tool to estimate isotopic error values best employed in the Holocene. It is also a tool to explore the impact of myriad factors on the fidelity of paleoceanographic records. FIRM was constructed in the open-source computing environment R and is freely available via GitHub. We invite modification and expansion, and have planned inclusions for benthic foram reproducibility and stratigraphic uncertainty.
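The core of the reproducibility experiment described above — repeated synthetic measurements, each averaging several individual shells — can be sketched as a toy Monte Carlo. All parameter values below are made up for illustration; the actual FIRM model is written in R and includes many additional factors (diagenesis, misidentification, size variation, vital effects):

```python
import random
import statistics

random.seed(0)

def measure(n_individuals, true_d18o=-1.0, pop_sd=0.5):
    """One synthetic oxygen-isotope measurement: the mean delta-18O of
    n_individuals shells drawn from a population whose spread (pop_sd,
    per mil; a made-up value) stands in for seasonal/habitat variability."""
    shells = [random.gauss(true_d18o, pop_sd) for _ in range(n_individuals)]
    return statistics.fmean(shells)

def reproducibility(n_individuals, reps=5000):
    """1-sigma spread across repeated measurements (Monte Carlo)."""
    return statistics.stdev(measure(n_individuals) for _ in range(reps))

# Reproducibility improves roughly as 1/sqrt(N individuals per measurement).
print(reproducibility(1))    # near pop_sd
print(reproducibility(20))   # near pop_sd / sqrt(20)
```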
Horizon sensor errors calculated by computer models compared with errors measured in orbit
NASA Technical Reports Server (NTRS)
Ward, K. A.; Hogan, R.; Andary, J.
1982-01-01
Using a computer program to model the earth's horizon and to duplicate the signal processing procedure employed by the ESA (Earth Sensor Assembly), errors due to radiance variation have been computed for a particular time of the year. Errors actually occurring in flight at the same time of year are inferred from integrated rate gyro data for a satellite of the TIROS series of NASA weather satellites (NOAA-A). The predicted performance is compared with actual flight history.
Holm, Inger; Tveter, Anne Therese; Aulie, Vibeke Smith; Stuge, Britt
2013-02-01
The aim of the present study was to evaluate the intra- and inter-tester reliability of the Movement Assessment Battery for Children - Second Edition (MABC-2), Age Band 2. We wanted to analyze the collected data with adequate statistical methods to provide relevant recommendations for physical therapists who interpret changes in the context of daily clinical practice. Forty-five healthy children, 23 girls and 22 boys with a mean age of 8.7±0.7 years, participated in the study; the inter-tester procedures were performed the same day and the intra-tester procedures within a one- to two-week interval. The statistical methods used were the intra-class correlation coefficient (ICC), standard error of measurement (SEM), and smallest detectable change (SDC). The children had no failed items during the tests. The ICC values ranged from 0.23 to 0.76. The items "threading lace" and "one-board balance" showed the highest measurement errors for both intra- and inter-rater reliability. The SDC(90%) values were 9.7 and 18.5 for intra- and inter-rater reliability, respectively. The present study showed high intra- and inter-rater chance variation in the MABC-2, Age Band 2. A change of more than ±9.7 or ±18.5 on the total test score (TTS) should be required to state (with 90% confidence) that a real change in a single individual has occurred, for intra- and inter-rater testing, respectively. These findings may indicate that the MABC-2 is more suitable for diagnostic or clinical decision-making purposes than for evaluation of change over time.
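The SEM and SDC statistics reported in abstracts like the one above follow standard relations: SEM = SD·√(1 − ICC), and SDC = z·√2·SEM with z = 1.645 at the 90% level. A minimal sketch with illustrative values (not the study's data):

```python
import math

def sem(sd, icc):
    """Standard error of measurement: SEM = SD * sqrt(1 - ICC)."""
    return sd * math.sqrt(1.0 - icc)

def sdc(sd, icc, z=1.645):
    """Smallest detectable change; z = 1.645 gives the 90% level."""
    return z * math.sqrt(2.0) * sem(sd, icc)

# Illustrative values only (between-subject SD of 10 score points, ICC of 0.75).
print(sem(10.0, 0.75))   # 5.0
print(sdc(10.0, 0.75))   # about 11.6
```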
Error separation technique for measuring aspheric surface based on dual probes
NASA Astrophysics Data System (ADS)
Wei, Zhong-wei; Jing, Hong-wei; Kuang, Long; Wu, Shi-bin
2013-09-01
In this paper, we present an error separation method based on dual probes for the swing arm profilometer (SAP) to calibrate rotary table errors. The two probes and the rotation axis of the swing arm lie in a plane, and the scanning tracks cross each other as both probes scan the mirror from edge to edge. Since the surface heights should ideally be the same at these scanning crossings, the height information at the crossings can be used to calibrate the rotary table errors. However, the crossing heights also contain swing-arm air-bearing errors and probe measurement errors, which seriously affect the correction accuracy of the rotary table errors. Because the air-bearing and probe measurement errors are randomly distributed, we use a least-squares method to remove them. We present the geometry of the dual-probe swing arm profilometer system and the profiling pattern made by both probes, and we analyze the influence of the probe separation on the measurement results. The algorithm for stitching the scans together into a surface is also presented. The difference of the surface heights at the crossings of adjacent scans is used to find a transformation that describes the rotary table errors and then to correct for them. To show that the dual-probe error separation method can successfully calibrate the rotary table errors, we establish an SAP error model and simulate the method's effect on the calibration.
Exploring Measurement Error with Cookies: A Real and Virtual Approach via Interactive Excel
ERIC Educational Resources Information Center
Sinex, Scott A; Gage, Barbara A.; Beck, Peggy J.
2007-01-01
A simple, guided-inquiry investigation using stacked sandwich cookies is employed to develop a simple linear mathematical model and to explore measurement error by incorporating errors as part of the investigation. Both random and systematic errors are presented. The model and errors are then investigated further by engaging with an interactive…
Multiple reflections in a photoelastic modulator: errors in polarization measurement
NASA Astrophysics Data System (ADS)
Gemeiner, P.; Yang, D.; Canit, J. C.
1996-09-01
The use of a coherent light source (laser) can lead to significant errors when measurements of optical activity, magneto-optical Kerr rotation, dichroism, or ellipsometric parameters are made with a photoelastic modulator. In particular, interference occurs between beams arising from multiple reflections in the modulator. This interference gives rise to parasitic effects that depend on the characteristics of the modulator on the one hand and on the wavelength of the light on the other. A variation in temperature modifies these artefacts. The artefacts have been observed experimentally, and their amplitude is in good agreement with theoretical predictions based on a calculation of the interference. The amplitude of an artefact may reach one degree of angle in the case of optical activity and five thousandths in the case of dichroism measurements. We have shown experimentally that these effects can be cancelled by inclining the modulator with respect to the axis of the light beam or by using a new modulator with a trapezoidal section.
Dunleavy, Kim; Mariano, Herman; Wiater, Timothy; Goldberg, Allon
2010-01-01
The purposes of this study were to: 1) investigate the inter-rater and intra-rater reliability of use of the Flexicurve for measurement of spinal length (L), thoracic (TL) and lumbar length (LL), thoracic (TW) and lumbar width (LW), and 2) quantify measurement error and minimal detectable change at the 95% CI (MDC95) for the same measurements. Flexicurve measurements of the thoracolumbar spine were recorded by two examiners in standing. Intra-class correlation coefficients were calculated to determine the intra- and inter-rater reliability. Measurement error and MDC95 were calculated to determine length and width measurements that would constitute real change in spinal curvature. Thoracolumbar length (L) measurements had the highest degree of intra-rater reliability (0.93), while TL, TW, LL, LW showed moderate to good intra-rater reliability (0.61-0.80). Inter-rater reliability for all measurements was moderate (0.58-0.72). Measurement error was moderate to high for TW, LL, and LW (15-25%), and low for L and TL (1-6%). The %MDC95 for TW, LL, and LW found in this study was high (>40%), but was low for L (3.5%). Thoracolumbar length measurement with the Flexicurve showed good intra-rater reliability, low measurement error, and low MDC95 and may be a useful measure in clinical practice.
Impact of measurement error in the study of sexually transmitted infections.
Myer, L; Morroni, C; Link, B G
2004-08-01
Measurement is a fundamental part of all scientific research, and the introduction of errors of different sorts is an inevitable part of the measurement process in epidemiological and clinical research. Despite the ubiquity of measurement error in research, the substantial impacts that measurement error can have on data and subsequent study inferences are frequently overlooked. This review introduces the basic concepts of measurement error that are most relevant to the study of sexually transmitted infections and demonstrates the impacts of several of the most common forms of measurement error on study results. A self-assessment test and multiple-choice questions (MCQs) follow this paper.
Gómez-Cabello, Alba; Vicente-Rodríguez, Germán; Albers, Ulrike; Mata, Esmeralda; Rodriguez-Marroyo, Jose A.; Olivares, Pedro R.; Gusi, Narcis; Villa, Gerardo; Aznar, Susana; Gonzalez-Gross, Marcela; Casajús, Jose A.; Ara, Ignacio
2012-01-01
Background The elderly EXERNET multi-centre study aims to collect normative anthropometric data for old functionally independent adults living in Spain. Purpose To describe the standardization process and reliability of the anthropometric measurements carried out in the pilot study and during the final workshop, examining both intra- and inter-rater errors for measurements. Materials and Methods A total of 98 elderly from five different regions participated in the intra-rater error assessment, and 10 different seniors living in the city of Toledo (Spain) participated in the inter-rater assessment. We examined both intra- and inter-rater errors for heights and circumferences. Results For height, intra-rater technical errors of measurement (TEMs) were smaller than 0.25 cm. For circumferences and knee height, TEMs were smaller than 1 cm, except for waist circumference in the city of Cáceres. Reliability for heights and circumferences was greater than 98% in all cases. Inter-rater TEMs were 0.61 cm for height, 0.75 cm for knee-height and ranged between 2.70 and 3.09 cm for the circumferences measured. Inter-rater reliabilities for anthropometric measurements were always higher than 90%. Conclusion The harmonization process, including the workshop and pilot study, guarantee the quality of the anthropometric measurements in the elderly EXERNET multi-centre study. High reliability and low TEM may be expected when assessing anthropometry in elderly population. PMID:22860013
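The technical error of measurement (TEM) used in anthropometric studies like this one is conventionally computed from paired repeat measurements as √(Σd²/2n), with a reliability coefficient R = 1 − TEM²/SD². A sketch with made-up duplicate height data:

```python
import math
import statistics

def tem(trial1, trial2):
    """Technical error of measurement for paired repeats:
    TEM = sqrt(sum(d^2) / (2 * n))."""
    diffs = [a - b for a, b in zip(trial1, trial2)]
    return math.sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))

def reliability(trial1, trial2):
    """R = 1 - TEM^2 / SD^2, with SD taken over all measurements."""
    sd = statistics.stdev(trial1 + trial2)
    return 1.0 - tem(trial1, trial2) ** 2 / sd ** 2

# Made-up duplicate height measurements (cm) for four subjects.
t1 = [160.1, 172.4, 155.0, 168.2]
t2 = [160.3, 172.1, 155.4, 168.0]
print(tem(t1, t2))          # about 0.20 cm
print(reliability(t1, t2))  # above 0.99
```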
NASA Astrophysics Data System (ADS)
Svoboda, O.; Bach, P.; Yang, J.; Wang, C.
2006-11-01
In a real machine shop environment and under various spindle loads, machine thermal expansion can cause large 3D volumetric positioning errors. With an intelligent controller, it is possible to compensate these errors, provided that the relations between the 3D volumetric positioning errors and the temperature distribution have been measured. A laser vector measurement technique developed by Optodyne was used for a quick measurement of the 3D volumetric positioning errors of a CNC machining center under various spindle loads, machine movements, and ambient conditions. Correlation calculations were used to determine the key temperatures and the various positioning errors. Preliminary results showed that large machine temperature changes caused relatively small changes in straightness errors but large changes in squareness errors. Using the measured position errors, several error maps could be generated. Compensation tables for an actual thermal state can be interpolated to achieve higher accuracy at various thermal loadings.
Quantifying the sources of error in measurements of urine activity
Mozley, P.D.; Kim, H.J.; McElgin, W.
1994-05-01
Accurate scintigraphic measurements of radioactivity in the bladder and voided urine specimens can be limited by scatter, attenuation, and variations in the volume of urine that a given dose is distributed in. The purpose of this study was to quantify some of the errors that these problems can introduce. Transmission scans and 41 conjugate images of the bladder were sequentially acquired on a dual-headed camera over 24 hours in 6 subjects after the intravenous administration of 100-150 MBq (2.7-3.6 mCi) of a novel I-123 labeled benzamide. Renal excretion fractions were calculated by measuring the counts in conjugate images of 41 sequentially voided urine samples. A correction for scatter was estimated by comparing the count rates in images that were acquired with the photopeak centered at 159 keV and images that were made simultaneously with the photopeak centered on 126 keV. The decay- and attenuation-corrected geometric mean activities were compared to images of the net dose injected. Checks of the results were performed by measuring the total volume of each voided urine specimen and determining the activity in a 20 ml aliquot of it with a dose calibrator. Modeling verified the experimental results, which showed that 34% of the counts were attenuated when the bladder had been expanded to a volume of 300 ml. Corrections for attenuation that were based solely on the transmission scans were limited by the volume of non-radioactive urine in the bladder before the activity was administered. The attenuation of activity in images of the voided urine samples was dependent on the geometry of the specimen container. The images of urine in standard, 300 ml laboratory specimen cups had 39 ± 5% fewer counts than images of the same samples laid out in 3 liter bedpans. Scatter through the carbon fiber table substantially increased the number of counts in the images, by an average of 14%.
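Conjugate-view quantification of the kind described above combines anterior and posterior counts as a geometric mean and corrects for attenuation using a transmission factor. A minimal sketch with illustrative numbers (not the study's data; the 0.66 transmission, i.e. 34% attenuation, echoes the bladder example):

```python
import math

def conjugate_view_counts(anterior, posterior, transmission):
    """Attenuation-corrected geometric-mean counts from an anterior/posterior
    image pair. `transmission` is the fraction exp(-mu*d) measured with a
    transmission scan through the same region; dividing by its square root
    applies the standard conjugate-view attenuation correction."""
    return math.sqrt(anterior * posterior) / math.sqrt(transmission)

# Illustrative counts and transmission factor only.
print(conjugate_view_counts(6600.0, 5400.0, 0.66))
```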
Toomey, Elaine; Coote, Susan
2013-01-01
This study investigated the between-rater reliability of the Berg Balance Scale (BBS), 6-Minute Walk test (6MW), and handheld dynamometry (HHD) in people with multiple sclerosis (MS). Previous studies that examined BBS and 6MW reliability in people with MS have not used more than two raters, or analyzed different mobility levels separately. The reliability of HHD has not been previously reported for people with MS. In this study, five physical therapists assessed eight people with MS using the BBS, 6MW, and HHD, resulting in 12 pairs of data. Data were analyzed using intraclass correlation coefficients (ICCs), Spearman correlation coefficients (SCCs), and Bland and Altman methods. The results suggest excellent agreement for the BBS (SCC = 0.95, mean difference between raters [d̄] = 2.08, standard error of measurement [SEM] = 1.77) and 6MW (ICC = 0.98, d̄ = 5.22 m, SEM = 24.76 m) when all mobility levels are analyzed together. Reliability is lower in less mobile people with MS (BBS SCC = 0.6, d̄ = -1.83; 6MW ICC = 0.95, d̄ = 20.04 m). Although the ICC and SCC results for HHD suggest good-to-excellent reliability (0.65-0.85), d̄ ranges up to 17.83 N, with SEM values as high as 40.95 N. While the small sample size is a limitation of this study, the preliminary evidence suggests strong agreement between raters for the BBS and 6MW and decreased agreement between raters for people with greater mobility problems. The mean differences between raters for HHD are probably too high for it to be applied in clinical practice.
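The Bland and Altman statistics reported in this abstract (mean difference d̄ and limits of agreement) can be computed directly; a minimal sketch with hypothetical paired scores from two raters, not the study's data:

```python
import statistics

def bland_altman(rater_a, rater_b):
    """Mean difference (bias) and 95% limits of agreement between raters."""
    diffs = [a - b for a, b in zip(rater_a, rater_b)]
    bias = statistics.fmean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired balance-scale scores from two raters.
a = [50, 44, 38, 52, 47, 30]
b = [48, 45, 36, 50, 46, 31]
bias, (lo, hi) = bland_altman(a, b)
print(bias, lo, hi)   # bias near 0.83, limits near -2.05 and 3.72
```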
Measurement error in grip and pinch force measurements in patients with hand injuries.
Schreuders, Ton A R; Roebroeck, Marij E; Goumans, Janine; van Nieuwenhuijzen, Johan F; Stijnen, Theo H; Stam, Henk J
2003-09-01
There is limited documentation of measurement error for grip and pinch force evaluation methods. The purposes of this study were (1) to determine indexes of measurement error for intraexaminer and interexaminer measurements of grip and pinch force in patients with hand injuries and (2) to investigate whether the measurement error differs between measurements of the injured and noninjured hands and between experienced and inexperienced examiners. The subjects were a consecutive sample of 33 patients with hand injuries who were seen in the Department of Rehabilitation Medicine of Erasmus MC-University Medical Center Rotterdam in the Netherlands. Repeated measurements were taken of grip and pinch force, with a short break of 2 to 3 minutes between sessions. Data were obtained on both hands of the subjects by an experienced examiner and an inexperienced examiner for grip force in 2 handle positions (distances between handles of 4.6 and 7.2 cm), tip pinch (with the index finger on top and the thumb below, and the other fingers flexed), and key pinch force (with the thumb on top and the radial side of the index finger below). Intraclass correlation coefficients (ICCs), standard errors of measurement (SEMs), and associated smallest detectable differences (SDDs) were calculated and compared with data from previous studies. The reliability of the measurements was expressed by ICCs between .82 and .97. For grip force measurements (in the second handle position) by the experienced examiner, an SDD of 61 N was found. For tip pinch and key pinch, these values were 12 N and 11 N, respectively. For measurements by the inexperienced examiner, SDDs of 56 N for grip force and 13 N and 18 N for tip pinch and key pinch were found. Based on the SEMs and SDDs, only relatively large differences in grip and pinch force can be adequately detected between consecutive measurements in individual patients. Measurement error did not differ between injured and noninjured hands.
On the reliability and standard errors of measurement of contrast measures from the D-KEFS.
Crawford, John R; Sutherland, David; Garthwaite, Paul H
2008-11-01
A formula for the reliability of difference scores was used to estimate the reliability of Delis-Kaplan Executive Function System (D-KEFS; Delis et al., 2001) contrast measures from the reliabilities and correlations of their components. In turn these reliabilities were used to calculate standard errors of measurement. The majority of contrast measures had low reliabilities: of the 51 reliability coefficients calculated in the present study, none exceeded 0.7 and hence all failed to meet any of the criteria for acceptable reliability proposed by various experts in psychological measurement. The mean reliability of the contrast scores was 0.27, the median reliability was 0.30. The standard errors of measurement were large and, in many cases, equaled or were only marginally smaller than the contrast scores' standard deviations. The results suggest that, at present, D-KEFS contrast measures should not be used in neuropsychological decision making.
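The difference-score reliability formula underlying this kind of analysis is commonly written r_D = (½(r_xx + r_yy) − r_xy) / (1 − r_xy) for components with equal variances, with SEM = SD·√(1 − r). A sketch with illustrative values (not the D-KEFS data):

```python
import math

def diff_score_reliability(r_xx, r_yy, r_xy):
    """Reliability of a difference score X - Y under equal component variances:
    r_D = (0.5 * (r_xx + r_yy) - r_xy) / (1 - r_xy)."""
    return (0.5 * (r_xx + r_yy) - r_xy) / (1.0 - r_xy)

def sem_from_reliability(sd, r):
    """Standard error of measurement: SEM = SD * sqrt(1 - r)."""
    return sd * math.sqrt(1.0 - r)

# Illustrative values: two moderately reliable, correlated component scores.
r_d = diff_score_reliability(0.8, 0.7, 0.6)
print(r_d)                             # 0.375 -- low, as the paper found for most contrasts
print(sem_from_reliability(3.0, r_d))  # SEM large relative to SD = 3
```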
Rater Types in Writing Performance Assessments: A Classification Approach to Rater Variability
ERIC Educational Resources Information Center
Eckes, Thomas
2008-01-01
Research on rater effects in language performance assessments has provided ample evidence for a considerable degree of variability among raters. Building on this research, I advance the hypothesis that experienced raters fall into types or classes that are clearly distinguishable from one another with respect to the importance they attach to…
Effects of Marking Method and Rater Experience on ESL Essay Scores and Rater Performance
ERIC Educational Resources Information Center
Barkaoui, Khaled
2011-01-01
This study examined the effects of marking method and rater experience on ESL (English as a Second Language) essay test scores and rater performance. Each of 31 novice and 29 experienced raters rated a sample of ESL essays both holistically and analytically. Essay scores were analysed using a multi-faceted Rasch model to compare test-takers'…
A Hierarchical Rater Model for Constructed Responses, with a Signal Detection Rater Model
ERIC Educational Resources Information Center
DeCarlo, Lawrence T.; Kim, YoungKoung; Johnson, Matthew S.
2011-01-01
The hierarchical rater model (HRM) recognizes the hierarchical structure of data that arises when raters score constructed response items. In this approach, raters' scores are not viewed as being direct indicators of examinee proficiency but rather as indicators of essay quality; the (latent categorical) quality of an examinee's essay in turn…
Weight-Based Classification of Raters and Rater Cognition in an EFL Speaking Test
ERIC Educational Resources Information Center
Cai, Hongwen
2015-01-01
This study is an attempt to classify raters according to their weighting patterns and explore systematic differences between rater types in the rating process. In the context of an EFL speaking test, 126 raters were classified into three types--form-oriented, balanced, and content-oriented--through cluster analyses of their weighting patterns…
Variance Estimation of Nominal-Scale Inter-Rater Reliability with Random Selection of Raters
ERIC Educational Resources Information Center
Gwet, Kilem Li
2008-01-01
Most inter-rater reliability studies using nominal scales suggest the existence of two populations of inference: the population of subjects (collection of objects or persons to be rated) and that of raters. Consequently, the sampling variance of the inter-rater reliability coefficient can be seen as a result of the combined effect of the sampling…
Another Look at Inter-Rater Agreement. Research Report.
ERIC Educational Resources Information Center
Zwick, Rebecca
Most currently used measures of inter-rater agreement for the nominal case incorporate a correction for "chance agreement." The definition of chance agreement is not the same for all coefficients, however. Three chance-corrected coefficients are Cohen's Kappa; Scott's Pi; and the S index of Bennett, Goldstein, and Alpert, which has…
Assessment of Systematic Measurement Errors for Acoustic Travel-Time Tomography of the Atmosphere
2013-01-01
systematic errors in the travel times that may be significantly larger than random errors and, thus, can significantly affect the quality of the...tomography of the atmosphere. First, the systematic errors are caused by the errors in measurements of the time delays of signal propagation in...hardware and electronic circuits of the tomography array and errors in synchronization of the transmitted and recorded signals. For example, if
Cleffken, Berry; van Breukelen, Gerard; van Mameren, Henk; Brink, Peter; Olde Damink, Steven
2007-01-01
Increasingly, goniometry of elbow motion is used for quantification of research results, yet reliability is often expressed in parameters that are not suitable for comparison of results. We modified Bland and Altman's method, resulting in smallest detectable differences (SDDs). Two raters measured elbow excursions in 42 individuals (144 ratings per test person) with an electronic digital inclinometer in a classical test-retest crossover study design. The SDDs were 0 ± 4.2 degrees for active extension and 0 ± 8.2 degrees for active flexion, both without upper arm fixation; 0 ± 6.3 degrees for active extension, 0 ± 5.7 degrees for active flexion, and 0 ± 7.4 degrees for passive flexion with upper arm fixation; 0 ± 10.1 degrees for active flexion with upper arm retroflexion; and 0 ± 8.5 degrees and 0 ± 10.8 degrees for active and passive range of motion. Differences smaller than these SDDs found in clinical or research settings are attributable to measurement error and do not indicate improvement.
Properties of a Proposed Approximation to the Standard Error of Measurement.
ERIC Educational Resources Information Center
Nitko, Anthony J.
An approximation formula for the standard error of measurement was recently proposed by Garvin. The properties of this approximation to the standard error of measurement are described in this paper and illustrated with hypothetical data. It is concluded that the approximation is a systematic overestimate of the standard error of measurement…
Implications of Three Causal Models for the Measurement of Halo Error.
ERIC Educational Resources Information Center
Fisicaro, Sebastiano A.; Lance, Charles E.
1990-01-01
Three conceptual definitions of halo error are reviewed in the context of causal models of halo error. A corrected correlational measurement of halo error is derived, and the traditional and corrected measures are compared empirically for a 1986 study of 52 undergraduate students' ratings of a lecturer's performance. (SLD)
O'Sullivan, Kieran; Galeotti, Luciana; Dankaerts, Wim; O'Sullivan, Leonard; O'Sullivan, Peter
2011-01-01
Lumbar posture is commonly assessed in non-specific chronic low back pain (NSCLBP), although quantitative measures have mostly been limited to laboratory environments. The BodyGuard™ is a spinal position monitoring device that can monitor posture in real time, both inside and outside the laboratory. The reliability of this wireless device was examined in 18 healthy participants during usual sitting and forward bending, two tasks that are commonly provocative in NSCLBP. Reliability was determined using intraclass correlation coefficients (ICC), the standard error of measurement (SEM), the mean difference and the minimal detectable change (MDC90). Between-day ICC values ranged from 0.84 to 0.87, with small SEM (5%), mean difference (<9%) and MDC90 (<14%) values. Inter-rater ICC values ranged from 0.91 to 0.94, with small SEM (4%), mean difference (6%) and MDC90 (9%) values. Between-day and inter-rater reliability are essential requirements for clinical utility and were excellent in this study. Further studies into the validity of this device and its application in clinical trials in occupational settings are required. STATEMENT OF RELEVANCE: A novel device that can analyse spinal posture exposure in occupational settings in a minimally invasive manner has been developed. This study established that the device has excellent between-day and inter-rater reliability in healthy pain-free subjects. Further studies in people with low back pain are planned.
Sedrez, Juliana A.; Candotti, Cláudia T.; Rosa, Maria I. Z.; Medeiros, Fernanda S.; Marques, Mariana T.; Loss, Jefferson F.
2016-01-01
Introduction: The early evaluation of the spine in children is desirable because it is at this stage of development that the greatest changes in the body structures occur. Objective: To determine the test-retest, intra- and inter-rater reliability of the Flexicurve instrument for the evaluation of spinal curvatures in children. Method: Forty children ranging from 5 to 15 years of age were evaluated by two independent evaluators using the Flexicurve to model the spine. The agreement was evaluated using Intraclass Correlation Coefficients (ICC), Standard Error of the Measurement (SEM), and Minimal Detectable Change (MDC). Results: In relation to thoracic kyphosis, the Flexicurve was shown to have excellent correlation in terms of test-retest reliability (ICC(2,2)=0.87) and moderate correlation in terms of intra- (ICC(2,2)=0.68) and inter-rater reliability (ICC(2,2)=0.72). In relation to lumbar lordosis, it was shown to have moderate correlation in terms of test-retest reliability (ICC(2,2)=0.66) and intra- (ICC(2,2)=0.50) and inter-rater reliability (ICC(2,2)=0.56). Conclusion: This evaluation of the reliability of the Flexicurve allows its use in school screening. However, to monitor spinal curvatures in the sagittal plane in children, complementary clinical measures are necessary. Further studies are required to investigate the concurrent validity of the instrument in order to identify its diagnostic capacity. PMID:26786078
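The SEM and MDC statistics reported throughout these reliability studies derive directly from the ICC and the between-subject standard deviation: SEM = SD·sqrt(1 − ICC), and the 95% minimal detectable change MDC95 = 1.96·sqrt(2)·SEM. A minimal Python sketch (the SD and ICC values below are illustrative, not taken from any study above):

```python
import math

def sem(sd: float, icc: float) -> float:
    # Standard error of measurement: SEM = SD * sqrt(1 - ICC)
    return sd * math.sqrt(1.0 - icc)

def mdc95(sem_value: float) -> float:
    # Minimal detectable change at 95% confidence: MDC95 = 1.96 * sqrt(2) * SEM
    return 1.96 * math.sqrt(2.0) * sem_value

# Illustrative values only
s = sem(sd=2.0, icc=0.87)
m = mdc95(s)
```

The MDC is always larger than the SEM; a measured change smaller than the MDC cannot be distinguished from measurement noise.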
Correlates of Halo Error in Teacher Evaluation.
ERIC Educational Resources Information Center
Moritsch, Brian G.; Suter, W. Newton
1988-01-01
An analysis of 300 undergraduate psychology student ratings of teachers was undertaken to assess the magnitude of halo error and a variety of rater, ratee, and course characteristics. The raters' halo errors were significantly related to student effort in the course, previous experience with the instructor, and class level. (TJH)
NASA Astrophysics Data System (ADS)
Wilson, M. D.; Durand, M.; Jung, H. C.; Alsdorf, D.
2014-08-01
The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water surface elevations. In this paper, we aimed to (i) characterize and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a "virtual mission" for a 300 km reach of the central Amazon (Solimões) River at its confluence with the Purus River, using a hydraulic model to provide water surface elevations according to SWOT spatio-temporal sampling to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water surface elevation measurements for the Amazon mainstem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths of greater than 4 km for the Solimões and 7.5 km for Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-section averaging and 20 km reach lengths, results show Nash-Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6 and 19.1% average overall error in discharge, respectively.
Rater Variables Associated with ITER Ratings
ERIC Educational Resources Information Center
Paget, Michael; Wu, Caren; McIlwrick, Joann; Woloschuk, Wayne; Wright, Bruce; McLaughlin, Kevin
2013-01-01
Advocates of holistic assessment consider the ITER a more authentic way to assess performance. But this assessment format is subjective and, therefore, susceptible to rater bias. Here our objective was to study the association between rater variables and ITER ratings. In this observational study our participants were clerks at the University of…
Accuracy of Surgery Clerkship Performance Raters.
ERIC Educational Resources Information Center
Littlefield, John H.; And Others
1991-01-01
Interrater reliability in numerical ratings of clerkship performance (n=1,482 students) in five surgery programs was studied. Raters were classified as accurate or moderately or significantly stringent or lenient. Results indicate that increasing the proportion of accurate raters would substantially improve the precision of class rankings. (MSE)
Agreement between Two Independent Groups of Raters
ERIC Educational Resources Information Center
Vanbelle, Sophie; Albert, Adelin
2009-01-01
We propose a coefficient of agreement to assess the degree of concordance between two independent groups of raters classifying items on a nominal scale. This coefficient, defined on a population-based model, extends the classical Cohen's kappa coefficient for quantifying agreement between two raters. Weighted and intraclass versions of the…
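The classical two-rater Cohen's kappa that this coefficient generalizes corrects the observed agreement for the agreement expected by chance from each rater's marginal frequencies. A minimal sketch of that two-rater base case:

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters labelling the same items on a nominal scale.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of
    agreement and p_e the agreement expected by chance from the marginals.
    """
    n = len(ratings_a)
    labels = set(ratings_a) | set(ratings_b)
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    p_e = sum((ratings_a.count(l) / n) * (ratings_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

k = cohens_kappa([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1])
```

Kappa is 1 for perfect agreement, 0 when agreement is at chance level, and negative when raters agree less often than chance would predict.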
Effects of Assigning Raters to Items
ERIC Educational Resources Information Center
Sykes, Robert C.; Ito, Kyoko; Wang, Zhen
2008-01-01
Student responses to a large number of constructed response items in three Math and three Reading tests were scored on two occasions using three ways of assigning raters: single reader scoring, a different reader for each response (item-specific), and three readers each scoring a rater item block (RIB) containing approximately one-third of a…
Hou, Maosheng; Qiu, Lirong; Zhao, Weiqian; Wang, Fan; Liu, Entao; Ji, Lin
2014-01-20
To improve the measurement accuracy of the profilometer for large optical surfaces, a new single-step spatial rotation error separation technique (SSEST) is proposed to separate the surface profile error and the spindle spatial rotation error, and a novel SSEST-based system for surface profile measurement is developed. In the separation process, two sets of measured results at the ith measurement circle are obtained before and after rotation of the error separation table; the surface profile error and the spatial rotation error of the spindle can then be determined using discrete Fourier transform and harmonic analysis. Theoretical analyses and experimental results indicate that SSEST can accurately separate the spatial rotation error of the spindle from the measured surface profile results within the range of 1-100 upr and improve the accuracy of surface profile measurements.
Du, Zhengchun; Wu, Zhaoyong; Yang, Jianguo
2016-01-01
The use of three-dimensional (3D) data in the industrial measurement field is becoming increasingly popular because of the rapid development of laser scanning techniques based on the time-of-flight principle. However, the accuracy and uncertainty of these types of measurement methods are seldom investigated. In this study, a mathematical uncertainty evaluation model for the diameter measurement of standard cylindroid components has been proposed and applied to a 3D laser radar measurement system (LRMS). First, a single-point error ellipsoid analysis for the LRMS was established. An error ellipsoid model and algorithm for diameter measurement of cylindroid components was then proposed based on the single-point error ellipsoid. Finally, four experiments were conducted using the LRMS to measure the diameter of a standard cylinder in the laboratory. The experimental results of the uncertainty evaluation consistently matched well with the predictions. The proposed uncertainty evaluation model for cylindrical diameters can provide a reliable method for actual measurements and support further accuracy improvement of the LRMS. PMID:27213385
A heteroscedastic measurement error model for method comparison data with replicate measurements.
Nawarathna, Lakshika S; Choudhary, Pankaj K
2015-03-30
Measurement error models offer a flexible framework for modeling data collected in studies comparing methods of quantitative measurement. These models generally make two simplifying assumptions: (i) the measurements are homoscedastic, and (ii) the unobservable true values of the methods are linearly related. One or both of these assumptions may be violated in practice. In particular, error variabilities of the methods may depend on the magnitude of measurement, or the true values may be nonlinearly related. Data with these features call for a heteroscedastic measurement error model that allows nonlinear relationships in the true values. We present such a model for the case when the measurements are replicated, discuss its fitting, and explain how to evaluate similarity of measurement methods and agreement between them, which are two common goals of data analysis, under this model. Model fitting involves dealing with lack of a closed form for the likelihood function. We consider estimation methods that approximate either the likelihood or the model to yield approximate maximum likelihood estimates. The fitting methods are evaluated in a simulation study. The proposed methodology is used to analyze a cholesterol dataset.
Fell, Matthew; Meirte, Jill; Anthonissen, Mieke; Maertens, Koen; Pleat, Jonathon; Moortgat, Peter
2016-03-01
Objective scar assessment tools were designed to help identify problematic scars and direct clinical management. Their use has been restricted by their measurement of a single scar property and the bulky size of equipment. The Scarbase Duo® was designed to assess both trans-epidermal water loss (TEWL) and colour of a burn scar whilst being compact and easy to use. Twenty patients with a burn scar were recruited and measurements were taken using the Scarbase Duo® by two observers. The Scarbase Duo® measures TEWL via an open-chamber system and undertakes colorimetry via narrow-band spectrophotometry, producing values for relative erythema and melanin pigmentation. Validity was assessed by comparing the Scarbase Duo® against the Dermalab® and the Minolta Chromameter® respectively for TEWL and colorimetry measurements. The intra-class correlation coefficient (ICC) was used to assess reliability with standard error of measurement (SEM) used to assess reproducibility of measurements. The Pearson correlation coefficient (r) was used to assess the convergent validity. The Scarbase Duo® TEWL mode had excellent reliability when used on scars for both intra- (ICC=0.95) and inter-rater (ICC=0.96) measurements with moderate SEM values. The erythema component of the colorimetry mode showed good reliability for use on scars for both intra- (ICC=0.81) and inter-rater (ICC=0.83) measurements with low SEM values. Pigmentation values showed excellent reliability on scar tissue for both intra- (ICC=0.97) and inter-rater (ICC=0.97) with moderate SEM values. The Scarbase Duo® TEWL function had excellent correlation with the Dermalab® (r=0.93) whilst the colorimetry erythema value had moderate correlation with the Minolta Chromameter (r=0.72). The Scarbase Duo® is a reliable and objective scar assessment tool, which is specifically designed for burn scars. However, for clinical use, standardised measurement conditions are recommended.
Assessing and quantifying inter-rater variation for dichotomous ratings using a Rasch model.
Petersen, Jørgen Holm; Larsen, Klaus; Kreiner, Svend
2012-12-01
We present a new model-based approach to the analysis of agreement between raters in a situation where all raters have supplied dichotomous ratings of the same cases in a sample. The model is a logistic regression model with random effects--a Rasch model. In the rater setting, the Rasch model includes parameters that allow raters to have different propensities to score a given set of individuals positively or negatively--the rater bias. An exact score test of the hypothesis of no rater bias is proposed and is shown to be an exact generalised McNemar's test. Based on the model, we suggest quantifying the rater variation as a suitable measure of the variation of the rater odds ratios. An important example, which serves to motivate and illustrate the proposed model, is the study of umbilical artery Doppler velocimetry, used by obstetricians to assess the status of a foetus. The purpose of the assessment is to improve the foetus' chance of survival by choosing the optimal time of elective delivery. In the study, data related to 139 perinatal deaths were sent to 32 experts who were asked whether the use of Doppler velocimetry might have prevented each death.
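For two raters, the exact score test described above reduces to the exact (binomial) McNemar test on the discordant pairs; under the hypothesis of no rater bias, the count of one discordance type is Binomial(b + c, 1/2). A minimal sketch of that two-rater special case (the general multi-rater version in the paper is not reproduced here):

```python
from math import comb

def exact_mcnemar_p(b, c):
    """Exact two-sided McNemar test on the discordant pairs.

    b = cases rater 1 scored positive and rater 2 negative;
    c = the reverse.  Under no rater bias, b ~ Binomial(b + c, 1/2).
    """
    n = b + c
    k = min(b, c)
    # two-sided tail probability of a symmetric binomial
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2.0 * tail)

p = exact_mcnemar_p(b=12, c=4)   # raters disagree mostly in one direction
```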
Exploring the role of first impressions in rater-based assessments.
Wood, Timothy J
2014-08-01
Medical education relies heavily on assessment formats that require raters to assess the competence and skills of learners. Unfortunately, there are often inconsistencies and variability in the scores raters assign. To ensure the scores from these assessment tools have validity, it is important to understand the underlying cognitive processes that raters use when judging the abilities of their learners. The goal of this paper, therefore, is to contribute to a better understanding of the cognitive processes used by raters. Representative findings from the social judgment and decision making, cognitive psychology, and educational measurement literature will be used to illuminate the underpinnings of these rater-based assessments. Of particular interest is the impact that judgments referred to as first impressions (or thin slices) have on rater-based assessments. These are judgments about people made very quickly and based on very little information. A narrative review will provide a synthesis of research in these three literatures (social judgment and decision making, educational psychology, and cognitive psychology) and will focus on the underlying cognitive processes, the accuracy and the impact of first impressions on rater-based assessments. The application of these findings to the types of rater-based assessments used in medical education will then be reviewed. Gaps in understanding will be identified and suggested directions for future research studies will be discussed.
Paulsen, Robert; Gallu, Tommaso; Gilkey, David; Reiser, Raoul; Murgia, Lelia; Rosecrance, John
2015-11-01
The purpose of this study was to characterize the inter-rater reliability of two physical exposure assessment methods of the upper extremity, the Strain Index (SI) and Occupational Repetitive Actions (OCRA) Checklist. These methods are commonly used in occupational health studies and by occupational health practitioners. Seven raters used the SI and OCRA Checklist to assess task-level physical exposures to the upper extremity of workers performing 21 cheese manufacturing tasks. Inter-rater reliability was characterized using a single-measure, agreement-based intraclass correlation coefficient (ICC). Inter-rater reliability of SI assessments was moderate to good (ICC = 0.59, 95% CI: 0.45-0.73), a similar finding to prior studies. Inter-rater reliability of OCRA Checklist assessments was excellent (ICC = 0.80, 95% CI: 0.70-0.89). Task complexity had a small, but non-significant, effect on the inter-rater reliability of SI and OCRA Checklist scores. Both the SI and OCRA Checklist assessments possess adequate inter-rater reliability for the purposes of occupational health research and practice. The OCRA Checklist inter-rater reliability scores were among the highest reported in the literature for semi-quantitative physical exposure assessment tools of the upper extremity. The OCRA Checklist, however, required more training time and more time to conduct the risk assessments compared to the SI.
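The single-measure, agreement-based ICC used in studies like this one can be computed from two-way ANOVA mean squares. A sketch of ICC(2,1) in the Shrout-Fleiss scheme, assuming a complete subjects-by-raters table (this is the standard formula, not the authors' own code):

```python
def icc_a1(scores):
    """Two-way random-effects, absolute-agreement, single-measure ICC
    (ICC(2,1) in the Shrout & Fleiss scheme).

    `scores` is a list of rows, one per subject, each holding one rating
    per rater.  Computed from the standard ANOVA mean squares.
    """
    n = len(scores)          # subjects
    k = len(scores[0])       # raters
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]
    ssr = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ssc = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    sst = sum((x - grand) ** 2 for row in scores for x in row)
    sse = sst - ssr - ssc                                # residual
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

ratings = [[1, 2], [2, 3], [3, 4], [4, 5]]   # 4 subjects, 2 raters, constant offset
icc = icc_a1(ratings)
```

Note that the constant offset between the two raters pulls the agreement ICC below 1 even though the raters are perfectly consistent; that is exactly the distinction between agreement-based and consistency-based ICCs.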
Tilt error in cryospheric surface radiation measurements at high latitudes: a model study
NASA Astrophysics Data System (ADS)
Bogren, W. S.; Burkhart, J. F.; Kylling, A.
2015-08-01
We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in-situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response foreoptic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can respectively introduce up to 2.6, 7.7, and 12.8 % error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo.
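For the direct beam, the worst-case tilt error has a simple closed form: a horizontal sensor measures E·cos(SZA), while one tilted by β toward the sun in the principal plane measures E·cos(SZA − β). A sketch of that geometry, ignoring the diffuse component (which is why the numbers it produces slightly exceed the total-irradiance errors quoted above):

```python
import math

def direct_tilt_error(sza_deg, tilt_deg):
    """Worst-case relative error in measured direct irradiance for a sensor
    tilted by tilt_deg toward the sun in the solar principal plane.

    A horizontal cosine-response sensor sees E * cos(SZA); the tilted one
    sees E * cos(SZA - tilt).  Diffuse light is ignored, so this is an
    upper bound on the total-irradiance error.
    """
    sza = math.radians(sza_deg)
    tilt = math.radians(tilt_deg)
    return math.cos(sza - tilt) / math.cos(sza) - 1.0

# fractional errors at a 60-degree solar zenith angle, as in the abstract
errors = {t: direct_tilt_error(60.0, t) for t in (1, 3, 5)}
```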
Introducing a new definition of a near fall: intra-rater and inter-rater reliability.
Maidan, I; Freedman, T; Tzemah, R; Giladi, N; Mirelman, A; Hausdorff, J M
2014-01-01
Near falls (NFs) are more frequent than falls, and may occur before falls, potentially predicting fall risk. As such, identification of a NF is important. We aimed to assess intra- and inter-rater reliability of the traditional definition of a NF and to demonstrate the potential utility of a new definition. To this end, 10 older adults, 10 idiopathic elderly fallers, and 10 patients with Parkinson's disease (PD) walked through an obstacle course while wearing a safety harness. All walks were videotaped. Forty-nine video segments were extracted to create 2 clips each of 8.48 min. Four raters scored each event using the traditional definition and, two weeks later, using the new definition. A fifth rater used only the new definition. Intra-rater reliability was determined using Kappa (K) statistics and inter-rater reliability was determined using ICC. Using the traditional definition, three raters had poor intra-rater reliability (K<0.054, p>0.137) and one rater had moderate intra-rater reliability (K=0.624, p<0.001). With the traditional definition, inter-rater reliability between the four raters was moderate (ICC=0.667, p<0.001). In contrast, the new NF definition showed high intra-rater (K>0.601, p<0.001) and excellent inter-rater reliability (ICC=0.815, p<0.001). A priori, it is easy to distinguish falls from usual walking and NFs, but it is more challenging to distinguish NFs from obstacle negotiation and usual walking. Therefore, a more precise definition of NF is required. The results of the present study suggest that the proposed new definition increases intra- and inter-rater reliability, a critical step for using NFs to quantify fall risk. Copyright © 2013 Elsevier B.V. All rights reserved.
Cole, David A; Preacher, Kristopher J
2014-06-01
Despite clear evidence that manifest variable path analysis requires highly reliable measures, path analyses with fallible measures are commonplace even in premier journals. Using fallible measures in path analysis can cause several serious problems: (a) As measurement error pervades a given data set, many path coefficients may be either over- or underestimated. (b) Extensive measurement error diminishes power and can prevent invalid models from being rejected. (c) Even a little measurement error can cause valid models to appear invalid. (d) Differential measurement error in various parts of a model can change the substantive conclusions that derive from path analysis. (e) All of these problems become increasingly serious and intractable as models become more complex. Methods to prevent and correct these problems are reviewed. The conclusion is that researchers should use more reliable measures (or correct for measurement error in the measures they do use), obtain multiple measures for use in latent variable modeling, and test simpler models containing fewer variables.
Linden, Ariel
2015-01-01
The patient activation measure (PAM) is an increasingly popular instrument used as the basis for interventions to improve patient engagement and as an outcome measure to assess intervention effect. However, a PAM score may be calculated when there are missing responses, which could lead to substantial measurement error. In this paper, measurement error is systematically estimated across the full possible range of missing items (one to twelve), using simulation in which populated items were randomly replaced with missing data for each of 1,138 complete surveys obtained in a randomized controlled trial. The PAM score was then calculated, followed by comparisons of overall simulated average mean, minimum, and maximum PAM scores to the true PAM score in order to assess the absolute percentage error (APE) for each comparison. With only one missing item, the average APE was 2.5% comparing the true PAM score to the simulated minimum score and 4.3% compared to the simulated maximum score. APEs increased with additional missing items, such that surveys with 12 missing items had average APEs of 29.7% (minimum) and 44.4% (maximum). Several suggestions and alternative approaches are offered that could be pursued to improve measurement accuracy when responses are missing.
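The simulation design above (randomly blanking items in complete surveys and comparing the resulting score to the true score) is easy to reproduce. A minimal sketch in Python; the simple item-mean scoring below is a stand-in for the actual PAM scoring algorithm, which is not reproduced here, and the 13-item survey values are invented:

```python
import random

def ape_with_missing(responses, n_missing, rng):
    """Absolute percentage error of a mean-based survey score when
    n_missing randomly chosen items are dropped.

    The plain item mean is a hypothetical stand-in for the PAM scoring,
    used only to illustrate the simulation method.
    """
    true_score = sum(responses) / len(responses)
    kept = rng.sample(responses, len(responses) - n_missing)
    return abs(sum(kept) / len(kept) - true_score) / true_score * 100.0

rng = random.Random(0)
survey = [1, 2, 2, 3, 4, 3, 2, 1, 4, 3, 2, 3, 4]   # one hypothetical 13-item survey
# average APE over repeated random draws, as in the paper's design
avg_ape_1 = sum(ape_with_missing(survey, 1, rng) for _ in range(1000)) / 1000
avg_ape_12 = sum(ape_with_missing(survey, 12, rng) for _ in range(1000)) / 1000
```

As in the study, the average APE grows sharply with the number of missing items, since fewer answered items leave more room for the score to drift from its complete-survey value.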
The impact of response measurement error on the analysis of designed experiments
Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee
2016-11-01
This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Wilson, M. D.; Durand, M.; Jung, H. C.; Alsdorf, D.
2015-04-01
The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface-water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water-surface elevations. In this paper, we aimed to (i) characterise and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water-surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a virtual mission for a ~260 km reach of the central Amazon (Solimões) River, using a hydraulic model to provide water-surface elevations according to SWOT spatio-temporal sampling to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water-surface elevation measurements for the Amazon main stem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths greater than 4 km for the Solimões and 7.5 km for Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-sectional averaging and 20 km reach lengths, results show Nash-Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6 and 19.1 % average overall error in discharge, respectively. We extend the results to other rivers worldwide and infer that SWOT-derived discharge estimates may be more accurate for rivers with larger channel widths (permitting a greater level of cross-channel averaging).
(Sample) Size Matters: Defining Error in Planktic Foraminiferal Isotope Measurement
NASA Astrophysics Data System (ADS)
Lowery, C.; Fraass, A. J.
2015-12-01
Planktic foraminifera have been used as carriers of stable isotopic signals since the pioneering work of Urey and Emiliani. In those heady days, instrumental limitations required hundreds of individual foraminiferal tests to return a usable value. This had the fortunate side-effect of smoothing any seasonal to decadal changes within the planktic foram population, which generally turns over monthly, removing that potential noise from each sample. With the advent of more sensitive mass spectrometers, smaller sample sizes have now become standard. This has been a tremendous advantage, allowing longer time series with the same investment of time and energy. Unfortunately, the use of smaller numbers of individuals to generate a data point has lessened the amount of time averaging in the isotopic analysis and decreased precision in paleoceanographic datasets. With fewer individuals per sample, the differences between individual specimens will result in larger variation, and therefore error, and less precise values for each sample. Unfortunately, most workers (the authors included) do not make a habit of reporting the error associated with their sample size. We have created an open-source model in R to quantify the effect of sample sizes under various realistic and highly modifiable parameters (calcification depth, diagenesis in a subset of the population, improper identification, vital effects, mass, etc.). For example, a sample in which only 1 in 10 specimens is diagenetically altered can be off by >0.3‰ δ18O VPDB or ~1°C. Additionally, and perhaps more importantly, we show that under unrealistically ideal conditions (perfect preservation, etc.) it takes ~5 individuals from the mixed-layer to achieve an error of less than 0.1‰. Including just the unavoidable vital effects inflates that number to ~10 individuals to achieve ~0.1‰. Combining these errors with the typical machine error inherent in mass spectrometers makes this a vital consideration moving forward.
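The authors' model is in R; the core idea, that the typical error of a sample mean shrinks roughly as pop_sd/sqrt(n), can be sketched with a small Monte Carlo in Python. The population standard deviation and sample sizes below are assumptions for illustration, not the authors' calibrated values:

```python
import random

def mean_abs_error(pop_sd, n, trials, seed=42):
    """Monte Carlo estimate of the typical |error| (per mil) of a sample-mean
    d18O value built from n individual foraminifera, each drawn from a
    population with standard deviation pop_sd.

    A deliberately simplified stand-in for the paper's R model: no
    diagenesis, misidentification, or vital-effect terms.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        sample = [rng.gauss(0.0, pop_sd) for _ in range(n)]
        total += abs(sum(sample) / n)
    return total / trials

# typical error shrinks roughly as pop_sd / sqrt(n)
e5 = mean_abs_error(pop_sd=0.25, n=5, trials=2000)
e20 = mean_abs_error(pop_sd=0.25, n=20, trials=2000)
```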
The intra-rater reliability of a revised 3-point grading system for accessory joint mobilizations.
Ward, Jennifer; Hebron, Clair; Petty, Nicola J
2017-09-01
Joint mobilizations are often quantified using a 4-point grading system based on the physiotherapist's detection of resistance. It is suggested that the initial resistance to joint mobilizations is imperceptible to physiotherapists, but that at some point through range it becomes perceptible, a point termed R1. Grades of mobilization traditionally hinge around this concept and are performed either before or after R1. Physiotherapists, however, show poor reliability in applying grades of mobilization. The definition of R1 is ambiguous and dependent on the skills of the individual physiotherapist. The aim of this study is to test a revised grading system where R1 is considered at the beginning of range, and the entire range, as perceived by the physiotherapist's maximum force application, is divided into three, creating 3 grades of mobilization. Thirty-two post-registration physiotherapists and nineteen pre-registration students assessed end of range (point R2) and then applied 3 grades of AP mobilizations over the talus in an asymptomatic model's ankle. Vertical forces were recorded through a force platform. Intra-class Correlation Coefficients, Standard Error of Measurement, and Minimal Detectable Change were calculated to explore intra-rater reliability on intra-day and inter-day testing. T-tests determined group differences. Intra-rater reliability was excellent for intra-day testing (ICC 0.96-0.97), and inter-day testing (ICC 0.85-0.93). No statistical difference was found between pre- and post-registration groups. Standardizing the definition of grades of mobilization, by moving R1 to the beginning of range and separating grades into thirds, results in excellent intra-rater reliability on intra-day and inter-day tests. 3b.
ERIC Educational Resources Information Center
Shear, Benjamin R.; Zumbo, Bruno D.
2013-01-01
Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
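The inflation mechanism can be illustrated with a small simulation: when one predictor is measured with error, it no longer fully controls for its construct, so a correlated second predictor with a truly null coefficient absorbs the residual signal and is falsely declared significant. The sketch below is stdlib-only; the sample size, correlation structure, and reliability are arbitrary choices for illustration, not the article's scenarios.

```python
import random, math

def ols2_t2(y, x1, x2):
    """t statistic for the coefficient on x2 in y ~ 1 + x1 + x2 (centered OLS)."""
    n = len(y)
    my, m1, m2 = sum(y)/n, sum(x1)/n, sum(x2)/n
    yc = [v - my for v in y]
    a = [v - m1 for v in x1]
    b = [v - m2 for v in x2]
    Saa = sum(u*u for u in a); Sbb = sum(u*u for u in b)
    Sab = sum(u*v for u, v in zip(a, b))
    Say = sum(u*v for u, v in zip(a, yc)); Sby = sum(u*v for u, v in zip(b, yc))
    det = Saa*Sbb - Sab*Sab
    b1 = (Say*Sbb - Sby*Sab) / det
    b2 = (Sby*Saa - Say*Sab) / det
    rss = sum((yv - b1*av - b2*bv)**2 for yv, av, bv in zip(yc, a, b))
    se2 = math.sqrt((rss / (n - 3)) * Saa / det)   # SE of b2
    return b2 / se2

def rejection_rate(reliability, n=100, n_sims=500, seed=7):
    """Share of simulations rejecting H0: beta_x2 = 0 (which is true) when x1 is noisy."""
    rng = random.Random(seed)
    err_sd = math.sqrt((1 - reliability) / reliability)  # observed x1 has the stated reliability
    hits = 0
    for _ in range(n_sims):
        x1 = [rng.gauss(0, 1) for _ in range(n)]
        x2 = [0.7*v + 0.3*rng.gauss(0, 1) for v in x1]   # x2 correlated with true x1
        y = [1.0*v + rng.gauss(0, 1) for v in x1]        # y depends on x1 only
        x1_obs = [v + rng.gauss(0, err_sd) for v in x1]  # x1 measured with error
        if abs(ols2_t2(y, x1_obs, x2)) > 1.96:
            hits += 1
    return hits / n_sims

r_clean = rejection_rate(1.0)   # no measurement error: near the nominal 5%
r_noisy = rejection_rate(0.6)   # reliability 0.6: drastically inflated
print(r_clean, r_noisy)
```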
Study on error analysis and accuracy improvement for aspheric profile measurement
NASA Astrophysics Data System (ADS)
Gao, Huimin; Zhang, Xiaodong; Fang, Fengzhou
2017-06-01
Aspheric surfaces are important to optical systems and need high-precision surface metrology. Stylus profilometry is currently the most common approach to measuring axially symmetric elements. However, if the asphere has rotational alignment errors, the wrong cresting point will be located, yielding significantly incorrect surface errors. This paper studied the simulated results for an asphere with rotational angles around the X-axis and Y-axis, and the stylus tip shift in the X, Y and Z directions. Experimental results show that the same absolute value of rotational error around the X-axis causes the same profile errors, while different values of rotational error around the Y-axis cause profile errors with different tilt angles. Moreover, the greater the rotational errors, the bigger the peak-to-valley value of the profile errors. To identify the rotational angles around the X-axis and Y-axis, algorithms are performed to analyze the two rotational angles respectively. The actual profile errors are then calculated from multiple profile measurements around the X-axis according to the proposed analysis flow chart; the aim of the multiple-measurement strategy is to achieve the zero position of the X-axis rotational error. Finally, experimental results prove that the proposed algorithms achieve accurate profile errors for aspheric surfaces, avoiding both X-axis and Y-axis rotational errors, and a measurement strategy for aspheric surfaces is presented systematically.
Measurement accuracy of articulated arm CMMs with circular grating eccentricity errors
NASA Astrophysics Data System (ADS)
Zheng, Dateng; Yin, Sanfeng; Luo, Zhiyang; Zhang, Jing; Zhou, Taiping
2016-11-01
A model of the six circular grating eccentricity errors is proposed to improve the measurement accuracy of an articulated arm coordinate measuring machine (AACMM) without increasing the corresponding hardware cost. We analyzed the AACMM’s circular grating eccentricity and obtained the eccentricity error model parameters for the six joints by conducting circular grating eccentricity error experiments. We completed the calibration operations for the measurement models by using home-made standard bar components. Our results show that the measurement errors from the AACMM’s measurement model without and with circular grating eccentricity errors are 0.0834 mm and 0.0462 mm, respectively. Significantly, we determined that measurement accuracy increased by about 44.6% when the circular grating eccentricity errors were corrected. This study is significant because it promotes wider applications of AACMMs both in theory and in practice.
Sonderegger, Derek L; Wang, Haonan; Huang, Yao; Clements, William H
2009-10-01
The effect that measurement error of predictor variables has on regression inference is well known in the statistical literature. However, the influence of measurement error on the ability to quantify relationships between chemical stressors and biological responses has received little attention in ecotoxicology. We present a common data-collection scenario and demonstrate that the relationship between explanatory and response variables is consistently underestimated when measurement error is ignored. A straightforward extension of the regression calibration method is to use a nonparametric method to smooth the predictor variable with respect to another covariate (e.g., time) and to use the smoothed predictor to estimate the response variable. We conducted a simulation study to compare the effectiveness of the proposed method to the naive analysis that ignores measurement error. We conclude that the method satisfactorily addresses the problem when measurement error is moderate to large, and does not result in a noticeable loss of power in the case where measurement error is absent.
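The attenuation and its smoothing-based repair can be sketched in a few lines. Here the "nonparametric smooth" is a crude moving average over time, and the stressor, noise levels, and window size are invented for illustration; the paper's actual smoother and simulation design are not reproduced.

```python
import random, math

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x)/n, sum(y)/n
    return (sum((a-mx)*(b-my) for a, b in zip(x, y)) /
            sum((a-mx)**2 for a in x))

def moving_average(x, k):
    """Crude nonparametric smooth of the predictor over time (window of k points)."""
    h = k // 2
    return [sum(x[max(0, i-h):i+h+1]) / len(x[max(0, i-h):i+h+1])
            for i in range(len(x))]

rng = random.Random(3)
t = [i/200 for i in range(200)]
z = [math.sin(4*math.pi*u) for u in t]        # true stressor, smooth in time
x = [v + rng.gauss(0, 0.8) for v in z]        # stressor measured with error
y = [2.0*v + rng.gauss(0, 0.3) for v in z]    # response; true slope is 2.0

naive = slope(x, y)                           # attenuated toward zero
smoothed = slope(moving_average(x, 15), y)    # much closer to 2.0
print(round(naive, 2), round(smoothed, 2))
```

Smoothing works here because the measurement noise averages out within the window while the slowly varying true stressor does not.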
Introducing a new definition of a near fall: Intra-rater and inter-rater reliability
Maidan, I; Freedman, T; Tzemah, R; Giladi, N; Mirelman, A; Hausdorff, JM
2013-01-01
Near falls (NFs) are more frequent than falls, and may occur before falls, potentially predicting fall risk. As such, identification of a NF is important. We aimed to assess intra- and inter-rater reliability of the traditional definition of a NF and to demonstrate the potential utility of a new definition. To this end, 10 older adults, 10 idiopathic elderly fallers, and 10 patients with Parkinson’s disease (PD) walked through an obstacle course while wearing a safety harness. All walks were videotaped. 49 video segments were extracted to create 2 clips, each of 8.48 minutes. Four raters scored each event using the traditional definition and, two weeks later, using the new definition. A fifth rater used only the new definition. Intra-rater reliability was determined using Kappa (K) statistics and inter-rater reliability was determined using ICC. Using the traditional definition, three raters had poor intra-rater reliability (K<0.054, p>0.137) and one rater had moderate intra-rater reliability (K=0.624, p<0.001). With the traditional definition, inter-rater reliability between the four raters was moderate (ICC=0.667, p<0.001). In contrast, the new NF definition showed high intra-rater (K>0.601, p<0.001) and high inter-rater reliability (ICC=0.815, p<0.001). A priori, it is easy to distinguish falls from usual walking and NFs, but it is more challenging to distinguish NFs from obstacle negotiation and usual walking. Therefore, a more precise definition of NF is required. The results of the present study suggest that the proposed new definition increases intra- and inter-rater reliability, a critical step for using NFs to quantify fall risk. PMID:23972512
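The kappa statistic used for intra-rater reliability here corrects raw agreement for agreement expected by chance. A minimal implementation, applied to hypothetical labels (NF = near fall, W = usual walking, O = obstacle negotiation; the data are invented, not the study's):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two sets of categorical labels of the same events."""
    assert len(r1) == len(r2)
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)

a = ["NF", "W", "NF", "O", "W", "W", "NF", "O", "W", "NF"]
b = ["NF", "W", "O",  "O", "W", "W", "NF", "O", "W", "NF"]
k = cohens_kappa(a, b)
print(round(k, 3))
```

For intra-rater use, `r1` and `r2` would be the same rater's labels from the two scoring sessions two weeks apart.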
2013-01-01
Background Diagrammatic recording of finger joint angles by using two criss-crossed paper strips can be a quick substitute to the standard goniometry. As a preliminary step toward clinical validation of the diagrammatic technique, the current study employed healthy subjects and non-professional raters to explore whether reliability estimates of the diagrammatic goniometry are comparable with those of the standard procedure. Methods The study included two procedurally different parts, which were replicated by assigning 24 medical students to act interchangeably as 12 subjects and 12 raters. A larger component of the study was designed to compare goniometers side-by-side in measurement of finger joint angles varying from subject to subject. In the rest of the study, the instruments were compared by parallel evaluations of joint angles similar for all subjects in a situation of simulated change of joint range of motion over time. The subjects used special guides to position the joints of their left ring finger at varying angles of flexion and extension. The obtained diagrams of joint angles were converted to numerical values by computerized measurements. The statistical approaches included calculation of appropriate intraclass correlation coefficients, standard errors of measurements, proportions of measurement differences of 5 or less degrees, and significant differences between paired observations. Results Reliability estimates were similar for both goniometers. Intra-rater and inter-rater intraclass correlation coefficients ranged from 0.69 to 0.93. The corresponding standard errors of measurements ranged from 2.4 to 4.9 degrees. Repeated measurements of a considerable number of raters fell within clinically non-meaningful 5 degrees of each other in proportions comparable with a criterion value of 0.95. Data collected with both instruments could be similarly interpreted in a simulated situation of change of joint range of motion over time. Conclusions The paper
Ali, Zulfiqar; Yashchuk, Valeriy V.
2011-05-11
Systematic error and instrumental drift are the major limiting factors of sub-microradian slope metrology with state-of-the-art x-ray optics. Significant suppression of the errors can be achieved by using an optimal measurement strategy suggested in [Rev. Sci. Instrum. 80, 115101 (2009)]. With this series of LSBL Notes, we report on the development of an automated, kinematic, rotational system that provides fully controlled flipping, tilting, and shifting of a surface under test. The system is integrated into the Advanced Light Source long trace profiler, LTP-II, allowing for complete realization of the advantages of the optimal measurement strategy method. We provide details of the system’s design, operational control and data acquisition. The high performance of the system is demonstrated via the results of high-precision measurements with a spherical test mirror.
Errors in scatterometer-radiometer wind measurement due to rain
NASA Technical Reports Server (NTRS)
Moore, R. K.; Chaudhry, A. H.; Birrer, I. J.
1983-01-01
The behavior of radiometer corrections for the scatterometer is investigated by simulating simple situations using footprint sizes comparable with those used in the SEASAT-1 experiment and also actual footprints and rain rates from a hurricane observed by the SEASAT-1 system. The effects on correction due to attenuation and wind speed gradients are examined independently and jointly. It is shown that the error in the wind-speed estimate can be as large as 200% at higher wind speeds. The worst error occurs when the scatterometer footprint overlaps two or more radiometer footprints and the attenuation in the scatterometer footprint differs greatly from those in parts of the radiometer footprints. This problem could be overcome by using a true radiometer-scatterometer system having identical coincident footprints comparable in size with typical rain cells.
Computational Fluid Dynamics Analysis on Radiation Error of Surface Air Temperature Measurement
NASA Astrophysics Data System (ADS)
Yang, Jie; Liu, Qing-Quan; Ding, Ren-Hui
2017-01-01
Due to solar radiation effect, current air temperature sensors inside a naturally ventilated radiation shield may produce a measurement error that is 0.8 K or higher. To improve air temperature observation accuracy and correct historical temperature of weather stations, a radiation error correction method is proposed. The correction method is based on a computational fluid dynamics (CFD) method and a genetic algorithm (GA) method. The CFD method is implemented to obtain the radiation error of the naturally ventilated radiation shield under various environmental conditions. Then, a radiation error correction equation is obtained by fitting the CFD results using the GA method. To verify the performance of the correction equation, the naturally ventilated radiation shield and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated temperature measurement platform serves as an air temperature reference. The mean radiation error given by the intercomparison experiments is 0.23 K, and the mean radiation error given by the correction equation is 0.2 K. This radiation error correction method allows the radiation error to be reduced by approximately 87 %. The mean absolute error and the root mean square error between the radiation errors given by the correction equation and the radiation errors given by the experiments are 0.036 K and 0.045 K, respectively.
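The fitting step can be sketched with ordinary least squares in place of the paper's genetic algorithm, on synthetic "CFD-like" data. The functional form (error growing with solar irradiance S and shrinking with wind speed u) and every coefficient below are assumptions for illustration; the paper fits actual CFD results.

```python
import random

# synthetic data standing in for CFD runs: radiation error in K as a
# function of solar irradiance S (W/m^2) and wind speed u (m/s)
rng = random.Random(2)
data = []
for _ in range(200):
    S = rng.uniform(100, 1000)
    u = rng.uniform(0.5, 5.0)
    err = 0.0012 * S / (1 + u) + rng.gauss(0, 0.02)   # assumed "true" relationship
    data.append((S, u, err))

# fit err = c * S/(1+u) with closed-form one-parameter least squares
x = [S / (1 + u) for S, u, _ in data]
e = [err for _, _, err in data]
c = sum(a*b for a, b in zip(x, e)) / sum(a*a for a in x)

resid = [abs(ei - c*xi) for xi, ei in zip(x, e)]
mean_abs = sum(resid) / len(resid)
print(round(c, 5), round(mean_abs, 3))
```

Once fitted, the correction is simply subtracting `c * S/(1+u)` from each shield reading, which is the role the paper's GA-fitted equation plays.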
Ambros Berger; Thomas Gschwantner; Ronald E. McRoberts; Klemens. Schadauer
2014-01-01
National forest inventories typically estimate individual tree volumes using models that rely on measurements of predictor variables such as tree height and diameter, both of which are subject to measurement error. The aim of this study was to quantify the impacts of these measurement errors on the uncertainty of the model-based tree stem volume estimates. The impacts...
Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario
2016-01-01
Coordinate measuring machines (CMM) are main instruments of measurement in laboratories and in industrial quality control. A compensation error model has been formulated (Part I). It integrates error and uncertainty in the feature measurement model. Experimental implementation for the verification of this model is carried out based on the direct testing on a moving bridge CMM. The regression results by axis are quantified and compared to CMM indication with respect to the assigned values of the measurand. Next, testing of selected measurements of length, flatness, dihedral angle, and roundness features are accomplished. The measurement of calibrated gauge blocks for length or angle, flatness verification of the CMM granite table and roundness of a precision glass hemisphere are presented under a setup of repeatability conditions. The results are analysed and compared with alternative methods of estimation. The overall performance of the model is endorsed through experimental verification, as well as the practical use and the model capability to contribute in the improvement of current standard CMM measuring capabilities. PMID:27754441
Lim, Hoon Chin Steven; Salandanan, Edgar Azada; Phillips, Rachel; Tan, Jun Guan; Hezan, Muhammad Azmi
2015-10-01
Identification of the J-point and measurement of ST segment elevation at the J-point are important for the diagnosis of ST-elevation myocardial infarction (STEMI). We conducted a study to determine the inter-rater reliability (IRR) of J-point location and measurement of the magnitude of ST elevation at the J-point on ECGs of patients with STEMI by emergency department (ED) doctors. Each participant examined 20 STEMI ECGs during a 1-month period in 2013. The participants were required to locate the J-point by selecting the small 1 mm square within which the J-point is located and measure the magnitude of ST elevation at the J-point identified (rounded up to the nearest 0.5 mm). The intraclass correlation coefficient (ICC) was calculated to assess the IRR. Thirty doctors participated. The ICC assessing the degree to which all participants provided agreement in their assessment of the location of J-points across ECGs was 0.85 (95% CI 0.75 to 0.93), which is in the excellent range. The ICC for assessing the magnitude of ST elevation was 0.97 (95% CI 0.94 to 0.98), indicating excellent agreement as well. ED doctors show a high level of agreement when determining the location of J-points and measuring the magnitude of ST elevation at those J-points on ECGs of patients with STEMI. The findings support the measurement of ST segment elevation at the J-point in STEMI cases and should be regarded as a consistent standard to avoid confusion.
Error analysis in the measurement of average power with application to switching controllers
NASA Technical Reports Server (NTRS)
Maisel, J. E.
1980-01-01
Power measurement errors due to the bandwidth of a power meter and the sampling of the input voltage and current of a power meter were investigated assuming sinusoidal excitation and periodic signals generated by a model of a simple chopper system. Errors incurred in measuring power using a microcomputer with limited data storage were also considered. The behavior of the power measurement error due to the frequency responses of first order transfer functions between the input sinusoidal voltage, input sinusoidal current, and the signal multiplier was studied. Results indicate that this power measurement error can be minimized if the frequency responses of the first order transfer functions are identical. The power error analysis was extended to include the power measurement error for a model of a simple chopper system with a power source and an ideal shunt motor acting as an electrical load for the chopper. The behavior of the power measurement error was determined as a function of the chopper's duty cycle and back EMF of the shunt motor. Results indicate that the error is large when the duty cycle or back EMF is small. Theoretical and experimental results indicate that the power measurement error due to sampling of sinusoidal voltages and currents becomes excessively large when the number of observation periods approaches one-half the size of the microcomputer data memory allocated to the storage of either the input sinusoidal voltage or current.
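The sampling effect described can be reproduced directly: for v = V·sin(ωt) and i = I·sin(ωt + φ), averaging the sampled product over an integer number of periods recovers the true average power (V·I/2)·cos φ exactly, because the double-frequency term in v·i averages to zero, while a fractional observation window leaves a residual error from that term. A sketch (amplitudes, phase, and sample counts are arbitrary):

```python
import math

def sampled_power(v_amp, i_amp, phase, n_samples, n_periods):
    """Average power estimated from n_samples of v(t)*i(t) spanning n_periods."""
    total = 0.0
    for k in range(n_samples):
        t = n_periods * 2 * math.pi * k / n_samples
        total += v_amp * math.sin(t) * i_amp * math.sin(t + phase)
    return total / n_samples

true_p = 10 * 2 / 2 * math.cos(math.pi / 6)          # (V*I/2) * cos(phi)
est = sampled_power(10, 2, math.pi / 6, 1000, 3)     # integer periods: exact
est_frac = sampled_power(10, 2, math.pi / 6, 1000, 2.25)  # fractional window: biased
print(est, est_frac, true_p)
```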
Measuring nursing error: psychometrics of MISSCARE and practice and professional issues items.
Castner, Jessica; Dean-Baar, Susan
2014-01-01
Health care error causes inpatient morbidity and mortality. This study pooled the items from preexisting nursing error questionnaires and tested the psychometric properties of modified subscales from these item combinations. Items from MISSCARE Part A, Part B, and the Practice and Professional Issues were collected from 556 registered nurses. Principal component analyses were completed for items measuring (a) nursing error and (b) antecedents to error. Acceptable factor loadings and internal consistency reliability (.70-.89) were found for subscales Acute Care Missed Nursing Care, Errors of Commission, Workload, Supplies Problems, and Communication Problems. The findings support the use of 5 subscales to measure nursing error and antecedents to error in various inpatient unit types with acceptable validity and reliability. The Activities of Daily Living (ADL) Omissions subscale is not appropriate for all inpatient unit types.
Inter- and intra-rater reliability of the GAITRite system among individuals with sub-acute stroke.
Wong, Jennifer S; Jasani, Hardika; Poon, Vivien; Inness, Elizabeth L; McIlroy, William E; Mansfield, Avril
2014-01-01
Technology-based assessment tools with semi-automated processing, such as pressure-sensitive mats used for gait assessment, may be considered to be objective; therefore it may be assumed that rater reliability is not a concern. However, user input is often required and rater reliability must be determined. The purpose of this study was to assess the inter- and intra-rater reliability of spatial and temporal characteristics of gait in stroke patients using the GAITRite system. Forty-six individuals with stroke attending in-patient rehabilitation walked across the pressure-sensitive mat 2-4 times at preferred walking speeds, with or without a gait aid. Five raters independently processed gait data. Three raters re-processed the data after a delay of at least one month. The intraclass correlation coefficients (ICC) and 95% confidence intervals of the ICC were determined for velocity, step time, step length, and step width. Inter-rater reliability for velocity, step time, and step length were high (ICC>0.90). Intra-rater reliability was generally greater than inter-rater reliability (0.81 to >0.99 for inter-rater versus 0.77 to >0.99 for intra-rater reliability). Overall, this study suggests that GAITRite is a reliable assessment tool; however, subjectivity remains in processing the data, and no patient had perfect agreement between raters. Additional logic checking within the processing software or standardization of training could help to reduce potential errors in processing.
van Lummel, Rob C; Walgaard, Stefan; Hobert, Markus A; Maetzler, Walter; van Dieën, Jaap H; Galindo-Garre, Francisca; Terwee, Caroline B
2016-01-01
The "Timed Up and Go" (TUG) is a widely used measure of physical functioning in older people and in neurological populations, including Parkinson's Disease. When using an inertial sensor measurement system (instrumented TUG [iTUG]), the individual components of the iTUG and the trunk kinematics can be measured separately, which may provide relevant additional information. The aim of this study was to determine intra-rater, inter-rater and test-retest reliability of the iTUG in patients with Parkinson's Disease. Twenty-eight PD patients, aged 50 years or older, were included. For the iTUG the DynaPort Hybrid (McRoberts, The Hague, The Netherlands) was worn at the lower back. The device measured acceleration and angular velocity in three directions at a rate of 100 samples/s. Patients performed the iTUG five times on two consecutive days. Repeated measurements by the same rater on the same day were used to calculate intra-rater reliability. Repeated measurements by different raters on the same day were used to calculate inter-rater reliability. Repeated measurements by the same rater on different days were used to calculate test-retest reliability. Nineteen ICC values (15%) were ≥ 0.9 which is considered as excellent reliability. Sixty four ICC values (49%) were ≥ 0.70 and < 0.90 which is considered as good reliability. Thirty one ICC values (24%) were ≥ 0.50 and < 0.70, indicating moderate reliability. Sixteen ICC values (12%) were ≥ 0.30 and < 0.50 indicating poor reliability. Two ICC values (2%) were < 0.30 indicating very poor reliability. In conclusion, in patients with Parkinson's disease the intra-rater, inter-rater, and test-retest reliability of the individual components of the instrumented TUG (iTUG) was excellent to good for total duration and for turning durations, and good to low for the sub durations and for the kinematics of the SiSt and StSi. The results of this fully automated analysis of instrumented TUG movements demonstrate
Rating the raters in a mixed model: An approach to deciphering the rater reliability
NASA Astrophysics Data System (ADS)
Shang, Junfeng; Wang, Yougui
2013-05-01
Rating the raters has attracted extensive attention in recent years. Ratings are quite complex in that the subjective assessment and a number of criteria are involved in a rating system. Whenever the human judgment is a part of ratings, the inconsistency of ratings is the source of variance in scores, and it is therefore quite natural for people to verify the trustworthiness of ratings. Accordingly, estimation of the rater reliability will be of great interest and an appealing issue. To facilitate the evaluation of the rater reliability in a rating system, we propose a mixed model where the scores of the ratees offered by a rater are described with the fixed effects determined by the ability of the ratees and the random effects produced by the disagreement of the raters. In such a mixed model, for the rater random effects, we derive its posterior distribution for the prediction of random effects. To quantitatively make a decision in revealing the unreliable raters, the predictive influence function (PIF) serves as a criterion which compares the posterior distributions of random effects between the full data and rater-deleted data sets. The benchmark for this criterion is also discussed. This proposed methodology of deciphering the rater reliability is investigated in the multiple simulated and two real data sets.
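A toy, moment-based version of the idea (identifying an unreliable rater from a ratee-by-rater score table) can be sketched as follows. This is a drastic simplification of the paper's posterior and predictive-influence-function machinery, and every number below is invented.

```python
import random
import statistics

# simulate scores: ratee ability (fixed effect) + rater effect (random) + noise
rng = random.Random(9)
n_ratees, n_raters = 40, 6
ability = [rng.gauss(50, 10) for _ in range(n_ratees)]
rater_shift = [rng.gauss(0, 2) for _ in range(n_raters)]
rater_shift[3] = 12.0                      # plant one very lenient (unreliable) rater

scores = [[ability[i] + rater_shift[j] + rng.gauss(0, 3)
           for j in range(n_raters)] for i in range(n_ratees)]

grand = statistics.fmean(v for row in scores for v in row)
# each rater's mean deviation from the grand mean estimates that rater's effect
dev = [statistics.fmean(scores[i][j] for i in range(n_ratees)) - grand
       for j in range(n_raters)]
flagged = max(range(n_raters), key=lambda j: abs(dev[j]))
print(flagged, round(dev[flagged], 1))
```

The paper's PIF criterion instead compares posterior distributions of the random effects with and without each rater's data, which remains meaningful even when raters score different, unbalanced subsets of ratees.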
Compensation method for the alignment angle error of a gear axis in profile deviation measurement
NASA Astrophysics Data System (ADS)
Fang, Suping; Liu, Yongsheng; Wang, Huiyi; Taguchi, Tetsuya; Takeda, Ryuhei
2013-05-01
In the precision measurement of involute helical gears, the alignment angle error of a gear axis, which was caused by the assembly error of a gear measuring machine, will affect the measurement accuracy of profile deviation. A model of the involute helical gear is established under the condition that the alignment angle error of the gear axis exists. Based on the measurement theory of profile deviation, without changing the initial measurement method and data process of the gear measuring machine, a compensation method is proposed for the alignment angle error of the gear axis that is included in profile deviation measurement results. Using this method, the alignment angle error of the gear axis can be compensated for precisely. Some experiments that compare the residual alignment angle error of a gear axis after compensation for the initial alignment angle error were performed to verify the accuracy and feasibility of this method. Experimental results show that the residual alignment angle error of a gear axis included in the profile deviation measurement results is decreased by more than 85% after compensation, and this compensation method significantly improves the measurement accuracy of the profile deviation of involute helical gear.
Inter- and intra-rater agreement of static posture analysis using a mobile application
Boland, David M.; Neufeld, Eric V.; Ruddell, Jack; Dolezal, Brett A.; Cooper, Christopher B.
2016-01-01
[Purpose] To determine the intra- and inter-rater agreement of a mobile application, PostureScreen Mobile® (PSM), that assesses static standing posture. [Subjects and Methods] Three examiners with different levels of experience of assessing posture, one licensed physical therapist and two untrained undergraduate students, performed repeated postural assessments of 10 subjects, fully clothed or minimally clothed, using PSM on two nonconsecutive days. Anterior and right lateral images were captured and seventeen landmarks were identified on them. Intraclass correlation coefficients (ICCs) were calculated for each of 13 postural measures to evaluate inter-rater agreement on the first visit (fully or minimally clothed), as well as intra-rater agreement between the first and second visits (minimally clothed). [Results] Eleven postural measures were ultimately analyzed for inter- and intra-rater agreement. Inter-rater agreement was almost perfect (ICC≥0.81) for four measures and substantial (0.60
Analysis of measured data of human body based on error correcting frequency
NASA Astrophysics Data System (ADS)
Jin, Aiyan; Peipei, Gao; Shang, Xiaomei
2014-04-01
Anthropometry measures all parts of the human body surface, and the measured data are the basis for analysis and study of the human body, for the establishment and modification of garment sizes, and for the design and operation of online clothing stores. In this paper, several groups of measured data are obtained, and the data error is analyzed using error frequency together with the analysis-of-variance method from mathematical statistics. Determination of the measured data accuracy and of the difficult-to-measure parts of the human body, further study of the causes of data errors, and a summary of the key points for minimizing errors are also presented. By analyzing measured data on the basis of error frequency, this paper provides reference material to support the development of the garment industry.
NASA Astrophysics Data System (ADS)
Albayari, Diya'J.; Gobithaasan, R. U.; Miura, Kenjiro T.
2016-10-01
Cross and Cripps [2] approximated the generalized Cornu spiral (GCS) with G3 quintic Bezier curves based on a curvature error measure, with the curvatures computed by means of the parameterized arc length. However, this measure is computationally expensive. To overcome this problem, Lu [6] suggested another error measure which reduces computation time. However, Cross and Cripps's error measure was still needed in order to assess the approximation quality. In this paper we propose a new approach to computing the error measure by making a correspondence between the general parameter t and the arc length parameter s. Numerical examples show that this error measure reduces both time and computation to a certain extent, and preserves the approximation quality obtained by Cross and Cripps.
Modeling Active Beacon Collision Avoidance System (BCAS) Measurement Errors: An Empirical Approach.
1980-05-01
error modeling. As a result, only a small part of the data included theodolite measurements. The theodolite measurements were required to accurately...range error for a vertically maneuvering intruder. However, theodolite (true) position measurements were not available nor were own aircraft...intruder data was patterned after the level-flight data analysis and is included in appendix B. "True" measurements (theodolite measurements) and ECAS
Measure short separation for space debris based on radar angle error measurement information
NASA Astrophysics Data System (ADS)
Zhang, Yao; Wang, Qiao; Zhou, Lai-jian; Zhang, Zhuo; Li, Xiao-long
2016-11-01
With increasingly frequent human activity in space, the number of dead satellites and pieces of space debris has increased dramatically, bringing greater risks to operational spacecraft. However, current equipment for measuring the separation between space targets suffers from many problems, such as high development cost or limited conditions of use. To address this problem, we use the radar's multi-target angle error measurement information and, combining the geometric relationship between the targets and the radar station's line of sight, build a horizontal distance decoding model. By increasing the signal quantization bit depth and adopting timing synchronization and outlier processing methods, we improve the measurement precision to satisfy the requirement of multi-target short-range measurement; the efficiency of the approach is also analyzed. A validation test demonstrates the feasibility and effectiveness of the proposed methods.
Detecting bit-flip errors in a logical qubit using stabilizer measurements
Ristè, D.; Poletto, S.; Huang, M.-Z.; Bruno, A.; Vesterinen, V.; Saira, O.-P.; DiCarlo, L.
2015-01-01
Quantum data are susceptible to decoherence induced by the environment and to errors in the hardware processing it. A future fault-tolerant quantum computer will use quantum error correction to actively protect against both. In the smallest error correction codes, the information in one logical qubit is encoded in a two-dimensional subspace of a larger Hilbert space of multiple physical qubits. For each code, a set of non-demolition multi-qubit measurements, termed stabilizers, can discretize and signal physical qubit errors without collapsing the encoded information. Here using a five-qubit superconducting processor, we realize the two parity measurements comprising the stabilizers of the three-qubit repetition code protecting one logical qubit from physical bit-flip errors. While increased physical qubit coherence times and shorter quantum error correction blocks are required to actively safeguard the quantum information, this demonstration is a critical step towards larger codes based on multiple parity measurements. PMID:25923318
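The logic of the three-qubit repetition code can be simulated classically in a few lines: two parity (stabilizer-style) measurements locate a single bit flip without reading the encoded bit itself, and the decoder fails only when two or more flips occur, giving a logical error rate of about 3p² for small physical error probability p. This toy simulation tracks only classical bit flips, not superpositions or the hardware of the paper.

```python
import random

def logical_error_rate(p, n_trials=20000, seed=11):
    """Simulate the 3-bit repetition code with parity syndrome extraction."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        bits = [0, 0, 0]                   # encoded logical 0
        for i in range(3):
            if rng.random() < p:
                bits[i] ^= 1               # independent bit-flip errors
        s1 = bits[0] ^ bits[1]             # parity of bits 1,2 (like Z1Z2)
        s2 = bits[1] ^ bits[2]             # parity of bits 2,3 (like Z2Z3)
        # syndrome lookup: which single flip explains the observed parities
        flip = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(s1, s2)]
        if flip is not None:
            bits[flip] ^= 1
        if bits != [0, 0, 0]:              # decoding failed (two or more flips)
            failures += 1
    return failures / n_trials

p = 0.05
lr = logical_error_rate(p)
print(lr, 3*p*p - 2*p**3)   # simulated vs analytic 3p^2 - 2p^3
```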
Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.
Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F
2001-01-01
When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.
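As a concrete illustration of the regression calibration idea mentioned above, the sketch below estimates the measurement-error variance from two replicate measurements and shrinks the observed average toward its mean by the estimated reliability. The simulated data, variable names, and noise levels are hypothetical, not taken from the depression study.

```python
import numpy as np

# Hypothetical replicate data: x is the true covariate (e.g. log REM counts),
# w1 and w2 are two error-prone replicate measurements of it.
rng = np.random.default_rng(0)
n = 2000
x = rng.normal(0.0, 1.0, n)
w1 = x + rng.normal(0.0, 0.5, n)
w2 = x + rng.normal(0.0, 0.5, n)

w_bar = (w1 + w2) / 2
# Within-pair differences isolate the measurement-error variance.
sigma_u2 = np.var(w1 - w2, ddof=1) / 2            # per-replicate error variance
sigma_x2 = np.var(w_bar, ddof=1) - sigma_u2 / 2   # true-covariate variance
lam = sigma_x2 / (sigma_x2 + sigma_u2 / 2)        # reliability of the 2-replicate mean

# Regression-calibration substitute: shrink the observed mean toward its average.
x_hat = w_bar.mean() + lam * (w_bar - w_bar.mean())
```

The calibrated `x_hat` (rather than `w_bar`) would then enter the Cox model; using both replicates lowers the error variance of the substitute, which echoes the standard-error reduction the abstract reports.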
Error analysis in the measurement of average power with application to switching controllers
NASA Technical Reports Server (NTRS)
Maisel, J. E.
1979-01-01
The behavior of the power measurement error due to the frequency responses of first order transfer functions between the input sinusoidal voltage, input sinusoidal current and the signal multiplier was studied. It was concluded that this measurement error can be minimized if the frequency responses of the first order transfer functions are identical.
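The conclusion above can be checked numerically. The minimal sketch below models each channel as a first-order low-pass filter and computes the relative error in the averaged multiplier output for a unity-power-factor sinusoid; the corner frequencies are illustrative assumptions, not values from the report.

```python
import numpy as np

def power_error(f, fc_v, fc_i):
    """Relative error of average-power measurement when the voltage and current
    channels feeding the multiplier are first-order low-pass filters with corner
    frequencies fc_v and fc_i (unity power factor assumed)."""
    hv = 1.0 / (1.0 + 1j * f / fc_v)
    hi = 1.0 / (1.0 + 1j * f / fc_i)
    # Average multiplier output relative to the true average power V*I/2:
    measured = np.abs(hv) * np.abs(hi) * np.cos(np.angle(hv) - np.angle(hi))
    return measured - 1.0

# With identical responses the channel phase shifts cancel, leaving only a small
# gain attenuation |H|^2; mismatched responses add a phase-difference error.
err_matched = power_error(50.0, 1000.0, 1000.0)
err_mismatched = power_error(50.0, 1000.0, 200.0)
```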
ERIC Educational Resources Information Center
Kim, ChangHwan; Tamborini, Christopher R.
2012-01-01
Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…
A Unified Approach to Measurement Error and Missing Data: Details and Extensions
ERIC Educational Resources Information Center
Blackwell, Matthew; Honaker, James; King, Gary
2017-01-01
We extend a unified and easy-to-use approach to measurement error and missing data. In our companion article, Blackwell, Honaker, and King give an intuitive overview of the new technique, along with practical suggestions and empirical applications. Here, we offer more precise technical details, more sophisticated measurement error model…
Comparing Graphical and Verbal Representations of Measurement Error in Test Score Reports
ERIC Educational Resources Information Center
Zwick, Rebecca; Zapata-Rivera, Diego; Hegarty, Mary
2014-01-01
Research has shown that many educators do not understand the terminology or displays used in test score reports and that measurement error is a particularly challenging concept. We investigated graphical and verbal methods of representing measurement error associated with individual student scores. We created four alternative score reports, each…
ERIC Educational Resources Information Center
Zapata-Rivera, Diego; Zwick, Rebecca; Vezzu, Margaret
2016-01-01
The goal of this study was to explore the effectiveness of a short web-based tutorial in helping teachers to better understand the portrayal of measurement error in test score reports. The short video tutorial included both verbal and graphical representations of measurement error. Results showed a significant difference in comprehension scores…
Veronesi, Giovanni; Ferrario, Marco M; Chambless, Lloyd E
2013-12-01
In this article we focus on comparing measurement error correction methods for rate-of-change exposure variables in survival analysis, when longitudinal data are observed prior to the follow-up time. Motivational examples include the analysis of the association between changes in cardiovascular risk factors and subsequent onset of coronary events. We derive a measurement error model for the rate of change, estimated through subject-specific linear regression, assuming an additive measurement error model for the time-specific measurements. The rate of change is then included as a time-invariant variable in a Cox proportional hazards model, adjusting for the first time-specific measurement (baseline) and an error-free covariate. In a simulation study, we compared bias, standard deviation and mean squared error (MSE) for the regression calibration (RC) and the simulation-extrapolation (SIMEX) estimators. Our findings indicate that when the amount of measurement error is substantial, RC should be the preferred method, since it has smaller MSE for estimating the coefficients of the rate of change and of the variable measured without error. However, when the amount of measurement error is small, the choice of the method should take into account the event rate in the population and the effect size to be estimated. An application to an observational study, as well as examples of published studies where our model could have been applied, are also provided.
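For readers unfamiliar with SIMEX, the sketch below applies it to a simple linear regression with additive measurement error: extra noise is added at increasing levels ζ, the naive slope is recorded, and a quadratic fit is extrapolated back to the no-error point ζ = −1. The data and noise levels are simulated for illustration and are unrelated to the cardiovascular application.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)                        # true covariate
y = 2.0 * x + rng.normal(scale=0.5, size=n)   # outcome; true slope = 2
sigma_u = 0.6                                 # known measurement-error SD
w = x + rng.normal(scale=sigma_u, size=n)     # error-prone measurement

def naive_slope(wz, yz):
    return np.cov(wz, yz)[0, 1] / np.var(wz, ddof=1)

# Simulation step: add extra noise at levels zeta and record the naive slope.
zetas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = [np.mean([naive_slope(w + np.sqrt(z) * sigma_u * rng.normal(size=n), y)
                   for _ in range(50)])
          for z in zetas]

# Extrapolation step: fit a quadratic in zeta and evaluate at zeta = -1 (no error).
beta_simex = np.polyval(np.polyfit(zetas, slopes, 2), -1.0)
```

The naive slope (`slopes[0]`) is attenuated toward zero; the extrapolated `beta_simex` moves much closer to the true value, though the quadratic extrapolant only approximates the exact bias curve.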
A Unified Approach to Measurement Error and Missing Data: Overview and Applications
ERIC Educational Resources Information Center
Blackwell, Matthew; Honaker, James; King, Gary
2017-01-01
Although social scientists devote considerable effort to mitigating measurement error during data collection, they often ignore the issue during data analysis. And although many statistical methods have been proposed for reducing measurement error-induced biases, few have been widely used because of implausible assumptions, high levels of model…
Schöberl, Iris; Kortekaas, Kim; Schöberl, Franz F; Kotrschal, Kurt
2015-12-01
Dog heart rate (HR) is characterized by a respiratory sinus arrhythmia, and therefore makes an automatic algorithm for error correction of HR measurements hard to apply. Here, we present a new method of error correction for HR data collected with the Polar system, including (1) visual inspection of the data, (2) a standardized way to decide with the aid of an algorithm whether or not a value is an outlier (i.e., "error"), and (3) the subsequent removal of this error from the data set. We applied our new error correction method to the HR data of 24 dogs and compared the uncorrected and corrected data, as well as the algorithm-supported visual error correction (AVEC) with the Polar error correction. The results showed that fewer values were identified as errors after AVEC than after the Polar error correction (p < .001). After AVEC, the HR standard deviation and variability (HRV; i.e., RMSSD, pNN50, and SDNN) were significantly greater than after correction by the Polar tool (all p < .001). Furthermore, the HR data strings with deleted values seemed to be closer to the original data than were those with inserted means. We concluded that our method of error correction is more suitable for dog HR and HR variability than is the customized Polar error correction, especially because AVEC decreases the likelihood of Type I errors, preserves the natural variability in HR, and does not lead to a time shift in the data.
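The AVEC procedure itself is not given as code in the abstract, but its core step, flagging a beat as an outlier when it deviates too far from the surrounding values and deleting it rather than substituting a mean, can be sketched as follows. The window size and threshold are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def avec_like_filter(hr, window=5, max_dev=0.3):
    """Drop values deviating more than max_dev (as a fraction) from the local
    median. Deletion (rather than mean substitution) mirrors the finding that
    removing errors keeps the series closer to the original data."""
    hr = np.asarray(hr, dtype=float)
    keep = np.ones(hr.size, dtype=bool)
    for i in range(hr.size):
        lo, hi = max(0, i - window), min(hr.size, i + window + 1)
        local = np.median(np.delete(hr[lo:hi], i - lo))  # neighbors, excluding self
        if abs(hr[i] - local) > max_dev * local:
            keep[i] = False
    return hr[keep]
```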
Significance of gauge line error in orifice measurement
Bowen, J.W.
1995-12-01
Pulsation-induced gauge line amplification can cause errors in the recorded differential signal used to calculate flow. Its presence may be detected using dual transmitters (one connected at the orifice taps, the other at the end of the gauge lines) and comparing the relative peak-to-peak amplitudes. Its effect on the recorded differential may be determined by averaging both signals with a PC-based data acquisition and analysis system. Remedial action is recommended in all cases where amplification is detected. Use of close-connect, full-opening manifolds is suggested to decouple the gauge lines' resonant frequency from that of the excitation, by positioning the recording device as close to the process signal's origin as possible.
Machine tool 3D volumetric positioning error measurement under various thermal conditions
NASA Astrophysics Data System (ADS)
Svoboda, O.; Bach, P.; Liotto, G.; Wang, C.
2006-11-01
To manufacture good-quality, accurate parts, the measurement and compensation of the three-dimensional volumetric positioning errors of a machine tool are very important. Using a conventional laser interferometer to measure the straightness and squareness errors is very difficult and time consuming. Recently, Optodyne has developed a laser vector technique for the measurement of 3D volumetric positioning errors, including 3 linear displacement errors, 6 straightness errors and 3 squareness errors, in a very short time. Using this laser vector technique combined with the data obtained from a set of thermocouples placed at key locations of the machine tool structure, the relations between the machine temperature distribution and the 3D positioning errors can be measured and modeled. The results can be used to compensate the 3D volumetric positioning errors under various thermal conditions. Reported here are the definition of the 3D volumetric positioning errors; the basic theory and description of the laser vector technique; and the temperature sensor and laser vector measurement results obtained on a vertical CNC machining center under different spindle loads, machine temperatures and environmental temperatures.
Sharing is caring? Measurement error and the issues arising from combining 3D morphometric datasets.
Fruciano, Carmelo; Celik, Mélina A; Butler, Kaylene; Dooley, Tom; Weisbecker, Vera; Phillips, Matthew J
2017-09-01
Geometric morphometrics is routinely used in ecology and evolution and morphometric datasets are increasingly shared among researchers, allowing for more comprehensive studies and higher statistical power (as a consequence of increased sample size). However, sharing of morphometric data opens up the question of how much nonbiologically relevant variation (i.e., measurement error) is introduced in the resulting datasets and how this variation affects analyses. We perform a set of analyses based on an empirical 3D geometric morphometric dataset. In particular, we quantify the amount of error associated with combining data from multiple devices and digitized by multiple operators and test for the presence of bias. We also extend these analyses to a dataset obtained with a recently developed automated method, which does not require human-digitized landmarks. Further, we analyze how measurement error affects estimates of phylogenetic signal and how its effect compares with the effect of phylogenetic uncertainty. We show that measurement error can be substantial when combining surface models produced by different devices and even more among landmarks digitized by different operators. We also document the presence of small, but significant, amounts of nonrandom error (i.e., bias). Measurement error is heavily reduced by excluding landmarks that are difficult to digitize. The automated method we tested had low levels of error, if used in combination with a procedure for dimensionality reduction. Estimates of phylogenetic signal can be more affected by measurement error than by phylogenetic uncertainty. Our results generally highlight the importance of landmark choice and the usefulness of estimating measurement error. Further, measurement error may limit comparisons of estimates of phylogenetic signal across studies if these have been performed using different devices or by different operators. Finally, we also show how widely held assumptions do not always hold true
Quantification of uncertainties in OCO-2 measurements of XCO2: simulations and linear error analysis
NASA Astrophysics Data System (ADS)
Connor, Brian; Bösch, Hartmut; McDuffie, James; Taylor, Tommy; Fu, Dejian; Frankenberg, Christian; O'Dell, Chris; Payne, Vivienne H.; Gunson, Michael; Pollock, Randy; Hobbs, Jonathan; Oyafuso, Fabiano; Jiang, Yibo
2016-10-01
We present an analysis of uncertainties in global measurements of the column averaged dry-air mole fraction of CO2 (XCO2) by the NASA Orbiting Carbon Observatory-2 (OCO-2). The analysis is based on our best estimates for uncertainties in the OCO-2 operational algorithm and its inputs, and uses simulated spectra calculated for the actual flight and sounding geometry, with measured atmospheric analyses. The simulations are calculated for land nadir and ocean glint observations. We include errors in measurement, smoothing, interference, and forward model parameters. All types of error are combined to estimate the uncertainty in XCO2 from single soundings, before any attempt at bias correction has been made. From these results we also estimate the "variable error" which differs between soundings, to infer the error in the difference of XCO2 between any two soundings. The most important error sources are aerosol interference, spectroscopy, and instrument calibration. Aerosol is the largest source of variable error. Spectroscopy and calibration, although they are themselves fixed error sources, also produce important variable errors in XCO2. Net variable errors are usually < 1 ppm over ocean and ~0.5-2.0 ppm over land. The total error due to all sources is ~1.5-3.5 ppm over land and ~1.5-2.5 ppm over ocean.
Can the Dyskinesia Impairment Scale be used by inexperienced raters? A reliability study.
Monbaliu, Elegast; Ortibus, Els; Prinzie, Peter; Dan, Bernard; De Cat, Josse; De Cock, Paul; Feys, Hilde
2013-05-01
The Dyskinesia Impairment Scale (DIS) is a new scale for measuring dystonia and choreoathetosis in dyskinetic Cerebral Palsy (CP). Previously, reliability of this scale has only been assessed for raters highly experienced in discriminating between dystonia and choreoathetosis. The aims of this study are to examine the reliability of the DIS used by inexperienced raters, new to discriminating between dystonia and choreoathetosis and to determine the effect of clinical expertise on reliability. Twenty-five patients (17 males; 8 females; age range 5-22 years; mean age = 13 years 6 months; SD = 5 years 4 months) with dyskinetic CP were filmed with the DIS standard video protocol. Two junior physiotherapists (PTs) and three senior PTs, all of whom were new to discriminating between dystonia and choreoathetosis, were trained in scoring the DIS. Afterward, they independently scored all patients from the video recordings using the DIS. Reliability was assessed by (1) Intraclass Correlation Coefficient (ICC), (2) Standard Error of Measurement (SEM) and Minimal Detectable Difference (MDD) and (3) Cronbach's alpha for internal consistency. Interrater reliability for the total DIS, and for the dystonia and choreoathetosis subscales was good for the junior PTs and moderately high to excellent for the senior PTs. SEM and MDD values for the total DIS were 6% and 15% respectively for the junior PTs and 4% and 12% respectively for the senior PTs. Cronbach's alpha ranged between 0.87 and 0.95 for the junior PTs and between 0.76 and 0.93 for the senior PTs. Reliability of the DIS scores for the inexperienced junior and senior PTs was sufficient in comparison with scores from the experienced raters in the previous study, indicating that the DIS can be used by inexperienced PTs new to discriminating between dystonia and choreoathetosis, and also that its reliability is not dependent on clinical expertise. However, based on the measurement errors and questionnaire data, familiarity
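The SEM and MDD statistics reported above follow standard formulas, SEM = SD·√(1 − ICC) and MDD = z·√2·SEM (MDD is also called the minimal detectable change, MDC). A minimal sketch, with illustrative rather than study-specific inputs:

```python
import numpy as np

def sem_and_mdd(sd, icc, z=1.96):
    """SEM = SD * sqrt(1 - ICC); MDD = z * sqrt(2) * SEM (95% level by default)."""
    sem = sd * np.sqrt(1.0 - icc)
    mdd = z * np.sqrt(2.0) * sem
    return sem, mdd
```

For example, a between-subject SD of 10 score points with ICC = 0.91 gives SEM = 3 points and an MDD of about 8.3 points.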
Wang, Ming; Flanders, W Dana; Bostick, Roberd M; Long, Qi
2012-12-20
Measurement error is common in epidemiological and biomedical studies. When biomarkers are measured in batches or groups, measurement error is potentially correlated within each batch or group. In regression analysis, most existing methods are not applicable in the presence of batch-specific measurement error in predictors. We propose a robust conditional likelihood approach to account for batch-specific error in predictors when batch effect is additive and the predominant source of error, which requires no assumptions on the distribution of measurement error. Although a regression model with batch as a categorical covariable yields the same parameter estimates as the proposed conditional likelihood approach for linear regression, this result does not hold in general for all generalized linear models, in particular, logistic regression. Our simulation studies show that the conditional likelihood approach achieves better finite sample performance than the regression calibration approach or a naive approach without adjustment for measurement error. In the case of logistic regression, our proposed approach is shown to also outperform the regression approach with batch as a categorical covariate. In addition, we also examine a 'hybrid' approach combining the conditional likelihood method and the regression calibration method, which is shown in simulations to achieve good performance in the presence of both batch-specific and measurement-specific errors. We illustrate our method by using data from a colorectal adenoma study.
Compensation method for the alignment angle error in pitch deviation measurement
NASA Astrophysics Data System (ADS)
Liu, Yongsheng; Fang, Suping; Wang, Huiyi; Taguchi, Tetsuya; Takeda, Ryohei
2016-05-01
When measuring the tooth flank of an involute helical gear by gear measuring center (GMC), the alignment angle error of a gear axis, which was caused by the assembly error and manufacturing error of the GMC, will affect the measurement accuracy of pitch deviation of the gear tooth flank. Based on the model of the involute helical gear and the tooth flank measurement theory, a method is proposed to compensate the alignment angle error that is included in the measurement results of pitch deviation, without changing the initial measurement method of the GMC. Simulation experiments are done to verify the compensation method and the results show that after compensation, the alignment angle error of the gear axis included in measurement results of pitch deviation declines significantly, more than 90% of the alignment angle errors are compensated, and the residual alignment angle errors in pitch deviation measurement results are less than 0.1 μm. It shows that the proposed method can improve the measurement accuracy of the GMC when measuring the pitch deviation of involute helical gear.
Linnet, K
1990-12-01
The linear relationship between the measurements of two methods is estimated on the basis of a weighted errors-in-variables regression model that takes into account a proportional relationship between the standard deviations of the error distributions and the true variable levels. Weights are estimated by an iterative procedure. As shown by simulations, the regression procedure yields practically unbiased slope estimates in realistic situations. Standard errors of the slope and location difference estimates are derived by the jackknife principle. For illustration, the linear relationship is estimated between the measurements of two albumin methods with proportional errors.
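An iteratively reweighted errors-in-variables (Deming) regression of this kind can be sketched as follows, assuming an error-variance ratio of 1 and weights inversely proportional to the squared level, as appropriate for proportional errors. The simulated albumin-like data are illustrative, not the paper's.

```python
import numpy as np

def weighted_deming(x, y, n_iter=10):
    """Deming regression (error-variance ratio 1) with weights 1/level^2,
    recomputed each iteration, for proportional (constant-CV) errors."""
    b, a = 1.0, 0.0
    for _ in range(n_iter):
        level = (x + (y - a) / b) / 2.0            # estimate of the true level
        w = 1.0 / level ** 2
        mx, my = np.average(x, weights=w), np.average(y, weights=w)
        sxx = np.average((x - mx) ** 2, weights=w)
        syy = np.average((y - my) ** 2, weights=w)
        sxy = np.average((x - mx) * (y - my), weights=w)
        b = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4.0 * sxy ** 2)) / (2.0 * sxy)
        a = my - b * mx
    return b, a

# Simulated comparison of two methods with 3% proportional error, true slope 1.1.
rng = np.random.default_rng(2)
t = rng.uniform(20.0, 60.0, 500)
x = t * (1.0 + 0.03 * rng.normal(size=500))
y = 1.1 * t * (1.0 + 0.03 * rng.normal(size=500))
b, a = weighted_deming(x, y)
```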
Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar
Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas
2008-06-24
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
Zhu, Minhao; Wei, Haoyun; Wu, Xuejian; Li, Yan
2014-08-01
Periodic error is the major problem that limits the accuracy of heterodyne interferometry. A traceable system for periodic error measurement is developed based on a nonlinearity free Fabry-Perot (F-P) interferometer. The displacement accuracy of the F-P interferometer is 0.49 pm at 80 ms averaging time, with the measurement results referenced to an optical frequency comb. Experimental comparison between the F-P interferometer and a commercial heterodyne interferometer is carried out and it shows that the first harmonic periodic error dominates in the commercial heterodyne interferometer with an error amplitude of 4.64 nm.
Study on the method of roundness error measurement based on GPS operation technology
NASA Astrophysics Data System (ADS)
Qing, Kewei; Zhang, Linna; Zheng, Peng; Miao, Xiaodan
2006-11-01
With the development of measurement techniques, especially precision and ultra-precision measurement, the measurement of geometric error based on the old GPS (geometrical product specification and verification) technology can hardly meet the requirements of high accuracy and efficiency. A new GPS system, introduced by ISO/TC213 and based on metrology, unifies the whole process of product specification and verification and realizes digitized, standardized measurement. It provides a new approach to measuring geometrical error using the operation and operator technologies. This paper describes in detail the application of the operation technology to roundness error measurement and presents the method and key techniques for realizing the process. Finally, by designing an appropriate verification operator for roundness error, the measurement process is specified and the measurement uncertainty is decreased.
Analysis of the possible measurement errors for the PM10 concentration measurement at Gosan, Korea
NASA Astrophysics Data System (ADS)
Shin, S.; Kim, Y.; Jung, C.
2010-12-01
The reliability of the measurement of ambient trace species is an important issue, especially in a background area such as Gosan in Jeju Island, Korea. In a previous episodic study in Gosan (NIER, 2006), it was found that the PM10 concentration measured by the β-ray absorption method (BAM) was higher than that measured by the gravimetric method (GMM), and the correlation between them was low. Based on the previous studies (Chang et al., 2001; Katsuyuki et al., 2008), two probable reasons for the discrepancy are identified: (1) negative measurement error due to the evaporation of volatile ambient species such as nitrate, chloride, and ammonium at the filter in GMM, and (2) positive error due to the absorption of water vapor during measurement in BAM. There was no heater at the inlet of BAM in Gosan during the sampling period. In this study, we have analyzed the negative and positive errors quantitatively by using the gas/particle equilibrium model SCAPE (Simulating Composition of Atmospheric Particles at Equilibrium) for the data between May 2001 and June 2008 with the aerosol and gaseous composition data. We have estimated the degree of evaporation at the filter in GMM by comparing the volatile ionic species concentration calculated by SCAPE at thermodynamic equilibrium under the meteorological conditions during the sampling period with the mass concentration measured by ion chromatography. Also, based on the aerosol water content calculated by SCAPE, we have estimated quantitatively the effect of ambient humidity during measurement in BAM. Subsequently, this study shows whether the discrepancy can be explained by some other factors by applying multiple regression analyses. References Chang, C. T., Tsai, C. J., Lee, C. T., Chang, S. Y., Cheng, M. T., Chein, H. M., 2001, Differences in PM10 concentrations measured by β-gauge monitor and hi-vol sampler, Atmospheric Environment, 35, 5741-5748. Katsuyuki, T. K., Hiroaki, M. R., and Kazuhiko, S. K., 2008, Examination of discrepancies between beta
Quantifying Error in Survey Measures of School and Classroom Environments
ERIC Educational Resources Information Center
Schweig, Jonathan David
2014-01-01
Developing indicators that reflect important aspects of school and classroom environments has become central in a nationwide effort to develop comprehensive programs that measure teacher quality and effectiveness. Formulating teacher evaluation policy necessitates accurate and reliable methods for measuring these environmental variables. This…
Influencing factors and error analysis for specular gloss measurement
NASA Astrophysics Data System (ADS)
Li, Tiecheng; Shi, Leibing; Lai, Lei; Lin, Fangsheng; Yin, Dejin; Xia, Ming; Wu, Limin
2016-09-01
Specular gloss has been widely used to characterize the ability of a surface to reflect light specularly. It is theoretically related to the physical properties of a surface, such as roughness, directionality and uniformity; as a relative measurement quantity determined mainly by the incident angle and the refractive index of the surface, it is usually measured with a glossmeter. We analyze how the topographical and optical properties of a surface affect the measurements. The experimental results indicate that a smoother, more isotropic and more uniform surface yields a more accurate measurement value. Therefore, the physical properties of a surface must be carefully inspected before specular gloss measurement in order to obtain a satisfactory result.
Shen, Chung-Wei; Chen, Yi-Hau
2015-10-01
Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. On the contrary, the naive procedures without taking care of such complexity in data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations.
Gas hydrate estimation error associated with uncertainties of measurements and parameters
Lee, Myung W.; Collett, Timothy S.
2001-01-01
Downhole log measurements such as acoustic or electrical resistivity logs are often used to estimate in situ gas hydrate concentrations in sediment pore space. Estimation errors owing to uncertainties associated with downhole measurements and the parameters for estimation equations (weight in the acoustic method and Archie's parameters in the resistivity method) are analyzed in order to assess the accuracy of estimation of gas hydrate concentration. Accurate downhole measurements are essential for accurate estimation of the gas hydrate concentrations in sediments, particularly at low gas hydrate concentrations and when using acoustic data. Estimation errors owing to measurement errors, except the slowness error, decrease as the gas hydrate concentration increases and as porosity increases. Estimation errors owing to uncertainty in the input parameters are small in the acoustic method and may be significant in the resistivity method at low gas hydrate concentrations.
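The kind of uncertainty analysis described above, propagating measurement and parameter uncertainties through an estimation equation, can be sketched generically with first-order (delta-method) propagation using numerical partial derivatives. The example function and standard deviations below are hypothetical, not the acoustic or Archie equations from the study.

```python
import math

def propagated_sd(f, params, sds, eps=1e-6):
    """First-order (delta-method) error propagation with numerical partials,
    assuming independent errors: sd(f)^2 = sum_i (df/dp_i * sd_i)^2."""
    var = 0.0
    for i, (p, s) in enumerate(zip(params, sds)):
        hi = list(params); hi[i] = p + eps
        lo = list(params); lo[i] = p - eps
        dfdp = (f(*hi) - f(*lo)) / (2.0 * eps)  # central-difference partial
        var += (dfdp * s) ** 2
    return math.sqrt(var)
```

For f(a, b) = a·b at (2, 3) with standard deviations (0.1, 0.2), the propagated SD is √(0.3² + 0.4²) = 0.5.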
Quantization Error Reduction in the Measurement of Fourier Intensity for Phase Retrieval
NASA Astrophysics Data System (ADS)
Yang, Shiyuan; Takajo, Hiroaki
2004-08-01
The quantization error in the measurement of Fourier intensity for phase retrieval is discussed and a multispectra method is proposed to reduce this error. The Fourier modulus used for phase retrieval is usually obtained by measuring Fourier intensity with a digital device. Therefore, quantization error in the measurement of Fourier intensity leads to an error in the reconstructed object when iterative Fourier transform algorithms are used. The multispectra method uses several Fourier intensity distributions for a number of measurement ranges to generate a Fourier intensity distribution with a low quantization error. Simulations show that the multispectra method is effective in retrieving objects with real or complex distributions when the iterative hybrid input-output algorithm (HIO) is used.
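The quantization error discussed above has a well-known statistical model: for a uniform quantizer with step Δ, the error is approximately uniform on [−Δ/2, Δ/2] with RMS Δ/√12. The sketch below verifies this numerically; the bit depth and full-scale range are illustrative assumptions.

```python
import numpy as np

def quantize(x, full_scale, bits):
    """Uniform (mid-tread) quantizer with step = full_scale / 2**bits."""
    step = full_scale / (2 ** bits)
    return np.round(x / step) * step

# Quantize uniformly distributed intensities with an 8-bit device and
# compare the empirical RMS error with the theoretical step / sqrt(12).
rng = np.random.default_rng(3)
x = rng.uniform(0.0, 1.0, 100_000)
err = quantize(x, 1.0, 8) - x
rms = np.sqrt(np.mean(err ** 2))
step = 1.0 / 2 ** 8
```

Combining several measurement ranges, as the multispectra method does, effectively reduces Δ in the regions where each range is used.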
Measurement error analysis of Brillouin lidar system using F-P etalon and ICCD
NASA Astrophysics Data System (ADS)
Yao, Yuan; Niu, Qunjie; Liang, Kun
2016-09-01
Brillouin lidar systems using a Fabry-Pérot (F-P) etalon and an Intensified Charge Coupled Device (ICCD) are capable of real-time remote measurement of seawater properties such as temperature. The measurement accuracy is determined by two key parameters, the Brillouin frequency shift and the Brillouin linewidth. Three major errors, namely the laser frequency instability, the calibration error of the F-P etalon and the random shot noise, are discussed. Theoretical analysis combined with simulation results showed that the laser and F-P etalon cause about 4 MHz of error in both the Brillouin shift and linewidth, and that random noise brings more error to the linewidth than to the frequency shift. A comprehensive and comparative analysis of the overall errors under various conditions proved that a colder ocean (10 °C) is more accurately measured with the Brillouin linewidth, and a warmer ocean (30 °C) is better measured with the Brillouin shift.
A semiparametric copula method for Cox models with covariate measurement error.
Kim, Sehee; Li, Yi; Spiegelman, Donna
2016-01-01
We consider the measurement error problem in the Cox model, where the underlying association between the true exposure and its surrogate is unknown but can be estimated from a validation study. Under this framework, one can accommodate general distributional structures for the error-prone covariates, not restricted to a linear additive measurement error model or Gaussian measurement error. The proposed copula-based approach enables us to fit flexible measurement error models and is applicable with an internal or external validation study. Large-sample properties are derived, and finite-sample properties are investigated through extensive simulation studies. The methods are applied to a study of physical activity in relation to breast cancer mortality in the Nurses' Health Study.
Calibration for the errors resulted from aberration in long focal length measurement
NASA Astrophysics Data System (ADS)
Yao, Jiang; Luo, Jia; He, Fan; Bai, Jian; Wang, Kaiwei; Hou, Xiyun; Hou, Changlun
2014-09-01
In this paper, a high-accuracy method for calibrating the errors resulting from aberration in long-focal-length measurement is presented. Generally, the Gaussian equation is used for the calculation without consideration of the errors caused by aberration. However, these errors are the key factor limiting accuracy in the measurement system for a lens of large aperture and long focal length. We introduce an effective way to calibrate these errors, with a detailed analysis of long-focal-length measurement based on divergent light and Talbot interferometry. Aberration errors are simulated in Zemax. We then achieve automatic correction with software written in Visual C++, and the experimental results reveal that the relative accuracy is better than 0.01%. By comparing the corrected values with experimental results obtained by knife-edge testing, the proposed method is shown to be highly effective and reliable.
Thomas, Laine; Stefanski, Leonard A.; Davidian, Marie
2013-01-01
In clinical studies, covariates are often measured with error due to biological fluctuations, device error, and other sources. Summary statistics and regression models based on mismeasured data will differ from the corresponding analysis based on the "true" covariate. Statistical analysis can be adjusted for measurement error; however, the various methods exhibit a trade-off between convenience and performance. Moment Adjusted Imputation (MAI) is a method for measurement error in a scalar latent variable that is easy to implement and performs well in a variety of settings. In practice, multiple covariates may be similarly influenced by biological fluctuations, inducing correlated multivariate measurement error. The extension of MAI to the setting of multivariate latent variables involves unique challenges. Alternative strategies are described, including a computationally feasible option that is shown to perform well. PMID:24072947
Period, epoch, and prediction errors of ephemerides from continuous sets of timing measurements
NASA Astrophysics Data System (ADS)
Deeg, H. J.
2015-06-01
Space missions such as Kepler and CoRoT have led to large numbers of eclipse or transit measurements in nearly continuous time series. This paper shows how to obtain the period error in such measurements from a basic linear least-squares fit, and how to correctly derive the timing error in the prediction of future transit or eclipse events. Assuming strict periodicity, a formula for the period error of these time series is derived: σ_P = σ_T (12/(N³ − N))^(1/2), where σ_P is the period error, σ_T the timing error of a single measurement, and N the number of measurements. Compared to the iterative method for period error estimation by Mighell & Plavchan (2013), this much simpler formula leads to smaller period errors, whose correctness has been verified through simulations. For the prediction of times of future periodic events, the usual linear ephemerides, in which epoch errors are quoted for the first time measurement, are prone to an overestimation of the error of that prediction. This may be avoided by a correction for the duration of the time series. An alternative is the derivation of ephemerides whose reference epoch and epoch error are given for the centre of the time series. For long continuous or near-continuous time series whose acquisition is completed, such central epochs should be the preferred way of quoting linear ephemerides. While this work was motivated by the analysis of eclipse timing measures in space-based light curves, it should be applicable to any other problem with an uninterrupted sequence of discrete timings for which the determination of a zero point, of a constant period, and of the associated errors is needed.
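The quoted period-error formula is the standard variance of an ordinary least-squares slope when the epochs are equally spaced, since Σ(E − Ē)² = N(N² − 1)/12 for E = 0..N−1. A minimal numerical cross-check (illustrative values for σ_T and N):

```python
import numpy as np

def period_error(sigma_t, n):
    """Deeg (2015): sigma_P = sigma_T * sqrt(12 / (N^3 - N))."""
    return sigma_t * np.sqrt(12.0 / (n**3 - n))

# OLS slope-variance identity: var(slope) = sigma^2 / sum((E - Ebar)^2),
# with equally spaced epochs E = 0..N-1 so that sum((E - Ebar)^2) = N(N^2-1)/12.
n, sigma_t = 100, 0.001          # 100 timings, 0.001 d timing error (illustrative)
epochs = np.arange(n)
ols_sigma = sigma_t / np.sqrt(np.sum((epochs - epochs.mean()) ** 2))
```

The two expressions agree to machine precision, confirming the formula is just the closed form of the least-squares fit the paper describes.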
Detection and measurement of rotator cuff tears with sonography: analysis of diagnostic errors.
Teefey, Sharlene A; Middleton, William D; Payne, William T; Yamaguchi, Ken
2005-06-01
The purpose of this study was to analyze the causes of errors in the detection and measurement of rotator cuff tears in our patient population. Seventy-one consecutive patients with shoulder pain who were prospectively studied with sonography had subsequent arthroscopy that showed a full-thickness or partial-thickness tear or intact cuff. For sonography and arthroscopy, the length or degree of retraction and width of a tear, when present, was recorded. When there were discrepant findings, representative images were jointly evaluated by the radiologist and orthopedic surgeon to determine the cause of the error. Fifteen detection errors were found, including five misses (three < 5-mm subscapularis and two small partial-thickness tears), four errors inherent with the test (distinguishing large bursal side or extensive partial-thickness from full-thickness tears and tendinopathy from partial-thickness tears), three errors of an unknown cause, two due to misinterpretation, and one error inherent with the patient. Seventeen measurement errors occurred with full-thickness tears, 15 of those in patients with large or massive tears. Bursal thickening (n = 4), non-visualization of the torn tendon end (n = 2), nonretracted tear (n = 2), and complex tear (n = 1) contributed to the errors. Eight measurement errors occurred with partial-thickness tears. Difficulty distinguishing tendinopathy from partial-thickness tears (n = 3) and complex tears (n = 3) accounted for six errors. Although infrequent, detection errors were due to limitations inherent with the test or misses. Limitations inherent with the patient and misinterpretation of the findings were rare. Most measurement errors occurred in patients with large or massive cuff tears.
Mints, M.Ya.; Chinkov, V.N.
1995-09-01
Rational algorithms are described for measuring the harmonic coefficient in microprocessor-based instruments for measuring nonlinear distortion, based on digital processing of the codes of the instantaneous values of the signal under investigation, and the errors of such instruments are obtained.
The effect of proficiency level on measurement error of range of motion
Akizuki, Kazunori; Yamaguchi, Kazuto; Morita, Yoshiyuki; Ohashi, Yukari
2016-01-01
[Purpose] The aims of this study were to evaluate the type and extent of error in the measurement of range of motion and to evaluate the effect of evaluators’ proficiency level on measurement error. [Subjects and Methods] The participants were 45 university students, in different years of their physical therapy education, and 21 physical therapists, with up to three years of clinical experience in a general hospital. Range of motion of right knee flexion was measured using a universal goniometer. An electrogoniometer attached to the right knee and hidden from the view of the participants was used as the criterion to evaluate error in measurement using the universal goniometer. The type and magnitude of error were evaluated using the Bland-Altman method. [Results] Measurements with the universal goniometer were not influenced by systematic bias. The extent of random error in measurement decreased as the level of proficiency and clinical experience increased. [Conclusion] Measurements of range of motion obtained using a universal goniometer are influenced by random errors, with the extent of error being a factor of proficiency. Therefore, increasing the amount of practice would be an effective strategy for improving the accuracy of range of motion measurements. PMID:27799712
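The Bland-Altman quantities used in studies like this one (bias and 95% limits of agreement) reduce to a few lines of code. The sketch below uses synthetic knee-flexion readings, not the study's data; the noise level and sample size are illustrative.

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between paired measurements."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Illustrative paired knee-flexion readings (degrees): goniometer vs criterion.
rng = np.random.default_rng(1)
truth = rng.uniform(120.0, 140.0, 30)                # electrogoniometer criterion
goniometer = truth + rng.normal(0.0, 2.0, 30)        # random error, no systematic bias
bias, lo, hi = bland_altman(goniometer, truth)
```

With purely random error, the bias stays near zero and the width of the limits of agreement reflects the magnitude of random error, which is the pattern the study reports.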
Improved characterisation of measurement errors in electrical resistivity tomography (ERT) surveys
NASA Astrophysics Data System (ADS)
Tso, C. H. M.; Binley, A. M.; Kuras, O.; Graham, J.
2016-12-01
Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe a statistical model of data errors before inversion. Wrongly prescribed error levels can lead to over- or under-fitting of data, yet commonly used models of measurement error are relatively simplistic. With the heightened interest in uncertainty estimation across hydrogeophysics, better characterisation and treatment of measurement errors is needed to provide more reliable estimates of uncertainty. We have analysed two time-lapse electrical resistivity tomography (ERT) datasets; one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24 h timeframe, while the other is a year-long cross-borehole survey at a UK nuclear site with over 50,000 daily measurements. Our study included the characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and covariance analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used. This agrees with speculation in previous literature that ERT errors could be somewhat correlated. Based on these findings, we develop a new error model that allows grouping based on electrode number in addition to fitting a linear model to transfer resistance. The new model fits the observed measurement errors better and shows superior inversion and uncertainty estimates in synthetic examples. It is robust, because it groups errors together based on the numbers of the four electrodes used to make each measurement. The new model can be readily applied to the diagonal data weighting matrix commonly used in classical inversion methods, as well as to the data covariance matrix in the Bayesian inversion framework. We demonstrate its application using extensive ERT monitoring datasets from the two aforementioned sites.
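The "linear model to transfer resistance" component of conventional ERT error models can be sketched as a least-squares fit of reciprocal error against transfer resistance. The electrode-grouping extension described in the abstract is omitted here, and all numbers are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic transfer resistances and reciprocal errors: a base error level plus
# a component proportional to |R| (the well-known proportionality effect).
R = rng.uniform(1.0, 100.0, 2000)                      # ohm, illustrative
err = 0.05 + 0.02 * R + rng.normal(0.0, 0.1, R.size)   # observed reciprocal error

# Conventional linear error model: |e| = a + b*|R|, fit by least squares.
b, a = np.polyfit(R, err, 1)
```

The fitted `a` and `b` would then populate the diagonal data-weighting matrix used in classical inversion; the paper's contribution is to refine this by additionally grouping errors by electrode number.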
Error analysis of integrated water vapor measured by CIMEL photometer
NASA Astrophysics Data System (ADS)
Berezin, I. A.; Timofeyev, Yu. M.; Virolainen, Ya. A.; Frantsuzova, I. S.; Volkova, K. A.; Poberovsky, A. V.; Holben, B. N.; Smirnov, A.; Slutsker, I.
2017-01-01
Water vapor plays a key role in weather and climate forming, which leads to the need for continuous monitoring of its content in different parts of the Earth. Intercomparison and validation of different methods for integrated water vapor (IWV) measurements are essential for determining the real accuracies of these methods. CIMEL photometers measure IWV at hundreds of ground-based stations of the AERONET network. We analyze simultaneous IWV measurements performed by a CIMEL photometer, an RPG-HATPRO MW radiometer, and a FTIR Bruker 125-HR spectrometer at the Peterhof station of St. Petersburg State University. We show that the CIMEL photometer calibrated by the manufacturer significantly underestimates the IWV obtained by other devices. We may conclude from this intercomparison that it is necessary to perform an additional calibration of the CIMEL photometer, as well as a possible correction of the interpretation technique for CIMEL measurements at the Peterhof site.
Tilt error in cryospheric surface radiation measurements at high latitudes: a model study
NASA Astrophysics Data System (ADS)
Bogren, Wiley Steven; Faulkner Burkhart, John; Kylling, Arve
2016-03-01
We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can introduce up to 2.7, 8.1, and 13.5% error, respectively, into the measured irradiance, and similar errors into the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can also persist in integrated daily irradiance and albedo. Simulations including a cloud layer demonstrate decreasing tilt error with increasing cloud optical depth.
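The dominance of the direct component can be illustrated with the worst-case geometry (sensor tilted within the solar azimuth plane). The published figures also fold in diffuse and spectral weighting, so this bare-cosine sketch gives slightly different numbers:

```python
import numpy as np

def direct_tilt_error(sza_deg, tilt_deg):
    """Relative error in the direct irradiance when the sensor is tilted
    toward the sun by tilt_deg at solar zenith angle sza_deg (worst case)."""
    sza, tilt = np.radians(sza_deg), np.radians(tilt_deg)
    return np.cos(sza - tilt) / np.cos(sza) - 1.0

# At 60 deg solar zenith angle, a 1 deg tilt already gives a ~3% worst-case
# error in the direct component; the error grows quickly with tilt angle.
err_1deg = direct_tilt_error(60.0, 1.0)
```

The small-angle expansion gives error ≈ tan(SZA)·tilt, which is why the effect is so much larger at the high solar zenith angles typical of high latitudes.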
Quantitative evaluation of statistical errors in small-angle X-ray scattering measurements.
Sedlak, Steffen M; Bruetzel, Linda K; Lipfert, Jan
2017-04-01
A new model is proposed for the measurement errors incurred in typical small-angle X-ray scattering (SAXS) experiments, which takes into account the setup geometry and physics of the measurement process. The model accurately captures the experimentally determined errors from a large range of synchrotron and in-house anode-based measurements. Its most general formulation gives for the variance of the buffer-subtracted SAXS intensity σ²(q) = [I(q) + const.]/(kq), where I(q) is the scattering intensity as a function of the momentum transfer q; k and const. are fitting parameters that are characteristic of the experimental setup. The model gives a concrete procedure for calculating realistic measurement errors for simulated SAXS profiles. In addition, the results provide guidelines for optimizing SAXS measurements, which are in line with established procedures for SAXS experiments, and enable a quantitative evaluation of measurement errors.
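Because the model is linear in I(q) after multiplying through by q, the parameters k and const. can be recovered by an ordinary least-squares fit. The sketch below uses a toy intensity profile and synthetic variances; a weighted fit would be more appropriate for strongly heteroscedastic data:

```python
import numpy as np

def saxs_variance(q, I, k, const):
    """Sedlak et al. error model: var(q) = (I(q) + const) / (k * q)."""
    return (I + const) / (k * q)

rng = np.random.default_rng(3)
q = np.linspace(0.01, 0.5, 200)               # momentum transfer, illustrative units
I = 100.0 * np.exp(-30.0 * q)                 # smooth toy intensity profile
k_true, c_true = 5000.0, 10.0
var_obs = saxs_variance(q, I, k_true, c_true) * rng.normal(1.0, 0.02, q.size)

# Linearised form: var * q = I/k + const/k, i.e. linear in I.
slope, intercept = np.polyfit(I, var_obs * q, 1)   # slope = 1/k, intercept = const/k
k_fit, c_fit = 1.0 / slope, intercept / slope
```

Recovering k and const. from observed variances is exactly the "concrete procedure" the abstract refers to: once fitted, the model generates realistic errors for simulated profiles.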
Modal Correction Method For Dynamically Induced Errors In Wind-Tunnel Model Attitude Measurements
NASA Technical Reports Server (NTRS)
Buehrle, R. D.; Young, C. P., Jr.
1995-01-01
This paper describes a method for correcting the dynamically induced bias errors in wind tunnel model attitude measurements using measured modal properties of the model system. At NASA Langley Research Center, the predominant instrumentation used to measure model attitude is a servo-accelerometer device that senses the model attitude with respect to the local vertical. Under smooth wind tunnel operating conditions, this inertial device can measure the model attitude with an accuracy of 0.01 degree. During wind tunnel tests when the model is responding at high dynamic amplitudes, the inertial device also senses the centrifugal acceleration associated with model vibration. This centrifugal acceleration results in a bias error in the model attitude measurement. A study of the response of a cantilevered model system to a simulated dynamic environment shows significant bias error in the model attitude measurement can occur and is vibration mode and amplitude dependent. For each vibration mode contributing to the bias error, the error is estimated from the measured modal properties and tangential accelerations at the model attitude device. Linear superposition is used to combine the bias estimates for individual modes to determine the overall bias error as a function of time. The modal correction model predicts the bias error to a high degree of accuracy for the vibration modes characterized in the simulated dynamic environment.
Errors and uncertainties in the measurement of ultrasonic wave attenuation and phase velocity.
Kalashnikov, Alexander N; Challis, Richard E
2005-10-01
This paper presents an analysis of the error generation mechanisms that affect the accuracy of measurements of the ultrasonic wave attenuation coefficient and phase velocity as functions of frequency. In the first stage of the analysis we show that electronic system noise, expressed in the frequency domain, maps into errors in the attenuation and phase velocity spectra in a highly nonlinear way; the condition for minimum error is when the total measured attenuation is around 1 neper. The maximum measurable total attenuation has a practical limit of around 6 nepers and the minimum measurable value is around 0.1 neper. In the second part of the paper we consider electronic noise as the primary source of measurement error; errors in attenuation result from additive noise, whereas errors in phase velocity result from both additive noise and system timing jitter. Quantization noise can be neglected if the amplitude of the additive noise is comparable with the quantization step and coherent averaging is employed. Experimental results are presented which confirm the relationship between electronic noise and measurement errors. The analytical technique is applicable to the design of ultrasonic spectrometers, formal assessment of the accuracy of ultrasonic measurements, and the optimization of signal processing procedures to achieve a specified accuracy.
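The "minimum error near 1 neper" condition can be motivated in one line: with received amplitude V = V0·exp(−A) and additive voltage noise σ, the error on the estimated attenuation A is σ/V = (σ/V0)·exp(A), so the relative error scales as exp(A)/A, which is minimized at A = 1. This is a simplified reading of the paper's analysis, checked numerically below:

```python
import numpy as np

# Relative error on the attenuation estimate, up to the constant factor
# sigma/V0: sigma_A / A ~ exp(A) / A, with A the total attenuation in nepers.
A = np.linspace(0.05, 6.0, 2000)
rel_error = np.exp(A) / A

A_opt = A[np.argmin(rel_error)]   # expected near 1 neper
```

The curve also rises steeply above a few nepers, consistent with the practical measurable range of roughly 0.1 to 6 nepers quoted in the abstract.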
Tosteson, Tor D; Buzas, Jeffrey S; Demidenko, Eugene; Karagas, Margaret
2003-04-15
Covariate measurement error is often a feature of scientific data used for regression modelling. The consequences of such errors include a loss of power of tests of significance for the regression parameters corresponding to the true covariates. Power and sample size calculations that ignore covariate measurement error tend to overestimate power and underestimate the actual sample size required to achieve a desired power. In this paper we derive a novel measurement error corrected power function for generalized linear models using a generalized score test based on quasi-likelihood methods. Our power function is flexible in that it is adaptable to designs with a discrete or continuous scalar covariate (exposure) that can be measured with or without error, allows for additional confounding variables and applies to a broad class of generalized regression and measurement error models. A program is described that provides sample size or power for a continuous exposure with a normal measurement error model and a single normal confounder variable in logistic regression. We demonstrate the improved properties of our power calculations with simulations and numerical studies. An example is given from an ongoing study of cancer and exposure to arsenic as measured by toenail concentrations and tap water samples.
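The direction of the bias (overestimated power, underestimated sample size) can be reproduced with a normal-approximation power function and the classical attenuation of the slope by the reliability ratio. This is a simplified linear-regression analogue of the paper's generalized score test, with illustrative parameter values:

```python
from math import erf, sqrt

def power_slope(beta, n, var_x, sigma_res, z_alpha=1.96):
    """Normal-approximation power for a two-sided test of a regression slope."""
    ncp = abs(beta) * sqrt(n * var_x) / sigma_res
    return 0.5 * (1.0 + erf((ncp - z_alpha) / sqrt(2.0)))

# Classical error W = X + U attenuates the slope by the reliability ratio
# lam = var_x / (var_x + var_u) and inflates the residual variance.
beta, var_x, var_u, sigma_eps, n = 0.3, 1.0, 0.5, 1.0, 100
lam = var_x / (var_x + var_u)
beta_star = lam * beta
sigma_star = sqrt(sigma_eps**2 + beta**2 * var_x * (1.0 - lam))

naive = power_slope(beta, n, var_x, sigma_eps)                  # ignores error
actual = power_slope(beta_star, n, var_x + var_u, sigma_star)   # error-prone W
```

With these values the naive calculation promises noticeably more power than the error-prone covariate actually delivers, which is the phenomenon the corrected power function addresses.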
Ambient Temperature Changes and the Impact to Time Measurement Error
NASA Astrophysics Data System (ADS)
Ogrizovic, V.; Gucevic, J.; Delcev, S.
2012-12-01
Measurements in geodetic astronomy are mainly performed outdoors at night, when the temperature often decreases quickly. Time-keeping during a measuring session is provided by collecting UTC time ticks from a GPS receiver and transferring them to a laptop computer. An interrupt handler routine processes the received UTC impulses in real time and calculates the clock parameters. The characteristics of the computer's quartz clock are influenced by temperature changes in the environment. We exposed the laptop to different environmental temperature conditions and calculated the clock parameters for each environmental model. The results show that a laptop used for time-keeping in outdoor measurements should be kept in a stable temperature environment, at temperatures near 20 °C.
Mean-square error due to gradiometer field measuring devices.
Hatsell, C P
1991-06-01
Gradiometers use spatial common mode magnetic field rejection to reduce interference from distant sources. They also introduce distortion that can be severe, rendering experimental data difficult to interpret. Attempts to recover the measured magnetic field from the gradiometer output will be plagued by the nonexistence of a spatial function for deconvolution (except for first-order gradiometers), and by the high-pass nature of the spatial transform that emphasizes high spatial frequency noise. Goals of a design for a facility for measuring biomagnetic fields should be an effective shielded room and a field detector employing a first-order gradiometer.
Defining uncertainty and error in planktic foraminiferal oxygen isotope measurements
NASA Astrophysics Data System (ADS)
Fraass, A. J.; Lowery, C. M.
2017-02-01
Foraminifera are the backbone of paleoceanography. Planktic foraminifera are one of the leading tools for reconstructing water column structure. However, there are unconstrained variables when dealing with uncertainty in the reproducibility of oxygen isotope measurements. This study presents the first results from a simple model of foraminiferal calcification (Foraminiferal Isotope Reproducibility Model; FIRM), designed to estimate uncertainty in oxygen isotope measurements. FIRM uses parameters including location, depth habitat, season, number of individuals included in measurement, diagenesis, misidentification, size variation, and vital effects to produce synthetic isotope data in a manner reflecting natural processes. Reproducibility is then tested using Monte Carlo simulations. Importantly, this is not an attempt to fully model the entire complicated process of foraminiferal calcification; instead, we are trying to include only enough parameters to estimate the uncertainty in foraminiferal δ18O records. Two well-constrained empirical data sets are simulated successfully, demonstrating the validity of our model. The results from a series of experiments with the model show that reproducibility is not only largely controlled by the number of individuals in each measurement but also strongly a function of local oceanography if the number of individuals is held constant. Parameters like diagenesis or misidentification have an impact on both the precision and the accuracy of the data. FIRM is a tool to estimate isotopic uncertainty values and to explore the impact of myriad factors on the fidelity of paleoceanographic records, particularly for the Holocene.
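The model's central result, reproducibility controlled largely by the number of individuals per measurement, follows from averaging over individual-shell scatter. Below is a stripped-down Monte Carlo in the spirit of FIRM, not the model itself; all parameter values are hypothetical, with seasonality, depth habitat, and vital effects lumped into a single scatter term:

```python
import numpy as np

rng = np.random.default_rng(4)

def d18o_replicates(n_individuals, n_samples=5000, mu=-1.0, sigma=0.4):
    """Monte Carlo reproducibility of a bulk d18O measurement: each analysis
    averages n_individuals shells whose individual d18O values scatter around
    mu with standard deviation sigma (all values hypothetical)."""
    shells = rng.normal(mu, sigma, size=(n_samples, n_individuals))
    return shells.mean(axis=1)

spread_5 = d18o_replicates(5).std()    # replicate scatter, 5 shells per analysis
spread_30 = d18o_replicates(30).std()  # replicate scatter, 30 shells per analysis
```

The replicate scatter shrinks as 1/sqrt(n), so picking more individuals per measurement is the most direct lever on reproducibility, exactly as the experiments with FIRM report.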
Measurement, Sampling, and Equating Errors in Large-Scale Assessments
ERIC Educational Resources Information Center
Wu, Margaret
2010-01-01
In large-scale assessments, such as state-wide testing programs, national sample-based assessments, and international comparative studies, there are many steps involved in the measurement and reporting of student achievement. There are always sources of inaccuracies in each of the steps. It is of interest to identify the source and magnitude of…
Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G
NASA Astrophysics Data System (ADS)
DeSalvo, Riccardo
2015-06-01
Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similarly to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested.
Falavarjani, Khalil Ghasemi; Mehrpuya, Amirabbas; Amirkourjani, Foad
2017-02-01
To evaluate the effect of Topcon spectral domain optical coherence tomography (OCT) image quality on macular thickness measurements and the error rate in healthy subjects and patients with clinically significant diabetic macular edema (CSME). In this prospective, comparative case series, macular thickness measurements, and the rate of decentration and segmentation errors were evaluated before and after reducing the image quality factor (QF). The measurements were evaluated again after correcting the decentration and segmentation errors. To reduce the image QF below 45, tetracycline eye ointment was applied on the corneal surface. Forty eyes of 40 subjects including 18 healthy eyes and 22 eyes with CSME were included. In both groups, the difference in central subfield thickness measurements before and after reducing the image QF was not statistically significant both before and after error correction (all P>0.05). The rate of decentration error was statistically similar before and after reducing image QF in normal and CSME eyes (P=0.50, P=0.69, respectively). However, the rate of segmentation error was statistically significantly higher after reducing image QF both in normal and CSME eyes (P=0.008 and P=0.004, respectively). In both groups, eyes with a segmentation error had higher image QF reduction (both P=0.01). Reducing image quality results in a higher rate of the segmentation error in normal eyes and in eyes with CSME.
Error analysis in stereo vision for location measurement of 3D point
NASA Astrophysics Data System (ADS)
Li, Yunting; Zhang, Jun; Tian, Jinwen
2015-12-01
Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model, calculating the uncertainty region of the point location by intersecting the two fields of view of a pixel, which may produce loose bounds. Besides, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method to estimate the location error that takes most sources of error into account. We sum up and simplify all the input errors into five parameters by rotation transformation. Then we use the fast midpoint-method algorithm to deduce the mathematical relationships between the target point and the parameters. Thus, the expectation and covariance matrix of the 3D point location are obtained, which constitute the uncertainty region of the point location. Afterwards, we return to the error propagation of the primitive input errors in the stereo system, covering the whole analysis process from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our methods.
Nonparametric variance estimation in the analysis of microarray data: a measurement error approach.
Carroll, Raymond J; Wang, Yuedong
2008-01-01
This article investigates the effects of measurement error on the estimation of nonparametric variance functions. We show that either ignoring measurement error or direct application of the simulation extrapolation, SIMEX, method leads to inconsistent estimators. Nevertheless, the direct SIMEX method can reduce bias relative to a naive estimator. We further propose a permutation SIMEX method which leads to consistent estimators in theory. The performance of both SIMEX methods depends on approximations to the exact extrapolants. Simulations show that both SIMEX methods perform better than ignoring measurement error. The methodology is illustrated using microarray data from colon cancer patients.
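The (non-permutation) SIMEX procedure mentioned here can be sketched for the simpler problem of a regression slope: add extra noise at increasing multiples of the measurement-error variance, re-fit at each level, and extrapolate back to the no-error level λ = −1. The quadratic extrapolant is only approximate, which is the sense in which direct SIMEX reduces but does not remove bias; all data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000
x = rng.normal(0.0, 1.0, n)              # true covariate
y = 2.0 * x + rng.normal(0.0, 1.0, n)    # outcome, true slope = 2
sigma_u = 0.7
w = x + rng.normal(0.0, sigma_u, n)      # error-prone measurement

def slope(wx, yy):
    return np.polyfit(wx, yy, 1)[0]

# SIMEX: add extra noise at levels lam, fit, extrapolate back to lam = -1.
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
fits = []
for lam in lams:
    reps = [slope(w + rng.normal(0.0, np.sqrt(lam) * sigma_u, n), y)
            for _ in range(20)]
    fits.append(np.mean(reps))

coeffs = np.polyfit(lams, fits, 2)       # quadratic extrapolant
simex = np.polyval(coeffs, -1.0)         # SIMEX-corrected slope
naive = slope(w, y)                      # attenuated naive slope
```

The naive slope is attenuated well below 2, and the SIMEX extrapolation recovers most, but not all, of the bias, mirroring the article's point that direct SIMEX reduces bias relative to a naive estimator without being fully consistent.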
Position error correction in absolute surface measurement based on a multi-angle averaging method
NASA Astrophysics Data System (ADS)
Wang, Weibo; Wu, Biwei; Liu, Pengfei; Liu, Jian; Tan, Jiubin
2017-04-01
We present a method for position error correction in absolute surface measurement based on multi-angle averaging. Differences between shear rotation measurements in overlapping areas can be used to estimate the unknown relative position errors of the measurements. The model and the solution of the estimation algorithm are discussed in detail. The estimation algorithm adopts a least-squares technique to eliminate azimuthal errors caused by rotation inaccuracy. The cost functions can be minimized to determine the true values of the unknown Zernike polynomial coefficients and rotation angle. Experimental results show the validity of the proposed method.
Assessment of systematic measurement errors for acoustic travel-time tomography of the atmosphere.
Vecherin, Sergey N; Ostashev, Vladimir E; Wilson, D Keith
2013-09-01
Two algorithms are described for assessing systematic errors in acoustic travel-time tomography of the atmosphere, the goal of which is to reconstruct the temperature and wind velocity fields given the transducers' locations and the measured travel times of sound propagating between each speaker-microphone pair. The first algorithm aims at assessing the errors simultaneously with the mean field reconstruction. The second algorithm uses the results of the first algorithm to identify the ray paths corrupted by the systematic errors and then estimates these errors more accurately. Numerical simulations show that the first algorithm can improve the reconstruction when relatively small systematic errors are present in all paths. The second algorithm significantly improves the reconstruction when systematic errors are present in a few, but not all, ray paths. The developed algorithms were applied to experimental data obtained at the Boulder Atmospheric Observatory.
Using surrogate biomarkers to improve measurement error models in nutritional epidemiology.
Keogh, Ruth H; White, Ian R; Rodwell, Sheila A
2013-09-30
Nutritional epidemiology relies largely on self-reported measures of dietary intake, errors in which give biased estimated diet-disease associations. Self-reported measurements come from questionnaires and food records. Unbiased biomarkers are scarce; however, surrogate biomarkers, which are correlated with intake but not unbiased, can also be useful. It is important to quantify and correct for the effects of measurement error on diet-disease associations. Challenges arise because there is no gold standard, and errors in self-reported measurements are correlated with true intake and each other. We describe an extended model for error in questionnaire, food record, and surrogate biomarker measurements. The focus is on estimating the degree of bias in estimated diet-disease associations due to measurement error. In particular, we propose using sensitivity analyses to assess the impact of changes in values of model parameters which are usually assumed fixed. The methods are motivated by and applied to measures of fruit and vegetable intake from questionnaires, 7-day diet diaries, and surrogate biomarker (plasma vitamin C) from over 25000 participants in the Norfolk cohort of the European Prospective Investigation into Cancer and Nutrition. Our results show that the estimated effects of error in self-reported measurements are highly sensitive to model assumptions, resulting in anything from a large attenuation to a small amplification in the diet-disease association. Commonly made assumptions could result in a large overcorrection for the effects of measurement error. Increased understanding of relationships between potential surrogate biomarkers and true dietary intake is essential for obtaining good estimates of the effects of measurement error in self-reported measurements on observed diet-disease associations. Copyright © 2013 John Wiley & Sons, Ltd.
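The attenuation-and-correction logic underlying this kind of analysis can be shown in a few lines. The simulation below uses the textbook classical-error model, not the authors' extended model: a naive regression on an error-prone exposure shrinks the slope by the attenuation factor lambda, and any deattenuation is only as good as the assumed error variance.

```python
import numpy as np

# Illustrative only: true intake T, questionnaire measure Q with
# classical error, outcome Y with true slope beta = 0.3.
rng = np.random.default_rng(1)
n = 100_000
T = rng.normal(size=n)                   # true intake (standardized)
Q = T + rng.normal(scale=1.0, size=n)    # questionnaire, classical error
beta = 0.3
Y = beta * T + rng.normal(scale=1.0, size=n)

naive = np.cov(Q, Y)[0, 1] / np.var(Q)   # attenuated slope (~0.15)
lam = np.var(T) / (np.var(T) + 1.0**2)   # attenuation factor (~0.5)
corrected = naive / lam                  # deattenuated estimate (~0.3)
print(round(naive, 2), round(corrected, 2))
```

If the assumed error variance in `lam` is wrong, the "corrected" slope is over- or under-corrected, which is the sensitivity the paper probes.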
Lock-in amplifier error prediction and correction in frequency sweep measurements.
Sonnaillon, Maximiliano Osvaldo; Bonetto, Fabian Jose
2007-01-01
This article proposes an analytical algorithm for predicting errors in lock-in amplifiers (LIAs) working with time-varying reference frequency. Furthermore, a simple method for correcting such errors is presented. The reference frequency can be swept in order to measure the frequency response of a system within a given spectrum. The continuous variation of the reference frequency produces a measurement error that depends on three factors: the sweep speed, the LIA low-pass filters, and the frequency response of the measured system. The proposed error prediction algorithm is based on the final value theorem of the Laplace transform. The correction method uses a double-sweep measurement. A mathematical analysis is presented and validated with computational simulations and experimental measurements.
Intrinsic measurement errors for the speed of light in vacuum
NASA Astrophysics Data System (ADS)
Braun, Daniel; Schneiter, Fabienne; Fischer, Uwe R.
2017-09-01
The speed of light in vacuum, one of the most important and precisely measured natural constants, is fixed by convention to c = 299,792,458 m s⁻¹. Advanced theories predict possible deviations from this universal value, or even quantum fluctuations of c. Combining arguments from quantum parameter estimation theory and classical general relativity, we here establish rigorously the existence of lower bounds on the uncertainty to which the speed of light in vacuum can be determined in a given region of space-time, subject to several reasonable restrictions. They provide a novel perspective on the experimental falsifiability of predictions for the quantum fluctuations of space-time.
Proxies and Other External Raters: Methodological Considerations
Snow, A Lynn; Cook, Karon F; Lin, Pay-Shin; Morgan, Robert O; Magaziner, Jay
2005-01-01
Objective The purpose of this paper is to introduce researchers to the measurement and subsequent analysis considerations involved when using externally rated data. We will define and describe two categories of externally rated data, recommend methodological approaches for analyzing and interpreting data in these two categories, and explore factors affecting agreement between self-rated and externally rated reports. We conclude with a discussion of needs for future research. Data Sources/Study Setting Data sources for this paper are previous published studies and reviews comparing self-rated with externally rated data. Study Design/Data Collection/Extraction Methods This is a psychometric conceptual paper. Principal Findings We define two types of externally rated data: proxy data and other-rated data. Proxy data refer to those collected from someone who speaks for a patient who cannot, will not, or is unavailable to speak for him or herself, whereas we use the term other-rater data to refer to situations in which the researcher collects ratings from a person other than the patient to gain multiple perspectives on the assessed construct. These two types of data differ in the way the measurement model is defined, the definition of the gold standard against which the measurements are validated, the analysis strategies appropriately used, and how the analyses are interpreted. There are many factors affecting the discrepancies between self- and external ratings, including characteristics of the patient, the proxy, and of the rated construct. Several psychological theories can be helpful in predicting such discrepancies. Conclusions Externally rated data have an important place in health services research, but use of such data requires careful consideration of the nature of the data and how it will be analyzed and interpreted. PMID:16179002
Testing the plausibility of several a priori assumed error distributions for discharge measurements
NASA Astrophysics Data System (ADS)
Van Eerdenbrugh, Katrien; Verhoest, Niko E. C.
2017-04-01
Hydrologic measurements are used for a variety of research topics and operational projects. Regardless of the application, it is important to account for measurement uncertainty. In many projects, no local information is available about this uncertainty. Therefore, error distributions and accompanying parameters or uncertainty boundaries are often taken from literature without any knowledge about their applicability in the new context. In this research, an approach is proposed that uses relative differences between simultaneous discharge measurements to test the plausibility of several a priori assumed error distributions. For this test, simultaneous discharge measurements (measured with one type of device) from nine different Belgian rivers were available. This implies the assumption that their error distribution does not depend upon river, measurement location, and measurement team. Moreover, it is assumed that the errors of two simultaneous measurements are not mutually dependent. This data set does not allow for a direct assessment of measurement errors. However, independently of the value of the real discharge, the relative difference between two simultaneous measurements can be expressed by their relative measurement errors. If a distribution is assumed for these errors, it is thus possible to test equality between the distributions of both the relative differences of the simultaneously measured discharge pairs and a created set of relative differences based on two equally sized samples of measurement errors from the assumed distribution. If the assumed error distribution is correct, these two data sets will have the same distribution. In this research, equality is tested with a two-sample nonparametric Kolmogorov-Smirnov test. The resulting p-value and the corresponding value of the Kolmogorov-Smirnov statistic (KS statistic) are used for this evaluation. The occurrence of a high p-value (and corresponding small value of the KS statistic) provides no grounds to reject the assumed error distribution and thus supports its plausibility.
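The plausibility test described above can be sketched directly. The numbers below are illustrative, not the Belgian river data: relative differences of simulated simultaneous measurement pairs (true error scale 6%) are compared via a two-sample KS test against relative differences implied by several candidate error scales.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# "Observed" pairs: simulated with a 6% multiplicative error.
n_pairs = 200
true_err = rng.normal(0.0, 0.06, size=(n_pairs, 2))
q1, q2 = 1 + true_err[:, 0], 1 + true_err[:, 1]
observed_reldiff = (q1 - q2) / ((q1 + q2) / 2)

def reldiff_from_assumed(scale, size, rng):
    """Relative differences implied by an assumed N(0, scale) error."""
    e = rng.normal(0.0, scale, size=(size, 2))
    m1, m2 = 1 + e[:, 0], 1 + e[:, 1]
    return (m1 - m2) / ((m1 + m2) / 2)

results = {}
for assumed_scale in (0.02, 0.06, 0.15):
    sim = reldiff_from_assumed(assumed_scale, 5000, rng)
    ks, p = stats.ks_2samp(observed_reldiff, sim)
    results[assumed_scale] = (ks, p)
    print(f"assumed sigma={assumed_scale:.2f}: KS={ks:.3f}, p={p:.3f}")
```

The correct scale yields a small KS statistic (high p-value, no grounds for rejection), while scales that are too small or too large are clearly rejected.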
Estimation and Propagation of Errors in Ice Sheet Bed Elevation Measurements
NASA Astrophysics Data System (ADS)
Johnson, J. V.; Brinkerhoff, D.; Nowicki, S.; Plummer, J.; Sack, K.
2012-12-01
This work is presented in two parts. In the first, we use a numerical inversion technique to determine a "mass conserving bed" (MCB) and estimate errors in interpolation of the bed elevation. The MCB inversion technique adjusts the bed elevation to assure that the mass flux determined from surface velocity measurements does not violate conservation. Cross validation of the MCB technique is done using a subset of available flight lines. The unused flight lines provide data to compare to, quantifying the errors produced by MCB and other interpolation methods. MCB errors are found to be similar to those produced with more conventional interpolation schemes, such as kriging. However, MCB interpolation is consistent with the physics that govern ice sheet models. In the second part, a numerical model of glacial ice is used to propagate errors in bed elevation to the kinematic surface boundary condition. Initially, a control run is completed to establish the surface velocity produced by the model. The control surface velocity is subsequently used as a target for data inversions performed on perturbed versions of the control bed. The perturbation of the bed represents the magnitude of error in bed measurement. Through the inversion for traction, errors in bed measurement are propagated forward to investigate errors in the evolution of the free surface. Our primary conclusion relates the magnitude of errors in the surface evolution to errors in the bed. By linking free surface errors back to the errors in bed interpolation found in the first part, we can suggest an optimal spacing of the radar flight lines used in bed acquisition.
Shared uncertainty in measurement error problems, with application to Nevada Test Site fallout data.
Li, Yehua; Guolo, Annamaria; Hoffman, F Owen; Carroll, Raymond J
2007-12-01
In radiation epidemiology, it is often necessary to use mathematical models in the absence of direct measurements of individual doses. When complex models are used as surrogates for direct measurements to estimate individual doses that occurred almost 50 years ago, dose estimates will be associated with considerable error, this error being a mixture of (a) classical measurement error due to individual data such as diet histories and (b) Berkson measurement error associated with various aspects of the dosimetry system. In the Nevada Test Site (NTS) Thyroid Disease Study, the Berkson measurement errors are correlated within strata. This article concerns the development of statistical methods for inference about the risk of thyroid disease due to radiation dose, methods that account for the complex error structure inherent in the problem. Bayesian methods using Markov chain Monte Carlo and Monte Carlo expectation-maximization methods are described, with both sharing a key Metropolis-Hastings step. Regression calibration is also considered, but we show that regression calibration does not use the correlation structure of the Berkson errors. Our methods are applied to the NTS Study, where we find a strong dose-response relationship between dose and thyroiditis. We conclude that full consideration of mixtures of Berkson and classical uncertainties in reconstructed individual doses is important for quantifying the dose response and its credibility/confidence interval. Using regression calibration and expectation values for individual doses can lead to a substantial underestimation of the excess relative risk per gray and its 95% confidence intervals.
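The classical/Berkson distinction this abstract turns on is worth making concrete. In the hedged simulation below (illustrative parameters, not the NTS dosimetry), classical error (measurement = truth + noise) attenuates a naive regression slope, while pure Berkson error (truth = assigned value + noise) leaves the slope unbiased, though with wider uncertainty.

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta = 200_000, 0.5

# Classical: observed dose W = true dose X + noise.
X = rng.normal(10, 2, size=n)
W = X + rng.normal(0, 2, size=n)
Y = beta * X + rng.normal(0, 1, size=n)
slope_classical = np.cov(W, Y)[0, 1] / np.var(W)   # attenuated (~0.25)

# Berkson: true dose X = assigned dose Z + noise.
Z = rng.normal(10, 2, size=n)
X_b = Z + rng.normal(0, 2, size=n)
Y_b = beta * X_b + rng.normal(0, 1, size=n)
slope_berkson = np.cov(Z, Y_b)[0, 1] / np.var(Z)   # unbiased (~0.5)

print(round(slope_classical, 2), round(slope_berkson, 2))
```

Real dose-reconstruction errors are a mixture of both types, which is why the paper's joint treatment matters.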
Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error
NASA Astrophysics Data System (ADS)
Miller, Austin
In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.
Tilt Error in Cryospheric Surface Radiation Measurements at High Latitudes: A Model Study
NASA Astrophysics Data System (ADS)
Bogren, W.; Kylling, A.; Burkhart, J. F.
2015-12-01
We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in-situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response foreoptic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 nm to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1°, 3°, and 5° can respectively introduce up to 2.6%, 7.7%, and 12.8% error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo.
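A back-of-envelope version of the tilt effect follows from geometry alone (direct beam only, sun in the tilt plane): a sensor tilted by beta toward the sun at solar zenith angle sza reads cos(sza − beta) instead of cos(sza). This is a worst-case check, not the paper's full spectral simulation, which includes a diffuse component and therefore quotes somewhat smaller figures.

```python
import math

def direct_tilt_error(sza_deg, tilt_deg):
    """Relative error in measured direct irradiance, tilt toward the sun."""
    sza, tilt = math.radians(sza_deg), math.radians(tilt_deg)
    return math.cos(sza - tilt) / math.cos(sza) - 1.0

for tilt in (1, 3, 5):
    print(f"tilt {tilt} deg at SZA 60 deg: {direct_tilt_error(60, tilt):+.1%}")
```

At SZA 60° the direct-only errors come out at roughly +3%, +9%, and +15% for 1°, 3°, and 5° tilt, bracketing the abstract's full-simulation values from above.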
Measurement error associated with surveys of fish abundance in Lake Michigan
Krause, Ann E.; Hayes, Daniel B.; Bence, James R.; Madenjian, Charles P.; Stedman, Ralph M.
2002-01-01
In fisheries, imprecise measurements in catch data from surveys add uncertainty to the results of fishery stock assessments. The USGS Great Lakes Science Center (GLSC) began to survey the fall fish community of Lake Michigan in 1962 with bottom trawls. The measurement error was evaluated at the level of individual tows for nine fish species collected in this survey by applying a measurement-error regression model to replicated trawl data. It was found that the estimates of measurement-error variance ranged from 0.37 (deepwater sculpin, Myoxocephalus thompsoni) to 1.23 (alewife, Alosa pseudoharengus) on a logarithmic scale, corresponding to a coefficient of variation of 66% to 156%. The estimates appeared to increase with the range of temperature occupied by the fish species. This association may be a result of the variability in the fall thermal structure of the lake. The estimates may also be influenced by other factors, such as pelagic behavior and schooling. Measurement error might be reduced by surveying the fish community during other seasons and/or by using additional technologies, such as acoustics. Measurement-error estimates should be considered when interpreting results of assessments that use abundance information from USGS-GLSC surveys of Lake Michigan and could be used if the survey design was altered. This study is the first to report estimates of measurement-error variance associated with this survey.
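The variance-to-CV conversion implicit in the numbers above can be reproduced directly: for log-normal measurement error with variance sigma² on the log scale, CV = sqrt(exp(sigma²) − 1).

```python
import math

def cv_from_log_variance(sigma_sq):
    """CV of a log-normal error with variance sigma_sq on the log scale."""
    return math.sqrt(math.exp(sigma_sq) - 1.0)

print(f"{cv_from_log_variance(0.37):.0%}")   # deepwater sculpin
print(f"{cv_from_log_variance(1.23):.0%}")   # alewife
```

These evaluate to about 67% and 156%, matching the 66% to 156% range quoted in the abstract up to rounding.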
Error Analysis of Wind Measurements for the University of Illinois Sodium Doppler Temperature System
NASA Technical Reports Server (NTRS)
Pfenninger, W. Matthew; Papen, George C.
1992-01-01
Four-frequency lidar measurements of temperature and wind velocity require accurate frequency tuning to an absolute reference and long-term frequency stability. We quantify frequency tuning errors for the Illinois sodium system, which uses a sodium vapor cell to measure absolute frequencies and a reference interferometer to measure relative frequencies. To determine laser tuning errors, we monitor the vapor cell and interferometer during lidar data acquisition and analyze the two signals for variations as functions of time. Both the sodium cell and the interferometer are the same as those used to frequency tune the laser. By quantifying the frequency variations of the laser during data acquisition, an error analysis of temperature and wind measurements can be performed. These error bounds determine the confidence in the calculated temperatures and wind velocities.
Eccentricity error identification and compensation for high-accuracy 3D optical measurement.
He, Dong; Liu, Xiaoli; Peng, Xiang; Ding, Yabin; Gao, Bruce Z
2013-07-01
The circular target has been widely used in various three-dimensional optical measurements, such as camera calibration, photogrammetry and structured light projection measurement system. The identification and compensation of the circular target systematic eccentricity error caused by perspective projection is an important issue for ensuring accurate measurement. This paper introduces a novel approach for identifying and correcting the eccentricity error with the help of a concentric circles target. Compared with previous eccentricity error correction methods, our approach does not require taking care of the geometric parameters of the measurement system regarding target and camera. Therefore, the proposed approach is very flexible in practical applications, and in particular, it is also applicable in the case of only one image with a single target available. The experimental results are presented to prove the efficiency and stability of the proposed approach for eccentricity error compensation.
Violation of Heisenberg's error-disturbance uncertainty relation in neutron-spin measurements
NASA Astrophysics Data System (ADS)
Sulyok, Georg; Sponar, Stephan; Erhart, Jacqueline; Badurek, Gerald; Ozawa, Masanao; Hasegawa, Yuji
2013-08-01
In its original formulation, Heisenberg's uncertainty principle dealt with the relationship between the error of a quantum measurement and the thereby induced disturbance on the measured object. Meanwhile, Heisenberg's heuristic arguments have turned out to be correct only for special cases. An alternative universally valid relation was derived by Ozawa in 2003. Here, we demonstrate that Ozawa's predictions hold for projective neutron-spin measurements. The experimental inaccessibility of error and disturbance claimed elsewhere has been overcome using a tomographic method. By a systematic variation of experimental parameters in the entire configuration space, the physical behavior of error and disturbance for projective spin-1/2 measurements is illustrated comprehensively. The violation of Heisenberg's original relation, as well as the validity of Ozawa's relation, become manifest. In addition, our results show that the widespread assumption of a reciprocal relation between error and disturbance is not valid in general.
Error Measurements in an Acousto-Optic Tunable Filter Fiber Bragg Grating Sensor System
1994-05-01
Acousto-Optic Tunable Filter–Fiber Bragg Grating (AOTF-FBG) system. This analysis was targeted to investigate the measurement error in the AOTF-FBG system... Keywords: fiber Bragg grating, wavelength division multiplexing, acousto-optic tunable filter.
NASA Astrophysics Data System (ADS)
Hofbauer, E.; Rascher, R.; Friedke, F.; Kometer, R.
2017-06-01
The basic physical measurement principle in DaOS is the vignetting of a quasi-parallel light beam emitted by an expanded light source in an autocollimation arrangement. The beam is reflected by the surface under test, using invariant deflection by a moving and scanning pentaprism. Thereby nearly any curvature of the specimen is measurable. Resolution, systematic errors, and random errors are shown and explicitly discussed for the profile determination error. Measurements of a "plano-double-sombrero" device are analyzed and reconstructed to find the limit of resolution and the errors of the reconstruction model and algorithms. These measurements are compared critically against reference results recorded by interferometry and the Deflectometric Flatness Reference (DFR) method using a scanning penta device.
Small Inertial Measurement Units - Sources of Error and Limitations on Accuracy
NASA Technical Reports Server (NTRS)
Hoenk, M. E.
1994-01-01
Limits on the precision of small accelerometers for inertial measurement units are enumerated and discussed. Scaling laws and errors which affect the precision are discussed in terms of tradeoffs between size, sensitivity, and cost.
ERIC Educational Resources Information Center
Katch, Frank I.; Katch, Victor L.
1980-01-01
Sources of error in body composition assessment by laboratory and field methods can be found in hydrostatic weighing, residual air volume, skinfolds, and circumferences. Statistical analysis can and should be used in the measurement of body composition. (CJ)
A comparison between traditional and measurement-error growth models for weakfish Cynoscion regalis
Jiao, Yan
2016-01-01
Inferring growth for aquatic species is dependent upon accurate descriptions of age-length relationships, which may be degraded by measurement error in observed ages. Ageing error arises from biased and/or imprecise age determinations as a consequence of misinterpretation by readers or inability of ageing structures to accurately reflect true age. A Bayesian errors-in-variables (EIV) approach (i.e., measurement-error modeling) can account for ageing uncertainty during nonlinear growth curve estimation by allowing observed ages to be parametrically modeled as random deviates. Information on the latent age composition then comes from the specified prior distribution, which represents the true age structure of the sampled fish population. In this study, weakfish growth was modeled by means of traditional and measurement-error von Bertalanffy growth curves using otolith- or scale-estimated ages. Age determinations were assumed to be log-normally distributed, thereby incorporating multiplicative error with respect to ageing uncertainty. The prior distribution for true age was assumed to be uniformly distributed between ±4 of the observed age (yr) for each individual. Measurement-error growth models described weakfish that reached larger sizes but at slower rates, with median length-at-age being overestimated by traditional growth curves for the observed age range. In addition, measurement-error models produced slightly narrower credible intervals for parameters of the von Bertalanffy growth function, which may be an artifact of the specified prior distributions. Subjectivity is always apparent in the ageing of fishes and it is recommended that measurement-error growth models be used in conjunction with otolith-estimated ages to accurately capture the age-length relationship that is subsequently used in fisheries stock assessment and management. PMID:27688963
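For reference, the growth model named above is the standard von Bertalanffy function L(t) = L_inf · (1 − exp(−k · (t − t0))). The parameter values below are illustrative, not the weakfish estimates.

```python
import numpy as np

def von_bertalanffy(age, L_inf, k, t0):
    """Von Bertalanffy growth: length at age, asymptote L_inf."""
    return L_inf * (1.0 - np.exp(-k * (age - t0)))

# Illustrative parameters: asymptotic length 80 cm, growth rate 0.3/yr.
ages = np.arange(1, 9)
lengths = von_bertalanffy(ages, L_inf=80.0, k=0.3, t0=-0.5)
print(np.round(lengths, 1))
```

A measurement-error (errors-in-variables) fit treats each observed age as a noisy draw around a latent true age, rather than plugging it into this curve directly; that is the difference the abstract's comparison quantifies.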
Image pre-filtering for measurement error reduction in digital image correlation
NASA Astrophysics Data System (ADS)
Zhou, Yihao; Sun, Chen; Song, Yuntao; Chen, Jubing
2015-02-01
In digital image correlation, the sub-pixel intensity interpolation causes a systematic error in the measured displacements. The error increases toward high-frequency components of the speckle pattern. In practice, a captured image is usually corrupted by additive white noise. The noise introduces additional energy in the high frequencies and therefore raises the systematic error. Meanwhile, the noise also elevates the random error, which increases with the noise power. In order to reduce the systematic error and the random error of the measurements, we apply a pre-filtering to the images prior to the correlation so that the high-frequency contents are suppressed. Two spatial-domain filters (binomial and Gaussian) and two frequency-domain filters (Butterworth and Wiener) are tested on speckle images undergoing both simulated and real-world translations. By evaluating the errors of the various combinations of speckle patterns, interpolators, noise levels, and filter configurations, we come to the following conclusions. All four filters are able to reduce the systematic error. Meanwhile, the random error can also be reduced if the signal power is mainly distributed around DC. For high-frequency speckle patterns, the low-pass filters (binomial, Gaussian, and Butterworth) slightly increase the random error, and the Butterworth filter produces the lowest random error among them. By using the Wiener filter with over-estimated noise power, the random error can be reduced, but the resultant systematic error is higher than that of the low-pass filters. In general, the Butterworth filter is recommended for error reduction due to its flexibility of passband selection and maximal preservation of the allowed frequencies. The binomial filter enables efficient implementation and thus becomes a good option if computational cost is a critical issue. When used together with pre-filtering, the B-spline interpolator produces lower systematic error than the bicubic interpolator and a similar level of random error.
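The binomial option mentioned above is simple enough to sketch: a separable 3x3 binomial kernel applied before correlation suppresses the high frequencies that drive interpolation bias. The image here is synthetic noise standing in for a real speckle pattern; this is an illustration of the filter, not the paper's full DIC pipeline.

```python
import numpy as np

def binomial_filter(img):
    """Separable 3x3 binomial low-pass filter with edge padding."""
    k = np.array([1.0, 2.0, 1.0]) / 4.0          # 1D binomial kernel
    pad = np.pad(img, 1, mode="edge")
    # Convolve along columns, then along rows.
    cols = k[0] * pad[:, :-2] + k[1] * pad[:, 1:-1] + k[2] * pad[:, 2:]
    return k[0] * cols[:-2, :] + k[1] * cols[1:-1, :] + k[2] * cols[2:, :]

rng = np.random.default_rng(4)
speckle = rng.random((64, 64))
smoothed = binomial_filter(speckle)
print(speckle.std(), smoothed.std())             # high frequencies damped
```

The kernel sums to one, so mean intensity is preserved while high-frequency variance is reduced; this is why it is the cheap choice when computational cost matters.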
Li, Tao; Yuan, Gannan; Li, Wang
2016-01-01
The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition. PMID:26999130
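The small-angle DCM simplification criticized above is easy to quantify numerically. The sketch below (illustrative angles, not the MGWD system) compares the exact rotation matrix from Rodrigues' formula with the first-order approximation I + [phi]x.

```python
import numpy as np

def skew(v):
    """Skew-symmetric cross-product matrix [v]x."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def exact_dcm(phi):
    """Rodrigues' formula for the rotation by vector phi (phi != 0)."""
    a = np.linalg.norm(phi)
    K = skew(phi / a)
    return np.eye(3) + np.sin(a) * K + (1 - np.cos(a)) * (K @ K)

errs = []
for deg in (1, 10, 45):
    phi = np.radians([deg, 0.0, 0.0])
    approx = np.eye(3) + skew(phi)       # small-angle approximation
    err = np.max(np.abs(exact_dcm(phi) - approx))
    errs.append(err)
    print(f"{deg:2d} deg attitude error: max DCM element error {err:.4f}")
```

At 1° the approximation is excellent, but by tens of degrees the element-wise error becomes large, which is the regime where a nonlinear error model pays off.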
On the errors in measuring the particle density by the light absorption method
Ochkin, V. N.
2015-04-15
The accuracy of absorption measurements of the density of particles in a given quantum state as a function of the light absorption coefficient is analyzed. Errors caused by the finite accuracy in measuring the intensity of the light passing through a medium in the presence of different types of noise in the recorded signal are considered. Optimal values of the absorption coefficient and the factors capable of multiplying errors when deviating from these values are determined.
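One classic instance of the trade-off analyzed here can be sketched: with additive detector noise on the transmitted intensity, the relative error of the retrieved density N ∝ tau = ln(I0/I) scales as exp(tau)/tau, which is minimized at optical depth tau = 1. This is the textbook signal-independent-noise case; other noise models shift the optimum, as the paper discusses.

```python
import numpy as np

# Relative density error up to a constant factor delta_I / I0:
# d(tau) = delta_I / I = (delta_I / I0) * exp(tau), so dN/N ~ exp(tau)/tau.
tau = np.linspace(0.05, 4.0, 400)
rel_error = np.exp(tau) / tau
tau_opt = tau[np.argmin(rel_error)]
print(round(tau_opt, 2))
```

The minimum sits at optical depth ~1 (transmission 1/e), and the curve's steep rise on either side shows how quickly errors multiply away from the optimum.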
Phase-modulation method for AWG phase-error measurement in the frequency domain.
Takada, Kazumasa; Hirose, Tomohiro
2009-12-15
We report a phase-modulation method for measuring arrayed waveguide grating (AWG) phase error in the frequency domain. By combining the method with a digital sampling technique that we have already reported, we can measure the phase error within an accuracy of ±0.055 rad for the center 90% of waveguides in the array, even when no carrier frequencies are generated in the beat signal from the interferometer.
NASA Astrophysics Data System (ADS)
Houser, Donald R.; Oswald, Fred B.; Valco, Mark J.; Drago, Raymond J.; Lenski, Joseph W., Jr.
1994-06-01
Measured sound power data from eight different spur, single and double helical gear designs are compared with predictions of transmission error by the Load Distribution Program. The sound power data was taken from the recent Army-funded Advanced Rotorcraft Transmission project. Tests were conducted in the NASA gear noise rig. Results of both test data and transmission error predictions are made for each harmonic of mesh frequency at several operating conditions. In general, the transmission error predictions compare favorably with the measured noise levels.
The effect of awareness of measurement error on physical therapists' confidence in their decisions.
Hayes, K W
1992-07-01
This study examined whether physical therapists understand the meaning of measurement error and whether information about measurement error affects their decisions. One of four versions of two physical therapy problems was mailed to 500 randomly selected physical therapists. Therapists were asked to define reliability and error of measurement, to estimate the error of measurement of two assessments, and to make decisions about an intervention based on specific measurements. They were also asked to rate their confidence in those decisions. Problems varied on the presence or absence of measurement information and on the difference between an observed measurement and a criterion measurement against which the observed measurement must be compared to make a decision. The response rate was 62%; respondents represented a typical profile of practicing physical therapists. The therapists understood reliability, but they did not correctly describe the relationship between reliability and error of measurement. Their estimates of the error of measurement of the two assessments were reasonable for only one procedure. The presence or absence of measurement information and difference between observed and criterion measurements affected their confidence, albeit inappropriately, in only one problem. Confidence was not affected by the therapists' level of experience, type of reading, formal study, or degree earned. Therapists responded to the two problems differently. The problems involved different measures, roles, utilities, and structures. The process of decision making does not generalize to all decision types. Measurement principles and strategies of use in decision making must be emphasized in physical therapy curricula so that physical therapists can consider the quality of their assessment data in making clinical decisions.
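The reliability-error relationship the respondents struggled with is a one-line formula. A hedged sketch of the standard psychometric computation (not taken from the survey itself; the SD and ICC values are hypothetical):

```python
import math

# Standard error of measurement from a reliability coefficient:
#   SEM = SD * sqrt(1 - r)
# High reliability does not mean negligible error if the SD is large.
def sem(sd, reliability):
    return sd * math.sqrt(1.0 - reliability)

# hypothetical goniometric ROM measurement: SD = 8 degrees, ICC = 0.90
print(round(sem(8.0, 0.90), 2))  # 2.53 degrees
# a ~95% band around a single observed score spans roughly ±1.96 * SEM
print(round(1.96 * sem(8.0, 0.90), 1))  # 5.0 degrees
```

Even with a "good" reliability of 0.90, a single observed score carries an uncertainty band of several degrees, which is exactly the kind of information the study found therapists did not incorporate into their decisions.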
Almeida, Gustavo J.; Schroeder, Carolyn A.; Gil, Alexandra B.; Fitzgerald, G. Kelley; Piva, Sara R.
2010-01-01
Objective 1) To determine the inter-rater reliability and measurement error of an 11-step stair ascend/descend test (STTotal-11) and stair up (ascend) test (STUp-11); 2) to seek evidence for the STTotal-11 and STUp-11 as valid measures of physical function by determining whether they relate to measures of physical function and do not relate to measures of other constructs; and 3) to explore whether the STTotal-11 and STUp-11 scores relate to lower extremity muscle weakness and knee range of motion (ROM) in individuals with total knee arthroplasty (TKA). Design Cross-sectional study. Setting Academic center. Participants Subjects (N=43, 30 women; mean age, 68±8 years) with unilateral TKA. Interventions Not applicable. Main Outcome Measures STTotal-11 and STUp-11 were performed twice and scores were compared to scores on 4 lower extremity performance-based tasks, 2 patient-reported questionnaires of physical function, 3 psychological factors, knee ROM, and strength of the quadriceps, hip extensors, and hip abductors. Results The intraclass correlation coefficient was 0.94 for both the STTotal-11 and STUp-11, the standard errors of measurement were 1.14 s and 0.82 s, and the minimum detectable changes at the 90% CI were 2.6 s and 1.9 s, respectively. Correlations between the stair tests and performance-based measures and knee and hip muscle strength ranged from r=.40 to .78. STTotal-11 and STUp-11 had a small correlation with one of the patient-reported measures of physical function. The stair tests were not associated with psychological factors or knee extension ROM, and were associated with knee flexion ROM. Conclusions STTotal-11 and STUp-11 have good inter-rater reliability and MDCs adequate for clinical use. The pattern of associations supports the validity of the stair tests in TKA. PMID:20510986
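The MDC values in the abstract follow from the reported SEMs by a standard formula; a sketch (the exact z-value the authors used is an assumption):

```python
import math

# Minimal detectable change at the 90% confidence level:
#   MDC90 = z90 * sqrt(2) * SEM,  with z90 ≈ 1.645
# The sqrt(2) accounts for measurement error in both test and retest scores.
def mdc90(sem):
    return 1.645 * math.sqrt(2.0) * sem

print(round(mdc90(1.14), 2))  # 2.65 s, close to the reported 2.6 s
print(round(mdc90(0.82), 2))  # 1.91 s, close to the reported 1.9 s
```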
Vasquez, Victor R; Whiting, Wallace B
2005-12-01
A Monte Carlo method is presented to study the effect of systematic and random errors on computer models that deal mainly with experimental data. A common assumption in such models (linear and nonlinear regression, and non-regression computer models) involving experimental measurements is that the error sources are mainly random and independent, with no constant background (systematic) errors. However, comparisons of different experimental data sources often reveal evidence of significant bias or calibration errors. The uncertainty analysis approach presented in this work is based on the analysis of cumulative probability distributions for the output variables of the models involved, taking into account the effect of both types of errors. The probability distributions are obtained by Monte Carlo simulation coupled with appropriate definitions of the random and systematic errors. The main objectives are to detect the error source with stochastic dominance on the uncertainty propagation and the combined effect on the output variables of the models. The case studies analyzed show that the approach can distinguish which error type has the more significant effect on the performance of the model. It was also found that systematic or calibration errors, if present, cannot be neglected in the uncertainty analysis of models dependent on experimental measurements, such as chemical and physical properties. The approach can be used to facilitate decision making in fields related to safety factor selection, modeling, experimental data measurement, and experimental design.
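The gist of the approach can be sketched in a few lines of Monte Carlo (the toy model and error magnitudes below are illustrative assumptions, not the paper's case studies):

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    """Toy nonlinear computer model of a measured quantity."""
    return 2.0 * x ** 1.5

x_true, n = 10.0, 100_000
random_err = rng.normal(0.0, 0.2, n)        # zero-mean random error
systematic_err = rng.uniform(-0.5, 0.5, n)  # unknown calibration bias, sampled per draw

# Output distributions with and without the systematic component
y_random = model(x_true + random_err)
y_both = model(x_true + random_err + systematic_err)

# The systematic component dominates the output spread here, so neglecting
# it understates the model's uncertainty.
print(np.std(y_both) > np.std(y_random))  # True
```

Comparing the cumulative distributions of `y_random` and `y_both` is the kind of stochastic-dominance check the abstract describes for deciding which error type matters most.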
Direct Behavior Rating: Considerations for Rater Accuracy
ERIC Educational Resources Information Center
Harrison, Sayward E.; Riley-Tillman, T. Chris; Chafouleas, Sandra M.
2014-01-01
Direct behavior rating (DBR) offers users a flexible, feasible method for the collection of behavioral data. Previous research has supported the validity of using DBR to rate three target behaviors: academic engagement, disruptive behavior, and compliance. However, the effect of the base rate of behavior on rater accuracy has not been established.…
Functional and Structural Methods with Mixed Measurement Error and Misclassification in Covariates.
Yi, Grace Y; Ma, Yanyuan; Spiegelman, Donna; Carroll, Raymond J
2015-06-01
Covariate measurement imprecision or errors arise frequently in many areas. It is well known that ignoring such errors can substantially degrade the quality of inference or even yield erroneous results. Although in practice both covariates subject to measurement error and covariates subject to misclassification can occur, research attention in the literature has mainly focused on addressing either one of these problems separately. To fill this gap, we develop estimation and inference methods that accommodate both characteristics simultaneously. Specifically, we consider measurement error and misclassification in generalized linear models under the scenario that an external validation study is available, and systematically develop a number of effective functional and structural methods. Our methods can be applied to different situations to meet various objectives.
Is the Parkinson Anxiety Scale comparable across raters?
Forjaz, Maria João; Ayala, Alba; Martinez-Martin, Pablo; Dujardin, Kathy; Pontone, Gregory M; Starkstein, Sergio E; Weintraub, Daniel; Leentjens, Albert F G
2015-04-01
The Parkinson Anxiety Scale is a new scale developed to measure anxiety severity in Parkinson's disease specifically. It consists of three dimensions: persistent anxiety, episodic anxiety, and avoidance behavior. This study aimed to assess the measurement properties of the scale while controlling for the rater (self- vs. clinician-rated) effect. The Parkinson Anxiety Scale was administered to a cross-sectional multicenter international sample of 362 Parkinson's disease patients. Both patients and clinicians rated the patient's anxiety independently. A many-facet Rasch model design was applied to estimate and remove the rater effect. The following measurement properties were assessed: fit to the Rasch model, unidimensionality, reliability, differential item functioning, item local independency, interrater reliability (self or clinician), and scale targeting. In addition, test-retest stability, construct validity, precision, and diagnostic properties of the Parkinson Anxiety Scale were also analyzed. A good fit to the Rasch model was obtained for Parkinson Anxiety Scale dimensions A and B, after the removal of one item and rescoring of the response scale for certain items, whereas dimension C showed marginal fit. Self versus clinician rating differences were of small magnitude, with patients reporting higher anxiety levels than clinicians. The linear measure for Parkinson Anxiety Scale dimensions A and B showed good convergent construct validity with other anxiety measures and good diagnostic properties. Parkinson Anxiety Scale modified dimensions A and B provide valid and reliable measures of anxiety in Parkinson's disease that are comparable across raters. Further studies are needed with dimension C. © 2014 International Parkinson and Movement Disorder Society.
Error analysis of cine phase contrast MRI velocity measurements used for strain calculation.
Jensen, Elisabeth R; Morrow, Duane A; Felmlee, Joel P; Odegard, Gregory M; Kaufman, Kenton R
2015-01-02
Cine Phase Contrast (CPC) MRI offers unique insight into localized skeletal muscle behavior by providing the ability to quantify muscle strain distribution during cyclic motion. Muscle strain is obtained by temporally integrating and spatially differentiating CPC-encoded velocity. The aim of this study was to quantify CPC measurement accuracy and precision and to describe error propagation into displacement and strain. Using an MRI-compatible jig to move a B-gel phantom within a 1.5 T MRI bore, CPC-encoded velocities were collected. The three orthogonal encoding gradients (through-plane, frequency, and phase) were evaluated independently in post-processing. Two types of systematic error were corrected: eddy current-induced bias and calibration-type error. Measurement accuracy and precision were quantified before and after removal of systematic error. Through-plane- and frequency-encoded data accuracy was within 0.4 mm/s after removal of systematic error, a 70% improvement over the raw data. Corrected phase-encoded data accuracy was within 1.3 mm/s. Measured random error was between 1 and 1.4 mm/s, which followed the theoretical prediction. Propagation of random measurement error into displacement and strain was found to depend on the number of tracked time segments, time segment duration, mesh size, and dimensional order. To verify this, theoretical predictions were compared to experimentally calculated displacement and strain errors. For the parameters tested, experimental and theoretical results aligned well. Random strain error approximately halved with a two-fold mesh size increase, as predicted. Displacement and strain accuracy were within 2.6 mm and 3.3%, respectively. These results can be used to predict the accuracy and precision of displacement and strain in user-specific applications. Copyright © 2014 Elsevier Ltd. All rights reserved.
Statistical and systematic errors in redshift-space distortion measurements from large surveys
NASA Astrophysics Data System (ADS)
Bianchi, D.; Guzzo, L.; Branchini, E.; Majerotto, E.; de la Torre, S.; Marulli, F.; Moscardini, L.; Angulo, R. E.
2012-12-01
We investigate the impact of statistical and systematic errors on measurements of linear redshift-space distortions (RSD) in future cosmological surveys by analysing large catalogues of dark matter haloes from the baryonic acoustic oscillation simulations at the Institute for Computational Cosmology. These allow us to estimate the dependence of errors on typical survey properties, as volume, galaxy density and mass (i.e. bias factor) of the adopted tracer. We find that measures of the specific growth rate β = f/b using the Hamilton/Kaiser harmonic expansion of the redshift-space correlation function ξ(rp, π) on scales larger than 3 h-1 Mpc are typically underestimated by up to 10 per cent for galaxy-sized haloes. This is significantly larger than the corresponding statistical errors, which amount to a few per cent, indicating the importance of non-linear improvements to the Kaiser model, to obtain accurate measurements of the growth rate. The systematic error shows a diminishing trend with increasing bias value (i.e. mass) of the haloes considered. We compare the amplitude and trends of statistical errors as a function of survey parameters to predictions obtained with the Fisher information matrix technique. This is what is usually adopted to produce RSD forecasts, based on the Feldman-Kaiser-Peacock prescription for the errors on the power spectrum. We show that this produces parameter errors fairly similar to the standard deviations from the halo catalogues, provided it is applied to strictly linear scales in Fourier space (k<0.2 h Mpc-1). Finally, we combine our measurements to define and calibrate an accurate scaling formula for the relative error on β as a function of the same parameters, which closely matches the simulation results in all explored regimes. This provides a handy and plausibly more realistic alternative to the Fisher matrix approach, to quickly and accurately predict statistical errors on RSD expected from future surveys.
Examining rating scales using Rasch and Mokken models for rater-mediated assessments.
Wind, Stephanie A
2014-01-01
A variety of methods for evaluating the psychometric quality of rater-mediated assessments have been proposed, including rater effects based on latent trait models (e.g., Engelhard, 2013; Wolfe, 2009). Although information about rater effects contributes to the interpretation and use of rater-assigned scores, it is also important to consider ratings in terms of the structure of the rating scale on which scores are assigned. Further, concern with the validity of rater-assigned scores necessitates investigation of these quality control indices within student subgroups, such as gender, language, and race/ethnicity groups. Using a set of guidelines for evaluating the interpretation and use of rating scales adapted from Linacre (1999, 2004), this study demonstrates methods that can be used to examine rating scale functioning within and across student subgroups with indicators from Rasch measurement theory (Rasch, 1960) and Mokken scale analysis (Mokken, 1971). Specifically, this study illustrates indices of rating scale effectiveness based on Rasch models and models adapted from Mokken scaling, and considers whether the two approaches to evaluating the interpretation and use of rating scales lead to comparable conclusions within the context of a large-scale rater-mediated writing assessment. Major findings suggest that indices of rating scale effectiveness based on a parametric and nonparametric approach provide related, but slightly different, information about the structure of rating scales. Implications for research, theory, and practice are discussed.
Biggs, Adam T
2017-07-01
Visual search studies are common in cognitive psychology, and the results generally focus upon accuracy, response times, or both. Most research has focused upon search scenarios where no more than 1 target will be present for any single trial. However, if multiple targets can be present on a single trial, it introduces an additional source of error because the found target can interfere with subsequent search performance. These errors have been studied thoroughly in radiology for decades, although their emphasis in cognitive psychology studies has been more recent. One particular issue with multiple-target search is that these subsequent search errors (i.e., specific errors which occur following a found target) are measured differently by different studies. There is currently no guidance as to which measurement method is best or what impact different measurement methods could have upon various results and conclusions. The current investigation provides two efforts to address these issues. First, the existing literature is reviewed to clarify the appropriate scenarios where subsequent search errors could be observed. Second, several different measurement methods are used with several existing datasets to contrast and compare how each method would have affected the results and conclusions of those studies. The evidence is then used to provide appropriate guidelines for measuring multiple-target search errors in future studies.
Experimental Test of Error-Disturbance Uncertainty Relations by Weak Measurement
NASA Astrophysics Data System (ADS)
Kaneda, Fumihiro; Baek, So-Young; Ozawa, Masanao; Edamatsu, Keiichi
2014-01-01
We experimentally test the error-disturbance uncertainty relation (EDR) in generalized, strength-variable measurement of a single photon polarization qubit, making use of weak measurement that keeps the initial signal state practically unchanged. We demonstrate that the Heisenberg EDR is violated, yet the Ozawa and Branciard EDRs are valid throughout the range of our measurement strength.
A first look at measurement error on FIA plots using blind plots in the Pacific Northwest
Susanna Melson; David Azuma; Jeremy S. Fried
2002-01-01
Measurement error in the Forest Inventory and Analysis work of the Pacific Northwest Station was estimated with a recently implemented blind plot measurement protocol. A small subset of plots was revisited by a crew having limited knowledge of the first crew's measurements. This preliminary analysis of the first 18 months' blind plot data indicates that...
NASA Astrophysics Data System (ADS)
Yang, Liangen; Wang, Xuanze; Lv, Wei
2010-12-01
A displacement sensor with controlled measuring force, together with its error analysis and precision verification, is discussed in this paper. The displacement sensor consists of a high-resolution electric induction transducer and a voice coil motor (VCM). The measuring principles, structure, method for enlarging the measuring range, and signal processing of the sensor are discussed. The main error sources, such as parallelism error and inclination of the framework due to unequal leaf spring lengths, rigidity of the measuring rods, shape error of the stylus, friction between the iron core and other parts, damping of the leaf springs, voltage variation, linearity of the induction transducer, resolution, and stability, are analyzed. A measuring system for surface topography with a large measuring range is constructed based on the displacement sensor and a 2D moving platform. The measuring precision and stability of the system are verified. The measuring force of the sensor during surface topography measurement can be controlled at the μN level and hardly changes. The system has been used in measurements of bearing balls, bullet marks, etc. It has a measuring range of up to 2 mm and nanometer-level precision.
The estimation error covariance matrix for the ideal state reconstructor with measurement noise
NASA Technical Reports Server (NTRS)
Polites, Michael E.
1988-01-01
A general expression is derived for the state estimation error covariance matrix for the Ideal State Reconstructor when the input measurements are corrupted by measurement noise. An example is presented which shows that the more measurements used in estimating the state at a given time, the better the estimator.
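The closing observation, that more measurements yield a better estimator, is the familiar 1/n variance scaling; a quick sketch (illustrative averaging, not the Ideal State Reconstructor itself):

```python
import numpy as np

rng = np.random.default_rng(1)

# For i.i.d. measurement noise with variance sigma^2, the variance of the
# estimate obtained by averaging n measurements is sigma^2 / n.
sigma = 1.0
for n in (1, 4, 16):
    est = rng.normal(0.0, sigma, size=(50_000, n)).mean(axis=1)
    print(n, round(est.var(), 3))  # shrinks roughly as 1/n
```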
de Araujo, T L; Arcuri, E A; Martins, E
1998-04-01
According to the International Council of Nurses, the measurement of blood pressure is the procedure most frequently performed by nurses worldwide. The aim of this study is to analyse the controversial aspects of the instruments used in blood pressure measurement. Based on an analysis of the literature and the American Heart Association recommendations, the main sources of error when measuring blood pressure are discussed.
How reproducibly can human ear ossicles be measured? A study of inter-observer error.
Flohr, Stefan; Leckelt, Jasmin; Kierdorf, Uwe; Kierdorf, Horst
2010-12-01
Ear ossicles have thus far received little attention in biological anthropology. For the use of these bones as a source of biological information, it is important to know how reproducibly they can be measured. We determined inter-observer errors for measurements recorded by two observers on mallei (N = 119) and incudes (N = 124) obtained from human skeletons recovered from an early medieval cemetery in southern Germany. Measurements were taken on-screen on images of the bones obtained with a digital microscope. In the case of separately acquired images, mean inter-observer error ranged between 0.50 and 9.59% (average: 2.63%) for malleus measurements and between 0.67 and 7.11% (average: 2.01%) for incus measurements. Coefficients of reliability ranged between 0.72 and 0.99 for the malleus measurements and between 0.61 and 0.98 for those of the incus. Except for one incus measurement, readings performed by the two observers on the same set of photographs produced lower inter-observer errors and higher coefficients of reliability than the method involving separate acquisition of images by the observers. Across all linear measurements, absolute inter-observer error was independent of the mean size of the measured variable for both bones. So far, studies on human ear ossicles have largely neglected the issue of measurement error and its potential implication for the interpretation of the data. Knowledge of measurement error is of special importance if results obtained by different researchers are combined into a single database. It is, therefore, suggested that the reproducibility of measurements should be addressed in all future studies of ear ossicles.
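The percentage errors reported here are in the family of the relative technical error of measurement (TEM), a standard inter-observer statistic. A hedged sketch of that computation (the paired readings are hypothetical, not data from the study):

```python
import math

def tem(pairs):
    """Technical error of measurement for paired readings by two observers."""
    return math.sqrt(sum((a - b) ** 2 for a, b in pairs) / (2 * len(pairs)))

def relative_tem(pairs):
    """TEM expressed as a percentage of the grand mean."""
    grand_mean = sum(a + b for a, b in pairs) / (2 * len(pairs))
    return 100.0 * tem(pairs) / grand_mean

# hypothetical malleus length readings (mm) by observers 1 and 2
pairs = [(7.9, 8.0), (8.1, 8.1), (7.8, 7.7), (8.3, 8.2)]
print(round(relative_tem(pairs), 2))  # 0.76 (%)
```

Reporting a relative error like this, alongside a reliability coefficient, is what makes measurements from different researchers safely combinable into one database, the abstract's closing concern.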
Henry, Sharon M.; Van Dillen, Linda R.; Trombley, Andrea R.; Dee, Justine M.; Bunn, Janice Y.
2013-01-01
Observational cross-sectional study. To examine the inter-rater reliability of novice raters in using the Movement System Impairment (MSI) classification approach and to explore the patterns of disagreement in classification errors. The inter-rater reliability of individual test items used in the MSI approach is moderate to good; however, the reliability of the classification algorithm has been tested only preliminarily. Using previously recorded patient data (n = 21), 13 novice raters classified patients according to the MSI schema. The overall percent agreement, assessed using the kappa statistic, as well as the agreement/disagreement among pair-wise comparisons of classification assignments, were examined. There was an overall 87.4% agreement in the pairs of classification judgments, with a kappa coefficient of 0.81 (95% CI: 0.79, 0.83). Raters were most likely to agree on the classification of Flexion (100%) and least likely to agree on the classification of Rotation (84%). The MSI classification algorithm can be learned by novice users, and with training their inter-rater reliability in applying the algorithm for classification judgments is good and similar to that reported in other studies. However, some degree of error persists in the classification decision-making associated with the MSI system, in particular for the Rotation category. PMID:22796388
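The kappa coefficient quoted above corrects raw percent agreement for the agreement expected by chance. A minimal sketch of Cohen's kappa for two raters (the contingency table is invented for illustration, not the study's data):

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix of two raters' labels."""
    k = len(confusion)
    total = sum(sum(row) for row in confusion)
    p_obs = sum(confusion[i][i] for i in range(k)) / total
    row = [sum(r) for r in confusion]
    col = [sum(confusion[i][j] for i in range(k)) for j in range(k)]
    p_exp = sum(r * c for r, c in zip(row, col)) / total ** 2
    return (p_obs - p_exp) / (1.0 - p_exp)

# hypothetical table over three MSI categories (rows: rater A, cols: rater B)
table = [[20, 2, 1],
         [3, 15, 2],
         [1, 1, 18]]
print(round(cohens_kappa(table), 2))  # 0.76
```

Here raw agreement is 84%, but kappa is lower because some of that agreement would occur by chance, the same distinction behind the abstract's 87.4% agreement vs. kappa of 0.81.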
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-10
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines, and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada published in 2000 on multiplicative error models to the analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements, and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEMs) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative measurement errors on DEM construction and on the estimate of landslide mass volume from the constructed DEM.
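The distinction between multiplicative and additive errors is easy to see in simulation. A hedged sketch (elevations and noise level are illustrative, not the paper's landslide model):

```python
import numpy as np

rng = np.random.default_rng(2)

# Multiplicative model: y = x * (1 + e), so the error scales with the
# true value -- unlike the additive model y = x + e, where it does not.
x = np.array([10.0, 100.0, 1000.0])            # true elevations (m)
e = rng.normal(0.0, 0.01, size=(100_000, 3))   # 1% relative noise

spread = (x * (1.0 + e)).std(axis=0)
print(np.round(spread, 1))  # roughly [0.1, 1.0, 10.0]: error grows with x
```

Treating such data as if the noise were additive (constant spread everywhere) misweights the large values, which is the pitfall the abstract raises for LiDAR-based DEMs.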
Phase error analysis and compensation considering ambient light for phase measuring profilometry
NASA Astrophysics Data System (ADS)
Zhou, Ping; Liu, Xinran; He, Yi; Zhu, Tongjing
2014-04-01
The accuracy of a phase measuring profilometry (PMP) system based on the phase-shifting method is inevitably affected by the gamma non-linearity of the projector-camera pair and by uncertain ambient light. Although many gamma models and phase error compensation methods have been developed, the effect of ambient light has never been made explicit. In this paper, we perform a theoretical analysis and experiments on phase error compensation that account for both gamma non-linearity and uncertain ambient light. First, a mathematical phase error model is proposed to explain in detail how the phase error arises. We show that the phase error is related not only to the gamma non-linearity of the projector-camera pair, but also to the ratio of intensity modulation to average intensity in the fringe patterns captured by the camera, which is affected by the ambient light. We then propose an accurate phase error compensation algorithm based on this model, in which the relationship between phase error and ambient light is made explicit. Experimental results with a four-step phase-shifting PMP system show that the proposed algorithm alleviates the phase error effectively even in the presence of ambient light.
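The coupling between gamma and ambient light can be reproduced with a toy four-step phase-shifting simulation (a sketch under simplified assumptions; the paper's actual model and compensation algorithm are more detailed):

```python
import numpy as np

def four_step_phase_error(gamma=2.2, ambient=0.0, true_phase=0.7):
    """Phase error of the 4-step estimate for a gamma-distorted fringe."""
    shifts = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
    # assume the gamma applies to the total incident light (fringe + ambient)
    I = (ambient + 0.5 + 0.5 * np.cos(true_phase + shifts)) ** gamma
    est = np.arctan2(I[3] - I[1], I[0] - I[2])
    return est - true_phase

print(abs(four_step_phase_error(gamma=1.0)) < 1e-9)  # True: no gamma, no error
# with gamma, the error is nonzero and its size depends on the ambient level
print(round(four_step_phase_error(2.2, ambient=0.0), 4))
print(round(four_step_phase_error(2.2, ambient=0.3), 4))
```

Raising the ambient term lowers the modulation-to-average-intensity ratio and changes the gamma-induced error, which is precisely the interaction the abstract says must enter the compensation model.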
The role of measurement error in estimating levels of physical activity.
Ferrari, Pietro; Friedenreich, Christine; Matthews, Charles E
2007-10-01
Epidemiologic studies have demonstrated that physical inactivity is an important determinant of numerous chronic diseases. However, self-reported estimates of physical activity contain measurement errors responsible for attenuating relative risk estimates. A validation study conducted in 2002-2003 at the Alberta Cancer Board (Canada) included a physical activity questionnaire, four 7-day physical activity logs, and four sets of accelerometer data from 154 study subjects (51% women) aged 35-65 years. The authors used a measurement error model to evaluate validity of the different types of physical activity assessment, and the attenuation factors, after taking into account error correlations between self-reported measurements. The validity coefficients, which express the correlation between measured and true exposure, were higher for accelerometers (0.81, 95% confidence interval (CI): 0.76, 0.85) compared with the physical activity log (0.57, 95% CI: 0.47, 0.66) and questionnaire measurements (0.26, 95% CI: 0.12, 0.40). The estimate of the attenuation factor for questionnaires was 0.13 (95% CI: 0.05, 0.23). Accuracy of physical activity questionnaire measurements was higher for men than for women, for younger individuals, and for those with a lower body mass index. Because the degree of attenuation in relative risk estimates is substantial, after the role of error correlations was considered, validation studies quantifying the impact of measurement errors on physical activity estimates are essential to evaluate the impact of physical inactivity on health.
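The attenuation factor reported for the questionnaire (0.13) is an instance of regression dilution; a hedged simulation sketch (variances and slope are illustrative, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Classical measurement error in an exposure X attenuates the fitted slope
# by lambda = var(X_true) / (var(X_true) + var(error)).
n = 200_000
x_true = rng.normal(0.0, 1.0, n)
y = 2.0 * x_true + rng.normal(0.0, 1.0, n)          # true slope = 2

x_obs = x_true + rng.normal(0.0, np.sqrt(3.0), n)   # noisy, questionnaire-like X
slope = np.polyfit(x_obs, y, 1)[0]

# lambda = 1 / (1 + 3) = 0.25, so the observed slope is about 0.5
print(round(slope, 2))
```

This shrinkage toward zero is why the abstract stresses that self-reported physical activity attenuates relative risk estimates unless validation data are used to correct for it.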
Wahlin, B.; Wahl, T.; Gonzalez-Castro, J. A.; Fulford, J.; Robeson, M.
2005-01-01
As part of their long range goals for disseminating information on measurement techniques, instrumentation, and experimentation in the field of hydraulics, the Technical Committee on Hydraulic Measurements and Experimentation formed the Task Committee on Experimental Uncertainty and Measurement Errors in Hydraulic Engineering in January 2003. The overall mission of this Task Committee is to provide information and guidance on the current practices used for describing and quantifying measurement errors and experimental uncertainty in hydraulic engineering and experimental hydraulics. The final goal of the Task Committee on Experimental Uncertainty and Measurement Errors in Hydraulic Engineering is to produce a report on the subject that will cover: (1) sources of error in hydraulic measurements, (2) types of experimental uncertainty, (3) procedures for quantifying error and uncertainty, and (4) special practical applications that range from uncertainty analysis for planning an experiment to estimating uncertainty in flow monitoring at gaging sites and hydraulic structures. Currently, the Task Committee has adopted the first order variance estimation method outlined by Coleman and Steele as the basic methodology to follow when assessing the uncertainty in hydraulic measurements. In addition, the Task Committee has begun to develop its report on uncertainty in hydraulic engineering. This paper is intended as an update on the Task Committee's overall progress. Copyright ASCE 2005.
Analysis of handle dynamics-induced errors in hand biodynamic measurements
NASA Astrophysics Data System (ADS)
Dong, Ren G.; Welcome, Daniel E.; McDowell, Thomas W.; Wu, John Z.
2008-12-01
Reliable experimental data on the driving-point biodynamic response (DPBR) of the hand-arm system are required to develop better biodynamic models for several important applications. The objectives of this study are to enhance the understanding of the mechanisms of errors induced by the dynamics of instrumented handles and to identify a more reliable method for DPBR measurement. A model of the handle-hand-arm system was developed and applied to examine various measurement methods. Both analytical and finite element methods were used to perform the examinations. This study found that the handle dynamic response could cause an uneven vibration distribution on its structures, especially at high frequencies (⩾500 Hz), and that hand coupling on the handle could influence the distribution characteristics. Whereas the uneven distribution itself could directly result in measurement error, the hand coupling-induced vibration changes could cause errors in tare mass cancellation. The essential reason for both types of error is that the acceleration measured at one point on the handle may not be the same as that distributed at other locations. Because the cap measurement method, which separately measures the DPBRs distributed at the fingers and palm, can minimize both types of error, it is the most reliable of the methods examined in this study. The theory developed in this study can be used to help select, develop, and improve the measurement method for a specific application.
A new accuracy measure based on bounded relative error for time series forecasting
Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M.
2017-01-01
Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred. PMID:28339480
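Based on the construction described in the abstract (bound each error relative to a benchmark error, average, then unscale), a sketch of UMBRAE might look as follows. The handling of the 0/0 case as a tie is an assumption of this sketch, not taken from the paper:

```python
import numpy as np

def umbrae(actual, forecast, benchmark):
    """Unscaled Mean Bounded Relative Absolute Error (sketch).

    Each error is bounded relative to a benchmark forecast's error,
    averaged, then 'unscaled' so that values < 1 mean the forecast
    beats the benchmark and values > 1 mean it does worse.
    """
    e = np.abs(np.asarray(actual) - np.asarray(forecast))
    e_star = np.abs(np.asarray(actual) - np.asarray(benchmark))
    denom = e + e_star
    # Bounded relative absolute error lies in [0, 1]; a 0/0 case
    # (both forecasts exact) is treated as a tie of 0.5 (assumption).
    brae = np.where(denom == 0, 0.5, e / np.where(denom == 0, 1, denom))
    mbrae = brae.mean()
    return mbrae / (1 - mbrae)

# Toy series with a naive (previous-value) forecast as the benchmark.
actual   = np.array([10.0, 12.0, 11.0, 13.0, 14.0])
naive    = np.array([ 9.0, 10.0, 12.0, 11.0, 13.0])
forecast = np.array([10.5, 11.5, 11.2, 12.6, 14.3])

print(f"UMBRAE = {umbrae(actual, forecast, naive):.3f}")
```

Because each term is bounded in [0, 1], a single wild outlier cannot dominate the average, which is the robustness property the abstract emphasizes.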
Effect of assay measurement error on parameter estimation in concentration-QTc interval modeling.
Bonate, Peter L
2013-01-01
Linear mixed-effects models (LMEMs) of concentration-double-delta QTc intervals (QTc intervals corrected for placebo and baseline effects) assume that the concentration measurement error is negligible, which is an incorrect assumption. Previous studies have shown in linear models that independent-variable error can attenuate the slope estimate with a corresponding increase in the intercept. Monte Carlo simulation was used to examine the impact of assay measurement error (AME) on the parameter estimates of an LMEM and a nonlinear MEM (NMEM) concentration-ddQTc interval model from a 'typical' thorough QT study. For the LMEM, the type I error rate was unaffected by AME. Significant slope attenuation (>10%) occurred when the AME exceeded 40%, independent of the sample size. Increasing AME also decreased the between-subject variance of the slope, increased the residual variance, and had no effect on the between-subject variance of the intercept. For a typical analytical assay with an AME of less than 15%, the relative bias in the estimates of the model parameters and variance components was less than 15% in all cases. The NMEM appeared to be more robust to AME, as most parameters were unaffected by measurement error. Monte Carlo simulation was then used to determine whether the simulation-extrapolation method of parameter bias correction could be applied to cases of large AME in LMEMs. For analytical assays with large AME (>30%), the simulation-extrapolation method could correct biased model parameter estimates to near-unbiased levels.
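The slope-attenuation effect described above can be reproduced with a simple simulation. This sketch uses ordinary linear regression rather than a full LMEM, and the concentration range, slope, and noise levels are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n_sub, n_obs = 40, 6

# Assumed true concentration-ddQTc relationship (illustrative values).
true_slope, intercept = 0.05, 2.0

def fitted_slope(ame_cv):
    """Fit the slope of ddQTc on *measured* concentration
    for a given proportional assay coefficient of variation."""
    conc = rng.uniform(10, 200, n_sub * n_obs)                # true concentration
    ddqtc = intercept + true_slope * conc + rng.normal(0, 3, conc.size)
    measured = conc * (1 + rng.normal(0, ame_cv, conc.size))  # proportional AME
    return np.polyfit(measured, ddqtc, 1)[0]

for cv in (0.0, 0.15, 0.40):
    slopes = [fitted_slope(cv) for _ in range(200)]
    print(f"AME {cv:>4.0%}: mean fitted slope {np.mean(slopes):.4f} "
          f"(true {true_slope})")
```

At 15% AME the fitted slope stays close to the true value, while at 40% it is visibly attenuated toward zero, consistent with the thresholds the abstract reports.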
Nystrom, E.A.; Oberg, K.A.; Rehmann, C.R.
2002-01-01
Acoustic Doppler current profilers (ADCPs) provide a promising method for measuring surface-water turbulence because they can provide data over a large spatial range in a relatively short time with relative ease. Potential sources of error in turbulence measurements made with ADCPs include inaccuracy of Doppler-shift measurements, poor temporal and spatial measurement resolution, and inaccuracy of multi-dimensional velocities resolved from one-dimensional velocities measured at separate locations. Results from laboratory measurements of mean velocity and turbulence statistics made with two pulse-coherent ADCPs in 0.87 meters of water are used to illustrate several of the inherent sources of error in ADCP turbulence measurements. Results show that processing algorithms and beam configurations have important effects on turbulence measurements. ADCPs can provide reasonable estimates of many turbulence parameters; however, the accuracy of turbulence measurements made with commercially available ADCPs is often poor in comparison to standard measurement techniques.
Correcting for Measurement Error in Time-Varying Covariates in Marginal Structural Models.
Kyle, Ryan P; Moodie, Erica E M; Klein, Marina B; Abrahamowicz, Michał
2016-08-01
Unbiased estimation of causal parameters from marginal structural models (MSMs) requires a fundamental assumption of no unmeasured confounding. Unfortunately, the time-varying covariates used to obtain inverse probability weights are often error-prone. Although substantial measurement error in important confounders is known to undermine control of confounders in conventional unweighted regression models, this issue has received comparatively limited attention in the MSM literature. Here we propose a novel application of the simulation-extrapolation (SIMEX) procedure to address measurement error in time-varying covariates, and we compare 2 approaches. The direct approach to SIMEX-based correction targets outcome model parameters, while the indirect approach corrects the weights estimated using the exposure model. We assess the performance of the proposed methods in simulations under different clinically plausible assumptions. The simulations demonstrate that measurement errors in time-dependent covariates may induce substantial bias in MSM estimators of causal effects of time-varying exposures, and that both proposed SIMEX approaches yield practically unbiased estimates in scenarios featuring low-to-moderate degrees of error. We illustrate the proposed approach in a simple analysis of the relationship between sustained virological response and liver fibrosis progression among persons infected with hepatitis C virus, while accounting for measurement error in γ-glutamyltransferase, using data collected in the Canadian Co-infection Cohort Study from 2003 to 2014.
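The SIMEX idea referenced above (simulate additional measurement error at increasing multiples λ, then extrapolate the fitted parameter back to λ = −1, the no-error case) can be sketched for simple linear regression. The MSM-weight application in the paper is more involved; all numbers here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000
beta_true, sigma_u = 1.0, 0.8   # assumed true slope and known error SD

# True covariate and an error-prone measurement W = X + U.
x = rng.normal(0, 1, n)
w = x + rng.normal(0, sigma_u, n)
y = beta_true * x + rng.normal(0, 0.5, n)

naive = np.polyfit(w, y, 1)[0]   # attenuated slope from the noisy covariate

# Simulation step: refit with extra error inflated by factor sqrt(lambda).
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for lam in lambdas:
    sims = [np.polyfit(w + rng.normal(0, sigma_u * np.sqrt(lam), n), y, 1)[0]
            for _ in range(50)]
    slopes.append(np.mean(sims))

# Extrapolation step: quadratic in lambda, evaluated at lambda = -1,
# i.e., the hypothetical error-free measurement.
coef = np.polyfit(lambdas, slopes, 2)
simex = np.polyval(coef, -1.0)

print(f"naive slope {naive:.3f}, SIMEX-corrected {simex:.3f}, true {beta_true}")
```

The quadratic extrapolant recovers much, though not all, of the attenuation here; the choice of extrapolation function is itself a modeling decision in SIMEX.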
Theoretical and Experimental Errors for In Situ Measurements of Plant Water Potential
Shackel, Kenneth A.
1984-01-01
Errors in psychrometrically determined values of leaf water potential caused by tissue resistance to water vapor exchange and by lack of thermal equilibrium were evaluated using commercial in situ psychrometers (Wescor Inc., Logan, UT) on leaves of Tradescantia virginiana (L.). Theoretical errors in the dewpoint method of operation for these sensors were demonstrated. After correction for these errors, in situ measurements of leaf water potential indicated substantial errors caused by tissue resistance to water vapor exchange (4 to 6% reduction in apparent water potential per second of cooling time used) resulting from humidity depletions in the psychrometer chamber during the Peltier condensation process. These errors were avoided by use of a modified procedure for dewpoint measurement. Large changes in apparent water potential were caused by leaf and psychrometer exposure to moderate levels of irradiance. These changes were correlated with relatively small shifts in psychrometer zero offsets (−0.6 to −1.0 megapascals per microvolt), indicating substantial errors caused by nonisothermal conditions between the leaf and the psychrometer. Explicit correction for these errors is not possible with the current psychrometer design. PMID:16663701
Muralikrishnan, B; Blackburn, C; Sawyer, D; Phillips, S; Bridges, R
2010-01-01
In this paper we describe a method to estimate scale errors in the horizontal angle encoder of a laser tracker. The method does not require expensive instrumentation such as a rotary stage or even a calibrated artifact. An uncalibrated but stable length is realized between two targets mounted on stands at tracker height. The tracker measures the distance between these two targets from different azimuthal positions (say, in intervals of 20° over 360°). Each target is measured in both front face and back face. Low-order harmonic scale errors can be estimated from these data and then used to correct the encoder's error map, improving the tracker's angle measurement accuracy. We demonstrate this here for the second-order harmonic. It is important to compensate for even-order harmonics because their influence cannot be removed by averaging front-face and back-face measurements, whereas odd-order harmonics can. We tested six trackers from three different manufacturers. Two of those trackers are newer models introduced at the time of writing. For older trackers from two manufacturers, the length errors in a 7.75 m horizontal length placed 7 m from a tracker were of the order of ±65 μm before correcting the error map. They reduced to less than ±25 μm after correcting the error map for second-order scale errors. Newer trackers from the same manufacturers did not show this error, nor did an older tracker from a third manufacturer.
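The harmonic estimation step described above amounts to a least-squares fit of low-order Fourier terms to the length errors observed at each azimuth. A minimal sketch with simulated data (the amplitudes and noise level are invented, not the paper's), fitting only the second-order harmonic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical errors in a fixed reference length measured from azimuth
# positions every 20 degrees, containing a second-order harmonic plus noise.
theta = np.deg2rad(np.arange(0, 360, 20))
true_a2, true_b2 = 40e-6, -25e-6            # assumed harmonic amplitudes (m)
err = (true_a2 * np.cos(2 * theta) + true_b2 * np.sin(2 * theta)
       + rng.normal(0, 5e-6, theta.size))

# Least-squares fit of the second-order terms only. Even-order harmonics
# survive front-face/back-face averaging, so they must be estimated
# explicitly and folded into the encoder's error map.
A = np.column_stack([np.cos(2 * theta), np.sin(2 * theta)])
a2, b2 = np.linalg.lstsq(A, err, rcond=None)[0]

residual = err - A @ np.array([a2, b2])
print(f"a2 = {a2 * 1e6:.1f} um, b2 = {b2 * 1e6:.1f} um, "
      f"residual RMS = {np.std(residual) * 1e6:.1f} um")
```

The same design matrix extends naturally to other harmonic orders by adding `cos(k*theta)`/`sin(k*theta)` columns for each order k to be mapped out.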
Error reduction by combining strapdown inertial measurement units in a baseball stitch
NASA Astrophysics Data System (ADS)
Tracy, Leah
A poor musical performance is rarely due to an inferior instrument. When a device is underperforming, the temptation is to find a better device or a new technology to achieve performance objectives; however, another solution may be improving how existing technology is used through a better understanding of device characteristics, i.e., learning to play the instrument better. This thesis explores improving position and attitude estimates of inertial navigation systems (INS) through an understanding of inertial sensor errors, manipulating inertial measurement units (IMUs) to reduce that error, and multisensor fusion of multiple IMUs to reduce error in a GPS-denied environment.