Beyond hypercorrection: remembering corrective feedback for low-confidence errors.
Griffiths, Lauren; Higham, Philip A
2018-02-01
Correcting errors based on corrective feedback is essential to successful learning. Previous studies have found that corrections to high-confidence errors are better remembered than corrections to low-confidence errors (the hypercorrection effect). The aim of this study was to investigate whether corrections to low-confidence errors can also be successfully retained in some cases. Participants completed an initial multiple-choice test consisting of control, trick and easy general-knowledge questions, rated their confidence after answering each question, and then received immediate corrective feedback. After a short delay, they were given a cued-recall test consisting of the same questions. In two experiments, we found that high-confidence errors to control questions were more likely to be corrected on the second test than low-confidence errors - the typical hypercorrection effect. However, low-confidence errors to trick questions were just as likely to be corrected as high-confidence errors. Most surprisingly, we found that memory for the feedback and original responses, not confidence or surprise, were significant predictors of error correction. We conclude that for some types of material, there is an effortful process of elaboration and problem solving prior to making low-confidence errors that facilitates memory of corrective feedback.
Hypercorrection of high-confidence errors in the classroom.
Carpenter, Shana K; Haynes, Cynthia L; Corral, Daniel; Yeung, Kam Leung
2018-05-19
People often have erroneous knowledge about the world that is firmly entrenched in memory and endorsed with high confidence. Although strong errors in memory would seem difficult to "un-learn," evidence suggests that errors are more likely to be corrected through feedback when they are originally endorsed with high confidence compared to low confidence. This hypercorrection effect has been predominantly studied in laboratory settings with general knowledge (i.e., trivia) questions, however, and has not been systematically explored in authentic classroom contexts. In the current study, college students in an introductory horticulture class answered questions about the course content, rated their confidence in their answers, received feedback of the correct answers, and then later completed a posttest. Results revealed a significant hypercorrection effect, along with a tendency for students with higher prior knowledge of the material to express higher confidence in, and in turn more effective correction of, their error responses.
Neural basis for recognition confidence in younger and older adults.
Chua, Elizabeth F; Schacter, Daniel L; Sperling, Reisa A
2009-03-01
Although several studies have examined the neural basis for age-related changes in objective memory performance, less is known about how the process of memory monitoring changes with aging. The authors used functional magnetic resonance imaging to examine retrospective confidence in memory performance in aging. During low confidence, both younger and older adults showed behavioral evidence that they were guessing during recognition and that they were aware they were guessing when making confidence judgments. Similarly, both younger and older adults showed increased neural activity during low- compared to high-confidence responses in the lateral prefrontal cortex, anterior cingulate cortex, and left intraparietal sulcus. In contrast, older adults showed more high-confidence errors than younger adults. Younger adults showed greater activity for high compared to low confidence in medial temporal lobe structures, but older adults did not show this pattern. Taken together, these findings may suggest that impairments in the confidence-accuracy relationship for memory in older adults, which are often driven by high-confidence errors, may be primarily related to altered neural signals associated with greater activity for high-confidence responses.
ERIC Educational Resources Information Center
Williams, David M.; Bergström, Zara; Grainger, Catherine
2018-01-01
Among neurotypical adults, errors made with high confidence (i.e. errors a person strongly believed they would not make) are corrected more reliably than errors made with low confidence. This 'hypercorrection effect' is thought to result from enhanced attention to information that reflects a 'metacognitive mismatch' between one's beliefs and…
The hypercorrection effect in younger and older adults.
Eich, Teal S; Stern, Yaakov; Metcalfe, Janet
2013-01-01
The hypercorrection effect, which refers to the finding that errors committed with high confidence are more likely to be corrected than are low confidence errors, has been replicated many times, and with both young adults and children. In the present study, we contrasted older with younger adults. Participants answered general-information questions, made confidence ratings about their answers, were given corrective feedback, and then were retested on questions that they had gotten wrong. While younger adults showed the hypercorrection effect, older adults, despite higher overall accuracy on the general-information questions and excellent basic metacognitive ability, showed a diminished hypercorrection effect. Indeed, the correspondence between their confidence in their errors and the probability of correction was not significantly greater than zero, showing, for the first time, that a particular participant population is selectively impaired on this error correction task. These results potentially offer leverage both on the mechanisms underlying the hypercorrection effect and on reasons for older adults' memory impairments, as well as on memory functions that are spared.
Emotion perception and overconfidence in errors under stress in psychosis.
Köther, Ulf; Lincoln, Tania M; Moritz, Steffen
2018-03-21
Vulnerability stress models are well-accepted in psychosis research, but the mechanisms that link stress to psychotic symptoms remain vague. Little is known about how social cognition and overconfidence in errors, two putative mechanisms for the pathogenesis of delusions, relate to stress. Using a repeated measures design, we tested four groups (N=120) with different liability to psychosis (schizophrenia patients [n=35], first-degree relatives [n=24], participants with attenuated positive symptoms [n=19] and healthy controls [n=28]) and depression patients (n=14) as a clinical control group under three randomized experimental conditions (no stress, noise and social stress). Parallel versions of the Emotion Perception and Confidence Task, which taps both emotion perception and confidence, were used in each condition. We recorded subjective stress, heart rate, skin conductance level and salivary cortisol to assess the stress response across different dimensions. Independent of the stress condition, patients with schizophrenia showed poorer emotion perception performance and higher confidence in emotion perception errors than participants with attenuated positive symptoms and healthy controls. However, they did not differ from patients with depression or first-degree relatives. Stress did not influence emotion perception or the extent of high-confident errors, but patients with schizophrenia showed an increase in high-confident emotion perception errors conditional on higher arousal. A possible clinical implication of our findings is the necessity to provide stress management programs that aim to reduce arousal. Moreover, patients with schizophrenia might benefit from interventions that help them to reduce overconfidence in their social cognition judgements at times when they feel under pressure. Copyright © 2018 Elsevier B.V. All rights reserved.
Liu, Xiaofeng Steven
2011-05-01
The use of covariates is commonly believed to reduce the unexplained error variance and the standard error for the comparison of treatment means, but the reduction in the standard error is neither guaranteed nor uniform over different sample sizes. The covariate mean differences between the treatment conditions can inflate the standard error of the covariate-adjusted mean difference and can actually produce a larger standard error for the adjusted mean difference than that for the unadjusted mean difference. When the covariate observations are conceived of as randomly varying from one study to another, the covariate mean differences can be related to a Hotelling's T². Using this Hotelling's T² statistic, one can always find a minimum sample size to achieve a high probability of reducing the standard error and confidence interval width for the adjusted mean difference. ©2010 The British Psychological Society.
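As a rough illustration of this point (not from the paper), the following Python sketch compares the standard error of the unadjusted group-mean difference with that of the covariate-adjusted difference from an ordinary least-squares fit; the group sizes, covariate imbalance, and effect sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n1 = n2 = 20
# Hypothetical data in which the covariate means happen to differ between groups.
x = np.concatenate([rng.normal(0.0, 1.0, n1), rng.normal(0.8, 1.0, n2)])
g = np.concatenate([np.zeros(n1), np.ones(n2)])          # treatment indicator
y = 1.0 + 0.5 * g + 0.6 * x + rng.normal(0.0, 1.0, n1 + n2)

def ols_se(design, y):
    """Return OLS coefficient estimates and their standard errors."""
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ beta
    dof = design.shape[0] - design.shape[1]
    sigma2 = resid @ resid / dof
    cov = sigma2 * np.linalg.inv(design.T @ design)
    return beta, np.sqrt(np.diag(cov))

# Unadjusted comparison: intercept + group indicator.
X_unadj = np.column_stack([np.ones_like(g), g])
# Covariate-adjusted comparison: intercept + group indicator + covariate.
X_adj = np.column_stack([np.ones_like(g), g, x])

_, se_unadj = ols_se(X_unadj, y)
_, se_adj = ols_se(X_adj, y)
print("SE of group difference, unadjusted:", se_unadj[1])
print("SE of group difference, adjusted:  ", se_adj[1])
```

Depending on the draw, the adjusted standard error can come out larger than the unadjusted one when the covariate means differ between groups, which is the situation the abstract describes.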
In-Flight Pitot-Static Calibration
NASA Technical Reports Server (NTRS)
Foster, John V. (Inventor); Cunningham, Kevin (Inventor)
2016-01-01
A GPS-based pitot-static calibration system uses global output-error optimization. High-data-rate measurements of static and total pressure, ambient air conditions, and GPS-based ground speed are used to compute pitot-static pressure errors over a range of airspeeds. System identification methods rapidly compute optimal pressure error models with defined confidence intervals.
Confidence Intervals for Error Rates Observed in Coded Communications Systems
NASA Astrophysics Data System (ADS)
Hamkins, J.
2015-05-01
We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
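The abstract does not give formulas, but the standard exact (Clopper-Pearson) interval for a binomial error rate, and the usual answer to the error-free-run question, can be sketched as follows; the simulation length, error counts, and CWER requirement below are made-up values, not figures from the report.

```python
import numpy as np
from scipy.stats import beta

def clopper_pearson(k, n, conf=0.95):
    """Exact two-sided confidence interval for a binomial proportion."""
    alpha = 1.0 - conf
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

# Example: 2 codeword errors observed in 10 million simulated codewords.
print(clopper_pearson(2, 10_000_000))

# Error-free run length needed to certify CWER <= p_req with confidence 1 - alpha:
# we need (1 - p_req)**n <= alpha, i.e. n >= log(alpha) / log(1 - p_req).
p_req, alpha = 1e-6, 0.05
n_needed = int(np.ceil(np.log(alpha) / np.log(1.0 - p_req)))
print(n_needed)   # roughly 3 / p_req, the familiar "rule of three"
```

This sketch covers only the independent-trial CWER case; the paper's BER method for dependent bit errors uses per-codeword moments and is not reproduced here.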
Hypercorrection of high confidence errors in lexical representations.
Iwaki, Nobuyoshi; Matsushima, Hiroko; Kodaira, Kazumasa
2013-08-01
Memory errors associated with higher confidence are more likely to be corrected than errors made with lower confidence, a phenomenon called the hypercorrection effect. This study investigated whether the hypercorrection effect occurs with phonological information of lexical representations. In Experiment 1, 15 participants performed a Japanese Kanji word-reading task, in which the words had several possible pronunciations. In the initial task, participants were required to read aloud each word and indicate their confidence in their response; this was followed by receipt of visual feedback of the correct response. A hypercorrection effect was observed, indicating generality of this effect beyond previous observations in memories based upon semantic or episodic representations. This effect was replicated in Experiment 2, in which 40 participants performed the same task as in Experiment 1. When the participants' ratings of the practical value of the words were controlled, the partial correlation between confidence and likelihood of later correcting the initial mistaken response was reduced. This suggests that the hypercorrection effect may be partially caused by an individual's recognition of the practical value of reading the words correctly.
Is there any electrophysiological evidence for subliminal error processing?
Shalgi, Shani; Deouell, Leon Y
2013-08-29
The role of error awareness in executive control and modification of behavior is not fully understood. In line with many recent studies showing that conscious awareness is unnecessary for numerous high-level processes such as strategic adjustments and decision making, it was suggested that error detection can also take place unconsciously. The Error Negativity (Ne) component, long established as a robust error-related component that differentiates between correct responses and errors, was a fine candidate to test this notion: if an Ne is elicited also by errors which are not consciously detected, it would imply a subliminal process involved in error monitoring that does not necessarily lead to conscious awareness of the error. Indeed, for the past decade, the repeated finding of a similar Ne for errors that reached awareness and errors that did not, compared to the smaller negativity elicited by correct responses (Correct Response Negativity; CRN), has lent the Ne the prestigious status of an index of subliminal error processing. However, there were several notable exceptions to these findings. The study in the focus of this review (Shalgi and Deouell, 2012) sheds new light on both types of previous results. We found that error detection as reflected by the Ne is correlated with subjective awareness: when awareness (or more importantly lack thereof) is more strictly determined using the wagering paradigm, no Ne is elicited without awareness. This result effectively resolves the issue of why there are many conflicting findings regarding the Ne and error awareness. The average Ne amplitude appears to be influenced by individual criteria for error reporting and therefore, studies containing different mixtures of participants who are more confident of their own performance or less confident, or paradigms that either encourage or don't encourage reporting low confidence errors, will show different results. Based on this evidence, it is no longer possible to unquestioningly uphold the notion that the amplitude of the Ne is unrelated to subjective awareness, and therefore, that errors are detected without conscious awareness.
ERIC Educational Resources Information Center
Du, Yunfei
This paper discusses the impact of sampling error on the construction of confidence intervals around effect sizes. Sampling error affects the location and precision of confidence intervals. Meta-analytic resampling demonstrates that confidence intervals can haphazardly bounce around the true population parameter. Special software with graphical…
Cole, Stephen R.; Jacobson, Lisa P.; Tien, Phyllis C.; Kingsley, Lawrence; Chmiel, Joan S.; Anastos, Kathryn
2010-01-01
To estimate the net effect of imperfectly measured highly active antiretroviral therapy on incident acquired immunodeficiency syndrome or death, the authors combined inverse probability-of-treatment-and-censoring weighted estimation of a marginal structural Cox model with regression-calibration methods. Between 1995 and 2007, 950 human immunodeficiency virus–positive men and women were followed in 2 US cohort studies. During 4,054 person-years, 374 initiated highly active antiretroviral therapy, 211 developed acquired immunodeficiency syndrome or died, and 173 dropped out. Accounting for measured confounders and determinants of dropout, the weighted hazard ratio for acquired immunodeficiency syndrome or death comparing use of highly active antiretroviral therapy in the prior 2 years with no therapy was 0.36 (95% confidence limits: 0.21, 0.61). This association was relatively constant over follow-up (P = 0.19) and stronger than crude or adjusted hazard ratios of 0.75 and 0.95, respectively. Accounting for measurement error in reported exposure using external validation data on 331 men and women provided a hazard ratio of 0.17, with bias shifted from the hazard ratio to the estimate of precision as seen by the 2.5-fold wider confidence limits (95% confidence limits: 0.06, 0.43). Marginal structural measurement-error models can simultaneously account for 3 major sources of bias in epidemiologic research: validated exposure measurement error, measured selection bias, and measured time-fixed and time-varying confounding. PMID:19934191
Overconfidence across the psychosis continuum: a calibration approach.
Balzan, Ryan P; Woodward, Todd S; Delfabbro, Paul; Moritz, Steffen
2016-11-01
An 'overconfidence in errors' bias has been consistently observed in people with schizophrenia relative to healthy controls, however, the bias is seldom found to be associated with delusional ideation. Using a more precise confidence-accuracy calibration measure of overconfidence, the present study aimed to explore whether the overconfidence bias is greater in people with higher delusional ideation. A sample of 25 participants with schizophrenia and 50 non-clinical controls (25 high- and 25 low-delusion-prone) completed 30 difficult trivia questions (accuracy <75%); 15 'half-scale' items required participants to indicate their level of confidence for accuracy, and the remaining 'confidence-range' items asked participants to provide lower/upper bounds in which they were 80% confident the true answer lay within. There was a trend towards higher overconfidence for half-scale items in the schizophrenia and high-delusion-prone groups, which reached statistical significance for confidence-range items. However, accuracy was particularly low in the two delusional groups and a significant negative correlation between clinical delusional scores and overconfidence was observed for half-scale items within the schizophrenia group. Evidence in support of an association between overconfidence and delusional ideation was therefore mixed. Inflated confidence-accuracy miscalibration for the two delusional groups may be better explained by their greater unawareness of their underperformance, rather than representing genuinely inflated overconfidence in errors.
Surprising feedback improves later memory.
Fazio, Lisa K; Marsh, Elizabeth J
2009-02-01
The hypercorrection effect is the finding that high-confidence errors are more likely to be corrected after feedback than are low-confidence errors (Butterfield & Metcalfe, 2001). In two experiments, we explored the idea that the hypercorrection effect results from increased attention to surprising feedback. In Experiment 1, participants were more likely to remember the appearance of the presented feedback when the feedback did not match expectations. In Experiment 2, we replicated this effect using more distinctive sources and also demonstrated the hypercorrection effect in this modified paradigm. Overall, participants better remembered both the surface features and the content of surprising feedback.
Farwell, Lawrence A.; Richardson, Drew C.; Richardson, Graham M.; Furedy, John J.
2014-01-01
A classification concealed information test (CIT) used the “brain fingerprinting” method of applying P300 event-related potential (ERP) in detecting information that is (1) acquired in real life and (2) unique to US Navy experts in military medicine. Military medicine experts and non-experts were asked to push buttons in response to three types of text stimuli. Targets contain known information relevant to military medicine, are identified to subjects as relevant, and require pushing one button. Subjects are told to push another button to all other stimuli. Probes contain concealed information relevant to military medicine, and are not identified to subjects. Irrelevants contain equally plausible, but incorrect/irrelevant information. Error rate was 0%. Median and mean statistical confidences for individual determinations were 99.9% with no indeterminates (results lacking sufficiently high statistical confidence to be classified). We compared error rate and statistical confidence for determinations of both information present and information absent produced by classification CIT (Is a probe ERP more similar to a target or to an irrelevant ERP?) vs. comparison CIT (Does a probe produce a larger ERP than an irrelevant?) using P300 plus the late negative component (LNP; together, P300-MERMER). Comparison CIT produced a significantly higher error rate (20%) and lower statistical confidences: mean 67%; information-absent mean was 28.9%, less than chance (50%). We compared analysis using P300 alone with the P300 + LNP. P300 alone produced the same 0% error rate but significantly lower statistical confidences. These findings add to the evidence that the brain fingerprinting methods as described here provide sufficient conditions to produce less than 1% error rate and greater than 95% median statistical confidence in a CIT on information obtained in the course of real life that is characteristic of individuals with specific training, expertise, or organizational affiliation. PMID:25565941
Tzetzis, George; Votsis, Evandros; Kourtessis, Thomas
2008-01-01
This experiment investigated the effects of three corrective feedback methods, using different combinations of correction cues, error cues and positive feedback, for learning two badminton skills of different difficulty (forehand clear - low difficulty, backhand clear - high difficulty). Outcome and self-confidence scores were used as dependent variables. The 48 participants were randomly assigned into four groups. Group A received correction cues and positive feedback. Group B received cues on errors of execution. Group C received positive feedback, correction cues and error cues. Group D was the control group. Pre-, post- and retention tests were conducted. A three-way analysis of variance (ANOVA; 4 groups X 2 task difficulty X 3 measures) with repeated measures on the last factor revealed significant interactions for each dependent variable. All the corrective feedback groups increased their outcome scores over time for the easy skill, but only groups A and C did so for the difficult skill. Groups A and B had significantly better outcome scores than group C and the control group for the easy skill on the retention test. However, for the difficult skill, group C was better than groups A, B and D. The self-confidence scores of groups A and C improved over time for the easy skill, but those of groups B and D did not. Again, for the difficult skill, only group C improved over time. Finally, a regression analysis showed that the improvement in performance predicted a proportion of the improvement in self-confidence for both the easy and the difficult skill. It was concluded that when young athletes are taught skills of different difficulty, different types of instruction might be more appropriate in order to improve outcome and self-confidence. A more integrated approach to teaching will assist coaches and physical education teachers to be more efficient and effective. Key points: The type of skill is a critical factor in determining the effectiveness of the feedback types. Different instructional methods of corrective feedback can have beneficial effects on the outcome and self-confidence of young athletes. Instructions focusing on correction cues or errors increase performance of easy skills. Positive feedback or correction cues increase self-confidence for easy skills, but only the combination of error and correction cues increases self-confidence and outcome scores for difficult skills. PMID:24149905
NASA Technical Reports Server (NTRS)
Massey, J. L.
1976-01-01
The very low error probability obtained with long error-correcting codes results in a very small number of observed errors in simulation studies of practical size and renders the usual confidence interval techniques inapplicable to the observed error probability. A natural extension of the notion of a 'confidence interval' is made and applied to such determinations of error probability by simulation. An example is included to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
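One way to see why even a couple of decoding errors carry so much weight is the standard chi-square (Poisson-approximation) upper confidence bound on error probability; the sketch below, with a hypothetical trial count, shows how the 95% upper bound grows as the observed error count goes from zero to two. It is an illustration of the general idea, not the report's own construction.

```python
from scipy.stats import chi2

n_trials = 1_000_000          # hypothetical number of decoding trials
for k in (0, 1, 2):           # observed decoding errors
    # 95% one-sided upper bound on the expected error count (Poisson approximation),
    # divided by the number of trials to bound the error probability.
    upper = chi2.ppf(0.95, 2 * (k + 1)) / 2.0 / n_trials
    print(k, upper)
```

Going from zero to two observed errors roughly doubles the upper bound (from about 3/n to about 6.3/n), which is one way to read the remark about the significance of as few as two decoding errors.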
Unsworth, Nash; Brewer, Gene A; Spillers, Gregory J
2011-09-01
In three experiments search termination decisions were examined as a function of response type (correct vs. incorrect) and confidence. It was found that the time between the last retrieved item and the decision to terminate search (exit latency) was related to the type of response and confidence in the last item retrieved. Participants were willing to search longer when the last retrieved item was a correct item vs. an incorrect item and when the confidence was high in the last retrieved item. It was also found that the number of errors retrieved during the recall period was related to search termination decisions such that the more errors retrieved, the more likely participants were to terminate the search. Finally, it was found that knowledge of overall search set size influenced the time needed to search for items, but did not influence search termination decisions. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Ziemba, Alexander; El Serafy, Ghada
2016-04-01
Ecological modeling and water quality investigations are complex processes which can require a high level of parameterization and a multitude of varying data sets in order to properly execute the model in question. Since models are generally complex, their calibration and validation can benefit from the application of data and information fusion techniques. The data applied to ecological models come from a wide range of sources such as remote sensing, earth observation, and in-situ measurements, resulting in high variability in the temporal and spatial resolution of the various data sets available to water quality investigators. It is proposed that effective fusion into a comprehensive singular set will provide a more complete and robust data resource with which models can be calibrated, validated, and driven. Each individual product contains a unique valuation of error resulting from the method of measurement and the application of pre-processing techniques. The uncertainty and error are further compounded when the data being fused are of varying temporal and spatial resolution. In order to have a reliable fusion-based model and data set, the uncertainty of the results and the confidence interval of the data being reported must be effectively communicated to those who would use the data product or model outputs in a decision-making process [2]. Here we review an array of data fusion techniques applied to various remote sensing, earth observation, and in-situ data sets whose domains vary in spatial and temporal resolution. The data sets examined are combined in a manner such that the various classifications of data (complementary, redundant, and cooperative) are all assessed to determine each classification's impact on the propagation and compounding of error. In order to assess the error of the fused data products, a comparison is conducted with data sets containing a known confidence interval and quality rating. We conclude with a quantification of the performance of the data fusion techniques and a recommendation on the feasibility of applying the fused products in operating forecast systems and modeling scenarios. The error bands and confidence intervals derived can be used to clarify the error and confidence of water quality variables produced by prediction and forecasting models. References [1] F. Castanedo, "A Review of Data Fusion Techniques", The Scientific World Journal, vol. 2013, pp. 1-19, 2013. [2] T. Keenan, M. Carbone, M. Reichstein and A. Richardson, "The model-data fusion pitfall: assuming certainty in an uncertain world", Oecologia, vol. 167, no. 3, pp. 587-597, 2011.
Auslander, Margeaux V; Thomas, Ayanna K; Gutchess, Angela H
2017-01-01
Background/Study Context: The present experiment investigated the role of confidence and control beliefs in susceptibility to the misinformation effect in young and older adults. Control beliefs are perceptions about one's abilities or competence and the extent to which one can influence performance outcomes. It was predicted that level of control beliefs would influence misinformation susceptibility and overall memory confidence. Fifty university students (ages 18-26) and 37 community-dwelling older adults (ages 62-86) were tested. Participants viewed a video, answered questions containing misinformation, and then completed a source-recognition test to determine whether the information presented was seen in the video, the questionnaire only, both, or neither. For each response, participants indicated their level of confidence. The relationship between control beliefs and memory performance was moderated by confidence. That is, individuals with lower control beliefs made more errors as confidence decreased. Additionally, the relationship between confidence and memory performance differed by age, with greater confidence related to more errors for young adults. Confidence is an important factor in how control beliefs and age are related to memory errors in the misinformation effect. This may have implications for the legal system, particularly with eyewitness testimony. The confidence of an individual should be considered if the eyewitness is a younger adult.
Mogull, Scott A
2017-01-01
Previous reviews estimated that approximately 20 to 25% of assertions cited from original research articles, or "facts," are inaccurately quoted in the medical literature. These reviews noted that the original studies were dissimilar and only began to compare the methods of the original studies. The aim of this review is to examine the methods of the original studies and provide a more specific rate of incorrectly cited assertions, or quotation errors, in original research articles published in medical journals. Additionally, the estimate of quotation errors calculated here is based on the ratio of quotation errors to quotations examined (a percent) rather than the more prevalent and weighted metric of quotation errors to the references selected. Overall, this resulted in a lower estimate of the quotation error rate in original medical research articles. A total of 15 studies met the criteria for inclusion in the primary quantitative analysis. Quotation errors were divided into two categories: content ("factual") or source (improper indirect citation) errors. Content errors were further subdivided into major and minor errors depending on the degree that the assertion differed from the original source. The rate of quotation errors recalculated here is 14.5% (10.5% to 18.6% at a 95% confidence interval). These content errors are predominantly major errors (64.8%; 56.1% to 73.5% at a 95% confidence interval), that is, cited assertions in which the referenced source either fails to substantiate, is unrelated to, or contradicts the assertion. Minor errors, which are oversimplifications, overgeneralizations, or trivial inaccuracies, account for 35.2% (26.5% to 43.9% at a 95% confidence interval). Additionally, improper secondary (or indirect) citations, which are distinguished from calculations of quotation accuracy, occur at a rate of 10.4% (3.4% to 17.5% at a 95% confidence interval).
Bansal, Ravi; Staib, Lawrence H.; Laine, Andrew F.; Xu, Dongrong; Liu, Jun; Posecion, Lainie F.; Peterson, Bradley S.
2010-01-01
Images from different individuals typically cannot be registered precisely because anatomical features within the images differ across the people imaged and because the current methods for image registration have inherent technological limitations that interfere with perfect registration. Quantifying the inevitable error in image registration is therefore of crucial importance in assessing the effects that image misregistration may have on subsequent analyses in an imaging study. We have developed a mathematical framework for quantifying errors in registration by computing the confidence intervals of the estimated parameters (3 translations, 3 rotations, and 1 global scale) for the similarity transformation. The presence of noise in images and the variability in anatomy across individuals ensures that estimated registration parameters are always random variables. We assume a functional relation among intensities across voxels in the images, and we use the theory of nonlinear, least-squares estimation to show that the parameters are multivariate Gaussian distributed. We then use the covariance matrix of this distribution to compute the confidence intervals of the transformation parameters. These confidence intervals provide a quantitative assessment of the registration error across the images. Because transformation parameters are nonlinearly related to the coordinates of landmark points in the brain, we subsequently show that the coordinates of those landmark points are also multivariate Gaussian distributed. Using these distributions, we then compute the confidence intervals of the coordinates for landmark points in the image. Each of these confidence intervals in turn provides a quantitative assessment of the registration error at a particular landmark point. Because our method is computationally intensive, however, its current implementation is limited to assessing the error of the parameters in the similarity transformation across images. We assessed the performance of our method in computing the error in estimated similarity parameters by applying that method to a real-world dataset. Our results showed that the size of the confidence intervals computed using our method decreased – i.e. our confidence in the registration of images from different individuals increased – for increasing amounts of blur in the images. Moreover, the size of the confidence intervals increased for increasing amounts of noise, misregistration, and differing anatomy. Thus, our method precisely quantified confidence in the registration of images that contain varying amounts of misregistration and varying anatomy across individuals. PMID:19138877
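The abstract describes a registration-specific derivation; the generic covariance-based recipe it builds on (a Gauss-Newton approximation with parameter covariance sigma^2 (J^T J)^-1) can be sketched as follows for a toy one-dimensional model. The model, data, and step size are invented for illustration and are not the authors' registration pipeline.

```python
import numpy as np

def model(theta, t):
    """Toy nonlinear model; stands in for the image-intensity relation."""
    return np.exp(-theta[0] * t) + theta[1]

def confidence_intervals(theta_hat, t, y, h=1e-6, z=1.96):
    """Approximate 95% CIs from the Gauss-Newton parameter covariance."""
    resid = y - model(theta_hat, t)
    # Numerical Jacobian of the model with respect to the parameters.
    J = np.empty((t.size, theta_hat.size))
    for j in range(theta_hat.size):
        step = np.zeros_like(theta_hat)
        step[j] = h
        J[:, j] = (model(theta_hat + step, t) - model(theta_hat - step, t)) / (2 * h)
    sigma2 = resid @ resid / (t.size - theta_hat.size)
    cov = sigma2 * np.linalg.inv(J.T @ J)
    half = z * np.sqrt(np.diag(cov))
    return np.column_stack([theta_hat - half, theta_hat + half])

rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 50)
theta_true = np.array([0.7, 0.2])
y = model(theta_true, t) + rng.normal(0.0, 0.02, t.size)
# For brevity the intervals are evaluated at the known parameters rather than a fitted optimum.
print(confidence_intervals(theta_true, t, y))
```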
Four Bootstrap Confidence Intervals for the Binomial-Error Model.
ERIC Educational Resources Information Center
Lin, Miao-Hsiang; Hsiung, Chao A.
1992-01-01
Four bootstrap methods are identified for constructing confidence intervals for the binomial-error model. The extent to which similar results are obtained and the theoretical foundation of each method and its relevance and ranges of modeling the true score uncertainty are discussed. (SLD)
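A minimal sketch of one such bootstrap scheme (a parametric percentile interval under the binomial-error model) is given below; the paper compares four methods, none of which is necessarily identical to this one, and the test length and observed score here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n_items, observed = 40, 31          # hypothetical test length and raw score
p_hat = observed / n_items          # point estimate of the true proportion score

# Parametric bootstrap: resample scores from Binomial(n_items, p_hat).
boot = rng.binomial(n_items, p_hat, size=10_000) / n_items
lo, hi = np.quantile(boot, [0.025, 0.975])
print(f"95% percentile interval for the true score: ({lo:.3f}, {hi:.3f})")
```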
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopan, O; Kalet, A; Smith, W
2016-06-15
Purpose: A standard tool for ensuring the quality of radiation therapy treatments is the initial physics plan review. However, little is known about its performance in practice. The goal of this study is to measure the effectiveness of physics plan review by introducing simulated errors into “mock” treatment plans and measuring the performance of plan review by physicists. Methods: We generated six mock treatment plans containing multiple errors. These errors were based on incident learning system data both within the department and internationally (SAFRON). These errors were scored for severity and frequency. Those with the highest scores were included in the simulations (13 errors total). Observer bias was minimized using a multiple co-correlated distractor approach. Eight physicists reviewed these plans for errors, with each physicist reviewing, on average, 3/6 plans. The confidence interval for the proportion of errors detected was computed using the Wilson score interval. Results: Simulated errors were detected in 65% of reviews [51–75%] (95% confidence interval [CI] in brackets). The following error scenarios had the highest detection rates: incorrect isocenter in DRRs/CBCT (91% [73–98%]) and a planned dose different from the prescribed dose (100% [61–100%]). Errors with low detection rates involved incorrect field parameters in record and verify system (38% [18–61%]) and incorrect isocenter localization in planning system (29% [8–64%]). Though pre-treatment QA failure was reliably identified (100%), less than 20% of participants reported the error that caused the failure. Conclusion: This is one of the first quantitative studies of error detection. Although physics plan review is a key safety measure and can identify some errors with high fidelity, other errors are more challenging to detect. This data will guide future work on standardization and automation. Creating new checks or improving existing ones (i.e., via automation) will help in detecting those errors with low detection rates.
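For reference, the Wilson score interval quoted in the abstract can be computed as below; the counts are hypothetical stand-ins, since the abstract reports proportions rather than raw counts.

```python
import math

def wilson_interval(k, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default)."""
    p = k / n
    denom = 1.0 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical example: 13 of 20 simulated errors detected across a set of reviews.
print(wilson_interval(13, 20))
```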
Hanson, Lisa C; Taylor, Nicholas F; McBurney, Helen
2016-09-01
To determine the retest reliability of the 10m incremental shuttle walk test (ISWT) in a mixed cardiac rehabilitation population. Participants completed two 10m ISWTs in a single session in a repeated measures study. Ten participants completed a third 10m ISWT as part of a pilot study. Hospital physiotherapy department. 62 adults aged a mean of 68 years (SD 10) referred to a cardiac rehabilitation program. Retest reliability of the 10m ISWT expressed as relative reliability and measurement error. Relative reliability was expressed as an intraclass correlation coefficient (ICC), and measurement error as the standard error of measurement (SEM) with 95% confidence intervals for the group and the individual. There was a high level of relative reliability over the two walks with an ICC of .99. The SEM (agreement) was 17m, and a change of at least 23m for the group and 54m for the individual would be required to be 95% confident of exceeding measurement error. The 10m ISWT demonstrated good retest reliability and is sufficiently reliable to be applied in practice in this population without the use of a practice test. Copyright © 2015 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
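As a rough guide to where numbers like these come from (the article's exact computation may differ), the conventional formulas for the standard error of measurement and the minimal detectable change are sketched below with hypothetical inputs.

```python
import math

def sem(sd_between, icc):
    """Standard error of measurement from between-subject SD and ICC."""
    return sd_between * math.sqrt(1.0 - icc)

def mdc95(sem_value):
    """Minimal detectable change for an individual at 95% confidence."""
    return 1.96 * math.sqrt(2.0) * sem_value

# Hypothetical values loosely in the range reported for shuttle walk tests.
s = sem(sd_between=170.0, icc=0.99)
print(s, mdc95(s))
```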
Jackson, Simon A.; Kleitman, Sabina; Howie, Pauline; Stankov, Lazar
2016-01-01
In this paper, we investigate whether individual differences in performance on heuristic and biases tasks can be explained by cognitive abilities, monitoring confidence, and control thresholds. Current theories explain individual differences in these tasks by the ability to detect errors and override automatic but biased judgments, and deliberative cognitive abilities that help to construct the correct response. Here we retain cognitive abilities but disentangle error detection, proposing that lower monitoring confidence and higher control thresholds promote error checking. Participants (N = 250) completed tasks assessing their fluid reasoning abilities, stable monitoring confidence levels, and the control threshold they impose on their decisions. They also completed seven typical heuristic and biases tasks such as the cognitive reflection test and Resistance to Framing. Using structural equation modeling, we found that individuals with higher reasoning abilities, lower monitoring confidence, and higher control threshold performed significantly and, at times, substantially better on the heuristic and biases tasks. Individuals with higher control thresholds also showed lower preferences for risky alternatives in a gambling task. Furthermore, residual correlations among the heuristic and biases tasks were reduced to null, indicating that cognitive abilities, monitoring confidence, and control thresholds accounted for their shared variance. Implications include the proposal that the capacity to detect errors does not differ between individuals. Rather, individuals might adopt varied strategies that promote error checking to different degrees, regardless of whether they have made a mistake or not. The results support growing evidence that decision-making involves cognitive abilities that construct actions and monitoring and control processes that manage their initiation. PMID:27790170
WASP (Write a Scientific Paper) using Excel - 6: Standard error and confidence interval.
Grech, Victor
2018-03-01
The calculation of descriptive statistics includes the calculation of standard error and confidence interval, an inevitable component of data analysis in inferential statistics. This paper provides pointers as to how to do this in Microsoft Excel™. Copyright © 2018 Elsevier B.V. All rights reserved.
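The paper itself works in Excel (typically via functions such as STDEV.S, SQRT, COUNT and CONFIDENCE.T); an equivalent calculation in Python, on made-up data, looks like this.

```python
import numpy as np
from scipy import stats

data = np.array([4.2, 5.1, 3.8, 4.9, 5.3, 4.4, 4.7])     # made-up sample

mean = data.mean()
se = data.std(ddof=1) / np.sqrt(data.size)                # standard error of the mean
t_crit = stats.t.ppf(0.975, df=data.size - 1)             # two-sided 95% critical value
print(f"mean = {mean:.2f}, SE = {se:.2f}, "
      f"95% CI = ({mean - t_crit * se:.2f}, {mean + t_crit * se:.2f})")
```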
NASA Astrophysics Data System (ADS)
Pernot, Pascal; Savin, Andreas
2018-06-01
Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
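The two statistics advocated here reduce to simple operations on the empirical distribution of unsigned errors; a minimal sketch, with a made-up error sample and threshold, is:

```python
import numpy as np

rng = np.random.default_rng(3)
errors = rng.normal(0.5, 2.0, 500)       # made-up, non-zero-centered model errors
abs_err = np.abs(errors)

eta = 1.0                                # chosen error threshold (assumption)
p_below = np.mean(abs_err < eta)         # statistic (1): P(|error| < eta)
q95 = np.quantile(abs_err, 0.95)         # statistic (2): 95th percentile of |error|
print(p_below, q95)
```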
CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.
Cooley, Richard L.; Vecchia, Aldo V.
1987-01-01
A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.
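A schematic version of the Monte Carlo step (not the authors' code): sample parameters within their assumed extreme ranges, optionally add the random error in the dependent variable, push each draw through the model output function, and take quantiles. The parameter ranges, model, and error level below are invented placeholders, and the independent uniform sampling ignores the ordering constraints the method allows.

```python
import numpy as np

rng = np.random.default_rng(4)

def model_output(k, s):
    """Placeholder for the model prediction of interest (e.g., a head value)."""
    return 10.0 / k + 0.5 * s

# Assumed extreme ranges for two parameters (e.g., conductivity and storage).
k_draws = rng.uniform(1.0, 5.0, 10_000)
s_draws = rng.uniform(0.1, 0.3, 10_000)
outputs = model_output(k_draws, s_draws)

# Confidence interval on the predicted quantity from parameter uncertainty alone.
print("95% confidence interval:", np.quantile(outputs, [0.025, 0.975]))

# Prediction interval: additionally include random error in the dependent variable.
outputs_pred = outputs + rng.normal(0.0, 0.4, outputs.size)
print("95% prediction interval:", np.quantile(outputs_pred, [0.025, 0.975]))
```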
40 CFR Appendix F to Part 60 - Quality Assurance Procedures
Code of Federal Regulations, 2014 CFR
2014-07-01
... plus the 2.5 percent error confidence coefficient of a series of tests divided by the mean of the RM... the daily zero (or low-level) CD or the daily high-level CD exceeds two times the limits of the... (or low-level) or high-level CD result exceeds four times the applicable drift specification in...
Results from a NIST-EPA Interagency Agreement on Understanding Systematic Measurement Error in Thermal-Optical Analysis for PM Black Carbon Using Response Surfaces and Surface Confidence Intervals will be presented at the American Association for Aerosol Research (AAAR) 24th Annu...
Confidence Intervals for Weighted Composite Scores under the Compound Binomial Error Model
ERIC Educational Resources Information Center
Kim, Kyung Yong; Lee, Won-Chan
2018-01-01
Reporting confidence intervals with test scores helps test users make important decisions about examinees by providing information about the precision of test scores. Although a variety of estimation procedures based on the binomial error model are available for computing intervals for test scores, these procedures assume that items are randomly…
The Relationship Between Eyewitness Confidence and Identification Accuracy: A New Synthesis.
Wixted, John T; Wells, Gary L
2017-05-01
The U.S. legal system increasingly accepts the idea that the confidence expressed by an eyewitness who identified a suspect from a lineup provides little information as to the accuracy of that identification. There was a time when this pessimistic assessment was entirely reasonable because of the questionable eyewitness-identification procedures that police commonly employed. However, after more than 30 years of eyewitness-identification research, our understanding of how to properly conduct a lineup has evolved considerably, and the time seems ripe to ask how eyewitness confidence informs accuracy under more pristine testing conditions (e.g., initial, uncontaminated memory tests using fair lineups, with no lineup administrator influence, and with an immediate confidence statement). Under those conditions, mock-crime studies and police department field studies have consistently shown that, for adults, (a) confidence and accuracy are strongly related and (b) high-confidence suspect identifications are remarkably accurate. However, when certain non-pristine testing conditions prevail (e.g., when unfair lineups are used), the accuracy of even a high-confidence suspect ID is seriously compromised. Unfortunately, some jurisdictions have not yet made reforms that would create pristine testing conditions and, hence, our conclusions about the reliability of high-confidence identifications cannot yet be applied to those jurisdictions. However, understanding the information value of eyewitness confidence under pristine testing conditions can help the criminal justice system to simultaneously achieve both of its main objectives: to exonerate the innocent (by better appreciating that initial, low-confidence suspect identifications are error prone) and to convict the guilty (by better appreciating that initial, high-confidence suspect identifications are surprisingly accurate under proper testing conditions).
Confidence intervals in Flow Forecasting by using artificial neural networks
NASA Astrophysics Data System (ADS)
Panagoulia, Dionysia; Tsekouras, George
2014-05-01
One of the major inadequacies in implementation of Artificial Neural Networks (ANNs) for flow forecasting is the development of confidence intervals, because the relevant estimation cannot be implemented directly, in contrast to classical forecasting methods. The variation in the ANN output is a measure of uncertainty in the model predictions based on the training data set. Different methods for uncertainty analysis, such as bootstrap, Bayesian, and Monte Carlo approaches, have already been proposed for hydrologic and geophysical models, while methods for confidence intervals, such as error output, re-sampling, and multi-linear regression adapted to ANNs, have been used for power load forecasting [1-2]. The aim of this paper is to present the re-sampling method for ANN prediction models and to develop it for next-day flow forecasting. The re-sampling method is based on the ascending sorting of the errors between real and predicted values for all input vectors. The cumulative sample distribution function of the prediction errors is calculated and the confidence intervals are estimated by keeping the intermediate value, rejecting the extreme values according to the desired confidence levels, and holding the intervals symmetrical in probability. For application of the confidence intervals issue, input vectors are used from the Mesochora catchment in western-central Greece. The ANN's training algorithm is the stochastic training back-propagation process with decreasing functions of learning rate and momentum term, for which an optimization process is conducted regarding the crucial parameter values, such as the number of neurons, the kind of activation functions, the initial values and time parameters of learning rate and momentum term etc. Input variables are historical data of previous days, such as flows, nonlinearly weather related temperatures and nonlinearly weather related rainfalls based on correlation analysis between the flow under prediction and each implicit input variable of different ANN structures [3]. The performance of each ANN structure is evaluated by the voting analysis based on eleven criteria, which are the root mean square error (RMSE), the correlation index (R), the mean absolute percentage error (MAPE), the mean percentage error (MPE), the mean error (ME), the percentage volume in errors (VE), the percentage error in peak (MF), the normalized mean bias error (NMBE), the normalized root mean square error (NRMSE), the Nash-Sutcliffe model efficiency coefficient (E) and the modified Nash-Sutcliffe model efficiency coefficient (E1). The next-day flow for the test set is calculated using the best ANN structure's model. Consequently, the confidence intervals of various confidence levels for training, evaluation and test sets are compared in order to explore the generalisation dynamics of confidence intervals from training and evaluation sets. [1] H.S. Hippert, C.E. Pedreira, R.C. Souza, "Neural networks for short-term load forecasting: A review and evaluation," IEEE Trans. on Power Systems, vol. 16, no. 1, 2001, pp. 44-55. [2] G. J. Tsekouras, N.E. Mastorakis, F.D. Kanellos, V.T. Kontargyri, C.D. Tsirekis, I.S. Karanasiou, Ch.N. Elias, A.D. Salis, P.A. Kontaxis, A.A.
Gialketsi: "Short term load forecasting in Greek interconnected power system using ANN: Confidence Interval using a novel re-sampling technique with corrective Factor", WSEAS International Conference on Circuits, Systems, Electronics, Control & Signal Processing, (CSECS '10), Vouliagmeni, Athens, Greece, December 29-31, 2010. [3] D. Panagoulia, I. Trichakis, G. J. Tsekouras: "Flow Forecasting via Artificial Neural Networks - A Study for Input Variables conditioned on atmospheric circulation", European Geosciences Union, General Assembly 2012 (NH1.1 / AS1.16 - Extreme meteorological and hydrological events induced by severe weather and climate change), Vienna, Austria, 22-27 April 2012.
The role of unconscious memory errors in judgments of confidence for sentence recognition.
Sampaio, Cristina; Brewer, William F
2009-03-01
The present experiment tested the hypothesis that unconscious reconstructive memory processing can lead to the breakdown of the relationship between memory confidence and memory accuracy. Participants heard deceptive schema-inference sentences and nondeceptive sentences and were tested with either simple or forced-choice recognition. The nondeceptive items showed a positive relation between confidence and accuracy in both simple and forced-choice recognition. However, the deceptive items showed a strong negative confidence/accuracy relationship in simple recognition and a low positive relationship in forced choice. The mean levels of confidence for erroneous responses for deceptive items were inappropriately high in simple recognition but lower in forced choice. These results suggest that unconscious reconstructive memory processes involved in memory for the deceptive schema-inference items led to inaccurate confidence judgments and that, when participants were made aware of the deceptive nature of the schema-inference items through the use of a forced-choice procedure, they adjusted their confidence accordingly.
Large Sample Confidence Limits for Goodman and Kruskal's Proportional Prediction Measure TAU-b
ERIC Educational Resources Information Center
Berry, Kenneth J.; Mielke, Paul W.
1976-01-01
A Fortran Extended program which computes Goodman and Kruskal's Tau-b, its asymmetrical counterpart, Tau-a, and three sets of confidence limits for each coefficient under full multinomial and proportional stratified sampling is presented. A correction of an error in the calculation of the large sample standard error of Tau-b is discussed.…
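For readers who want the computation rather than the Fortran program, the following is a hedged Python sketch of Goodman and Kruskal's proportional-prediction tau from a contingency table; whether this direction corresponds to the program's Tau-b or Tau-a is an assumption here, and the confidence-limit machinery of the program is not reproduced:

```python
import numpy as np

def goodman_kruskal_tau(table):
    """Proportional reduction in error for predicting columns from rows.

    `table` is a 2-D array of counts. Returns Goodman and Kruskal's tau for
    predicting the column variable from the row variable (a sketch; the
    Tau-a/Tau-b labelling in the program above follows its own convention).
    """
    n_ij = np.asarray(table, dtype=float)
    n = n_ij.sum()
    col_p = n_ij.sum(axis=0) / n                 # marginal column proportions
    row_tot = n_ij.sum(axis=1, keepdims=True)    # row totals
    err_marginal = 1.0 - np.sum(col_p ** 2)      # prediction error ignoring rows
    err_conditional = 1.0 - np.sum(n_ij ** 2 / (n * row_tot))
    return (err_marginal - err_conditional) / err_marginal

table = [[30, 10], [10, 30]]
print(round(goodman_kruskal_tau(table), 3))
```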
NASA Astrophysics Data System (ADS)
Fernandez, Alvaro; Müller, Inigo A.; Rodríguez-Sanz, Laura; van Dijk, Joep; Looser, Nathan; Bernasconi, Stefano M.
2017-12-01
Carbonate clumped isotopes offer a potentially transformational tool to interpret Earth's history, but the proxy is still limited by poor interlaboratory reproducibility. Here, we focus on the uncertainties that result from the analysis of only a few replicate measurements to understand the extent to which unconstrained errors affect calibration relationships and paleoclimate reconstructions. We find that highly precise data can be routinely obtained with multiple replicate analyses, but this is not always done in many laboratories. For instance, using published estimates of external reproducibilities we find that typical clumped isotope measurements (three replicate analyses) have margins of error at the 95% confidence level (CL) that are too large for many applications. These errors, however, can be systematically reduced with more replicate measurements. Second, using a Monte Carlo-type simulation we demonstrate that the degree of disagreement on published calibration slopes is about what we should expect considering the precision of Δ47 data, the number of samples and replicate analyses, and the temperature range covered in published calibrations. Finally, we show that the way errors are typically reported in clumped isotope data can be problematic and lead to the impression that data are more precise than warranted. We recommend that uncertainties in Δ47 data should no longer be reported as the standard error of a few replicate measurements. Instead, uncertainties should be reported as margins of error at a specified confidence level (e.g., 68% or 95% CL). These error bars are a more realistic indication of the reliability of a measurement.
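The replicate-count argument above can be illustrated with a small sketch: a t-based margin of error for the mean of n replicates shrinks roughly as 1/sqrt(n) but is inflated by the t critical value when n is small. The external reproducibility value used below is purely illustrative, not a figure from the paper:

```python
import numpy as np
from scipy import stats

def margin_of_error(n_replicates, external_sd, level=0.95):
    """t-based margin of error for the mean of n replicate measurements.

    `external_sd` is the long-term (external) reproducibility of a single
    measurement; the value below is illustrative only.
    """
    se = external_sd / np.sqrt(n_replicates)
    t_crit = stats.t.ppf(0.5 + level / 2.0, df=n_replicates - 1)
    return t_crit * se

for n in (3, 10, 30):
    print(n, round(margin_of_error(n, external_sd=0.030), 4))
```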
Hall, Karina M; Brieger, Daniel G; De Silva, Sukhita H; Pfister, Benjamin F; Youlden, Daniel J; John-Leader, Franklin; Pit, Sabrina W
2016-01-01
Objectives. To determine the confidence and ability to use condoms correctly and consistently and the predictors of confidence in young Australians attending a festival. Methods. 288 young people aged 18 to 29 attending a mixed-genre music festival completed a survey measuring demographics, self-reported confidence using condoms, ability to use condoms, and issues experienced when using condoms in the past 12 months. Results. Self-reported confidence using condoms was high (77%). Multivariate analyses showed confidence was associated with being male (P < 0.001) and having had five or more lifetime sexual partners (P = 0.038). Reading packet instructions was associated with increased condom use confidence (P = 0.011). Amongst participants who had used a condom in the last year, 37% had experienced the condom breaking, 48% had experienced the condom slipping off during intercourse, and 51% had experienced it slipping off when withdrawing the penis after sex. Conclusion. This population of young people is experiencing high rates of condom failures and is using them inconsistently or incorrectly, demonstrating the need to improve attitudes, behaviour, and knowledge about correct and consistent condom usage. There is a need to empower young Australians, particularly females, with knowledge and confidence in order to improve condom use self-efficacy.
Forensic surface metrology: tool mark evidence.
Gambino, Carol; McLaughlin, Patrick; Kuo, Loretta; Kammerman, Frani; Shenkin, Peter; Diaczuk, Peter; Petraco, Nicholas; Hamby, James; Petraco, Nicholas D K
2011-01-01
Over the last several decades, forensic examiners of impression evidence have come under scrutiny in the courtroom due to analysis methods that rely heavily on subjective morphological comparisons. Currently, there is no universally accepted system that generates numerical data to independently corroborate visual comparisons. Our research attempts to develop such a system for tool mark evidence, proposing a methodology that objectively evaluates the association of striated tool marks with the tools that generated them. In our study, 58 primer shear marks on 9 mm cartridge cases, fired from four Glock model 19 pistols, were collected using high-resolution white light confocal microscopy. The resulting three-dimensional surface topographies were filtered to extract all "waviness surfaces", the essential "line" information that firearm and tool mark examiners view under a microscope. Extracted waviness profiles were processed with principal component analysis (PCA) for dimension reduction. Support vector machines (SVM) were used to make the profile-gun associations, and conformal prediction theory (CPT) to establish confidence levels. At the 95% confidence level, CPT coupled with PCA-SVM yielded an empirical error rate of 3.5%. Complementary bootstrap-based computations estimated error rates of 0%, indicating that the error rate for the algorithmic procedure is likely to remain low on larger data sets. Finally, suggestions are made for practical courtroom application of CPT for assigning levels of confidence to SVM identifications of tool marks recorded with confocal microscopy. Copyright © 2011 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Rutledge, Charles K.
1988-01-01
The validity of applying chi-square based confidence intervals to far-field acoustic flyover spectral estimates was investigated. Simulated data, using a Kendall series and experimental acoustic data from the NASA/McDonnell Douglas 500E acoustics test, were analyzed. Statistical significance tests to determine the equality of distributions of the simulated and experimental data relative to theoretical chi-square distributions were performed. Bias and uncertainty errors associated with the spectral estimates were easily identified from the data sets. A model relating the uncertainty and bias errors to the estimates was developed, which aided in determining the appropriateness of the chi-square distribution based confidence intervals. Such confidence intervals were appropriate for nontonally associated frequencies of the experimental data but were inappropriate for tonally associated estimate distributions. The inappropriateness at the tonally associated frequencies was indicated by the presence of bias error and nonconformity of the distributions to the theoretical chi-square distribution. A technique for determining appropriate confidence intervals at the tonally associated frequencies was suggested.
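A short sketch of the standard chi-square interval assumed in such analyses, in which an averaged spectral estimate with ν equivalent degrees of freedom satisfies ν·Ŝ/S ~ χ²(ν); the estimate and degrees of freedom below are hypothetical:

```python
from scipy import stats

def chi2_psd_interval(s_hat, dof, level=0.95):
    """Chi-square confidence interval for a spectral (PSD) estimate.

    Assumes dof * s_hat / S follows a chi-square distribution with `dof`
    degrees of freedom, the usual model for averaged periodogram estimates.
    """
    alpha = 1.0 - level
    lower = dof * s_hat / stats.chi2.ppf(1.0 - alpha / 2.0, dof)
    upper = dof * s_hat / stats.chi2.ppf(alpha / 2.0, dof)
    return lower, upper

# Hypothetical estimate with 16 equivalent degrees of freedom.
print(chi2_psd_interval(s_hat=1.0, dof=16))
```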
Intertester agreement in refractive error measurements.
Huang, Jiayan; Maguire, Maureen G; Ciner, Elise; Kulp, Marjean T; Quinn, Graham E; Orel-Bixler, Deborah; Cyert, Lynn A; Moore, Bruce; Ying, Gui-Shuang
2013-10-01
To determine the intertester agreement of refractive error measurements between lay and nurse screeners using the Retinomax Autorefractor and the SureSight Vision Screener. Trained lay and nurse screeners measured refractive error in 1452 preschoolers (3 to 5 years old) using the Retinomax and the SureSight in a random order for screeners and instruments. Intertester agreement between lay and nurse screeners was assessed for sphere, cylinder, and spherical equivalent (SE) using the mean difference and the 95% limits of agreement. The mean intertester difference (lay minus nurse) was compared between groups defined based on the child's age, cycloplegic refractive error, and the reading's confidence number using analysis of variance. The limits of agreement were compared between groups using the Brown-Forsythe test. Intereye correlation was accounted for in all analyses. The mean intertester differences (95% limits of agreement) were -0.04 (-1.63, 1.54) diopter (D) sphere, 0.00 (-0.52, 0.51) D cylinder, and -0.04 (-1.65, 1.56) D SE for the Retinomax and 0.05 (-1.48, 1.58) D sphere, 0.01 (-0.58, 0.60) D cylinder, and 0.06 (-1.45, 1.57) D SE for the SureSight. For either instrument, the mean intertester differences in sphere and SE did not differ by the child's age, cycloplegic refractive error, or the reading's confidence number. However, for both instruments, the limits of agreement were wider when eyes had significant refractive error or the reading's confidence number was below the manufacturer's recommended value. Among Head Start preschool children, trained lay and nurse screeners agree well in measuring refractive error using the Retinomax or the SureSight. Both instruments had similar intertester agreement in refractive error measurements independent of the child's age. Significant refractive error and a reading with low confidence number were associated with worse intertester agreement.
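The 95% limits of agreement quoted above follow the usual Bland-Altman construction (mean difference ± 1.96 SD of the differences). A sketch on synthetic data, without the study's adjustment for inter-eye correlation:

```python
import numpy as np

def limits_of_agreement(x_lay, x_nurse):
    """Mean intertester difference and 95% limits of agreement.

    A plain Bland-Altman computation on illustrative data; the study's
    adjustment for inter-eye correlation is not reproduced here.
    """
    d = np.asarray(x_lay) - np.asarray(x_nurse)
    mean_d, sd_d = d.mean(), d.std(ddof=1)
    return mean_d, (mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d)

rng = np.random.default_rng(1)
truth = rng.normal(1.0, 1.5, size=200)            # "true" sphere (D)
lay = truth + rng.normal(0.0, 0.55, size=200)
nurse = truth + rng.normal(0.0, 0.55, size=200)
mean_d, (lo, hi) = limits_of_agreement(lay, nurse)
print(f"mean difference {mean_d:+.2f} D, limits of agreement ({lo:.2f}, {hi:.2f}) D")
```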
Ensemble of classifiers for confidence-rated classification of NDE signal
NASA Astrophysics Data System (ADS)
Banerjee, Portia; Safdarnejad, Seyed; Udpa, Lalita; Udpa, Satish
2016-02-01
An ensemble of classifiers, in general, aims to improve classification accuracy by combining results from multiple weak hypotheses into a single strong classifier through weighted majority voting. Improved versions of ensemble classifiers generate self-rated confidence scores that estimate the reliability of each of their predictions and boost the classifier using these confidence-rated predictions. However, such a confidence metric is based only on the rate of correct classification. Although ensembles of classifiers have been widely used in computational intelligence, existing works largely overlook the effect of all factors of unreliability on the confidence of classification. With relevance to NDE, classification results are affected by inherent ambiguity of classification, non-discriminative features, inadequate training samples and noise due to measurement. In this paper, we extend existing ensemble classification by maximizing the confidence of every classification decision in addition to minimizing the classification error. Initial results of the approach on data from eddy current inspection show improvement in the classification performance of defect and non-defect indications.
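As a rough illustration of confidence-rated ensemble output (not the authors' NDE pipeline), the sketch below trains a boosted ensemble on synthetic data, treats the weighted-vote probability as a per-decision confidence score, and flags low-confidence decisions; the threshold is arbitrary:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for defect / non-defect indications.
X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                           flip_y=0.05, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)
pred = proba.argmax(axis=1)
confidence = proba.max(axis=1)      # weighted-vote confidence per decision

# Flag decisions whose confidence falls below a chosen threshold for review.
low_conf = confidence < 0.6
print(f"accuracy {np.mean(pred == y_te):.3f}, "
      f"{low_conf.sum()} of {len(y_te)} decisions flagged as low confidence")
```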
Eisele, Thomas P; Rhoda, Dale A; Cutts, Felicity T; Keating, Joseph; Ren, Ruilin; Barros, Aluisio J D; Arnold, Fred
2013-01-01
Nationally representative household surveys are increasingly relied upon to measure maternal, newborn, and child health (MNCH) intervention coverage at the population level in low- and middle-income countries. Surveys are the best tool we have for this purpose and are central to national and global decision making. However, all survey point estimates have a certain level of error (total survey error) comprising sampling and non-sampling error, both of which must be considered when interpreting survey results for decision making. In this review, we discuss the importance of considering these errors when interpreting MNCH intervention coverage estimates derived from household surveys, using relevant examples from national surveys to provide context. Sampling error is usually thought of as the precision of a point estimate and is represented by 95% confidence intervals, which are measurable. Confidence intervals can inform judgments about whether estimated parameters are likely to be different from the real value of a parameter. We recommend, therefore, that confidence intervals for key coverage indicators should always be provided in survey reports. By contrast, the direction and magnitude of non-sampling error is almost always unmeasurable, and therefore unknown. Information error and bias are the most common sources of non-sampling error in household survey estimates and we recommend that they should always be carefully considered when interpreting MNCH intervention coverage based on survey data. Overall, we recommend that future research on measuring MNCH intervention coverage should focus on refining and improving survey-based coverage estimates to develop a better understanding of how results should be interpreted and used.
Paudel, Prakash; Ramson, Prasidh; Naduvilath, Thomas; Wilson, David; Phuong, Ha Thanh; Ho, Suit M; Giap, Nguyen V
2014-01-01
Background: To assess the prevalence of vision impairment and refractive error in school children 12–15 years of age in Ba Ria – Vung Tau province, Vietnam. Design: Prospective, cross-sectional study. Participants: 2238 secondary school children. Methods: Subjects were selected based on stratified multistage cluster sampling of 13 secondary schools from urban, rural and semi-urban areas. The examination included visual acuity measurements, ocular motility evaluation, cycloplegic autorefraction, and examination of the external eye, anterior segment, media and fundus. Main Outcome Measures: Visual acuity and principal cause of vision impairment. Results: The prevalences of uncorrected and presenting visual acuity ≤6/12 in the better eye were 19.4% (95% confidence interval, 12.5–26.3) and 12.2% (95% confidence interval, 8.8–15.6), respectively. Refractive error was the cause of vision impairment in 92.7%, amblyopia in 2.2%, cataract in 0.7%, retinal disorders in 0.4%, other causes in 1.5% and unexplained causes in the remaining 2.6%. The prevalence of vision impairment due to myopia in either eye (–0.50 diopter or greater) was 20.4% (95% confidence interval, 12.8–28.0), hyperopia (≥2.00 D) was 0.4% (95% confidence interval, 0.0–0.7) and emmetropia with astigmatism (≥0.75 D) was 0.7% (95% confidence interval, 0.2–1.2). Vision impairment due to myopia was associated with higher school grade and increased time spent reading and working on a computer. Conclusions: Uncorrected refractive error, particularly myopia, among secondary school children in Vietnam is a major public health problem. School-based eye health initiatives such as refractive error screening are warranted to reduce vision impairment. PMID:24299145
Local-search based prediction of medical image registration error
NASA Astrophysics Data System (ADS)
Saygili, Görkem
2018-03-01
Medical image registration is a crucial task in many different medical imaging applications. Hence, a considerable amount of work has been published recently that aims to predict the error in a registration without any human effort. If provided, these error predictions can be used as feedback to the registration algorithm to further improve its performance. Recent methods generally start by extracting image-based and deformation-based features, then apply feature pooling and finally train a Random Forest (RF) regressor to predict the real registration error. Image-based features can be calculated after applying a single registration but provide limited accuracy, whereas deformation-based features such as the variation of the deformation vector field may require up to 20 registrations, which is a considerably time-consuming task. This paper proposes to use features extracted from a local search algorithm as image-based features to estimate the error of a registration. The proposed method comprises a local search algorithm to find corresponding voxels between registered image pairs and, based on the amount of shift and stereo confidence measures, it densely predicts the amount of registration error in millimetres using an RF regressor. Compared to other algorithms in the literature, the proposed algorithm does not require multiple registrations, can be efficiently implemented on a Graphics Processing Unit (GPU) and can still provide highly accurate error predictions in the presence of large registration errors. Experimental results with real registrations on a public dataset indicate a substantially high accuracy achieved by using features from the local search algorithm.
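A toy stand-in for the final regression stage (not the paper's implementation): synthetic local-search features, a best-shift magnitude and a cost-ratio confidence, are fed to a Random Forest regressor that predicts registration error in millimetres:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "registration cases": the true residual error plus two features a
# local search could produce: the best shift found and a cost-ratio confidence.
n = 1000
true_error_mm = rng.gamma(shape=2.0, scale=1.5, size=n)              # target (mm)
best_shift = true_error_mm + rng.normal(0.0, 0.5, size=n)            # noisy shift
cost_ratio = np.exp(-true_error_mm) + rng.normal(0.0, 0.05, size=n)  # confidence
X = np.column_stack([best_shift, cost_ratio])

X_tr, X_te, y_tr, y_te = train_test_split(X, true_error_mm, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print(f"mean absolute prediction error: {np.abs(pred - y_te).mean():.2f} mm")
```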
Disclosure of Medical Errors in Oman
Norrish, Mark I. K.
2015-01-01
Objectives: This study aimed to provide insight into the preferences for and perceptions of medical error disclosure (MED) by members of the public in Oman. Methods: Between January and June 2012, an online survey was used to collect responses from 205 members of the public across five governorates of Oman. Results: A disclosure gap was revealed between the respondents' preferences for MED and perceived current MED practices in Oman. This disclosure gap extended to both the type of error and the person most likely to disclose the error. Errors resulting in patient harm were found to have a strong influence on individuals' perceived quality of care. In addition, full disclosure was found to be highly valued by respondents and able to mitigate for a perceived lack of care in cases where medical errors led to damages. Conclusion: The perceived disclosure gap between respondents' MED preferences and perceptions of current MED practices in Oman needs to be addressed in order to increase public confidence in the national health care system. PMID:26052463
Standard Errors and Confidence Intervals of Norm Statistics for Educational and Psychological Tests.
Oosterhuis, Hannah E M; van der Ark, L Andries; Sijtsma, Klaas
2016-11-14
Norm statistics allow for the interpretation of scores on psychological and educational tests, by relating the test score of an individual test taker to the test scores of individuals belonging to the same gender, age, or education groups, et cetera. Given the uncertainty due to sampling error, one would expect researchers to report standard errors for norm statistics. In practice, standard errors are seldom reported; they are either unavailable or derived under strong distributional assumptions that may not be realistic for test scores. We derived standard errors for four norm statistics (standard deviation, percentile ranks, stanine boundaries and Z-scores) under the mild assumption that the test scores are multinomially distributed. A simulation study showed that the standard errors were unbiased and that corresponding Wald-based confidence intervals had good coverage. Finally, we discuss the possibilities for applying the standard errors in practical test use in education and psychology. The procedure is provided via the R function check.norms, which is available in the mokken package.
Using an R Shiny to Enhance the Learning Experience of Confidence Intervals
ERIC Educational Resources Information Center
Williams, Immanuel James; Williams, Kelley Kim
2018-01-01
Many students find understanding confidence intervals difficult, especially because of the amalgamation of concepts such as confidence levels, standard error, point estimates and sample sizes. An R Shiny application was created to assist the learning process of confidence intervals using graphics and data from the US National Basketball…
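The concept such an app demonstrates can also be shown with a few lines of simulation: the long-run coverage of 95% t-intervals across repeated samples. This is a generic illustration, not the Shiny application itself:

```python
import numpy as np
from scipy import stats

def coverage(n, true_mean=20.0, true_sd=5.0, level=0.95, n_sims=5000, seed=0):
    """Fraction of t-based confidence intervals that contain the true mean."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        sample = rng.normal(true_mean, true_sd, size=n)
        se = sample.std(ddof=1) / np.sqrt(n)
        t_crit = stats.t.ppf(0.5 + level / 2.0, df=n - 1)
        m = sample.mean()
        hits += (m - t_crit * se <= true_mean <= m + t_crit * se)
    return hits / n_sims

for n in (5, 15, 50):
    print(n, coverage(n))
```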
ERIC Educational Resources Information Center
Goedeme, Tim
2013-01-01
If estimates are based on samples, they should be accompanied by appropriate standard errors and confidence intervals. This is true for scientific research in general, and is even more important if estimates are used to inform and evaluate policy measures such as those aimed at attaining the Europe 2020 poverty reduction target. In this article I…
40 CFR Appendix F to Part 60 - Quality Assurance Procedures
Code of Federal Regulations, 2013 CFR
2013-07-01
... plus the 2.5 percent error confidence coefficient of a series of tests divided by the mean of the RM...-level) CD or the daily high-level CD exceeds two times the limits of the applicable PS's in appendix B... result exceeds four times the applicable drift specification in appendix B during any CD check, the CEMS...
40 CFR Appendix F to Part 60 - Quality Assurance Procedures
Code of Federal Regulations, 2010 CFR
2010-07-01
... plus the 2.5 percent error confidence coefficient of a series of tests divided by the mean of the RM...-level) CD or the daily high-level CD exceeds two times the limits of the applicable PS's in appendix B... result exceeds four times the applicable drift specification in appendix B during any CD check, the CEMS...
40 CFR Appendix F to Part 60 - Quality Assurance Procedures
Code of Federal Regulations, 2011 CFR
2011-07-01
... plus the 2.5 percent error confidence coefficient of a series of tests divided by the mean of the RM...-level) CD or the daily high-level CD exceeds two times the limits of the applicable PS's in appendix B... result exceeds four times the applicable drift specification in appendix B during any CD check, the CEMS...
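The excerpts above refer to the relative accuracy statistic used in CEMS quality assurance, which combines the mean CEMS-minus-reference-method difference with a 2.5 percent error confidence coefficient. A hedged sketch of that calculation follows; the exact constants and data are illustrative and should be checked against the regulation itself:

```python
import numpy as np
from scipy import stats

def relative_accuracy(cems, reference):
    """Relative accuracy of CEMS data against reference method (RM) tests.

    RA = (|mean difference| + |confidence coefficient|) / mean RM * 100,
    with the confidence coefficient cc = t(0.975, n-1) * s_d / sqrt(n)
    (the "2.5 percent error confidence coefficient" in the excerpt above).
    """
    d = np.asarray(cems) - np.asarray(reference)
    n = d.size
    cc = stats.t.ppf(0.975, n - 1) * d.std(ddof=1) / np.sqrt(n)
    return (abs(d.mean()) + abs(cc)) / np.mean(reference) * 100.0

# Illustrative paired CEMS and reference-method values.
cems = [101.0, 98.5, 103.2, 99.8, 102.1, 100.4, 97.9, 101.7, 100.9]
rm   = [100.0, 99.0, 101.5, 100.2, 101.0, 99.5, 98.8, 100.6, 100.1]
print(f"relative accuracy: {relative_accuracy(cems, rm):.1f}%")
```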
Opar, David A; Piatkowski, Timothy; Williams, Morgan D; Shield, Anthony J
2013-09-01
Reliability and case-control injury study. To determine if a novel device designed to measure eccentric knee flexor strength via the Nordic hamstring exercise displays acceptable test-retest reliability; to determine normative values for eccentric knee flexor strength derived from the device in individuals without a history of hamstring strain injury (HSI); and to determine if the device can detect weakness in elite athletes with a previous history of unilateral HSI. HSI and reinjury are the most common cause of lost playing time in a number of sports. Eccentric knee flexor weakness is a major modifiable risk factor for future HSI. However, at present, there is a lack of easily accessible equipment to assess eccentric knee flexor strength. Thirty recreationally active males without a history of HSI completed the Nordic hamstring exercise on the device on 2 separate occasions. Intraclass correlation coefficients, typical error, typical error as a coefficient of variation, and minimal detectable change at a 95% confidence level were calculated. Normative strength data were determined using the most reliable measurement. An additional 20 elite athletes with a unilateral history of HSI within the previous 12 months performed the Nordic hamstring exercise on the device to determine if residual eccentric muscle weakness existed in the previously injured limb. The device displayed high to moderate reliability (intraclass correlation coefficient = 0.83-0.90; typical error, 21.7-27.5 N; typical error as a coefficient of variation, 5.8%-8.5%; minimal detectable change at a 95% confidence level, 60.1-76.2 N). Mean ± SD normative eccentric flexor strength in the uninjured group was 344.7 ± 61.1 N for the left and 361.2 ± 65.1 N for the right side. The previously injured limb was 15% weaker than the contralateral uninjured limb (mean difference, 50.3 N; 95% confidence interval: 25.7, 74.9; P<.01), 15% weaker than the normative left limb (mean difference, 50.0 N; 95% confidence interval: 1.4, 98.5; P = .04), and 18% weaker than the normative right limb (mean difference, 66.5 N; 95% confidence interval: 18.0, 115.1; P<.01). The experimental device offers a reliable method to measure eccentric knee flexor strength and strength asymmetry and to detect residual weakness in previously injured elite athletes.
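The reliability statistics named above can be reproduced from paired test-retest trials as follows; the sketch uses the common definitions (typical error = SD of the between-day differences divided by sqrt(2); MDC95 = 1.96 · sqrt(2) · typical error) on illustrative data and omits the ICC:

```python
import numpy as np

def test_retest_stats(day1, day2):
    """Typical error, CV%, and minimal detectable change at the 95% level."""
    day1, day2 = np.asarray(day1, float), np.asarray(day2, float)
    diff = day2 - day1
    te = diff.std(ddof=1) / np.sqrt(2.0)                 # typical error
    cv = 100.0 * te / np.concatenate([day1, day2]).mean()  # as % of the mean
    mdc95 = 1.96 * np.sqrt(2.0) * te                     # minimal detectable change
    return te, cv, mdc95

rng = np.random.default_rng(2)
true_strength = rng.normal(350.0, 60.0, size=30)   # eccentric force (N), illustrative
day1 = true_strength + rng.normal(0.0, 25.0, size=30)
day2 = true_strength + rng.normal(0.0, 25.0, size=30)
te, cv, mdc = test_retest_stats(day1, day2)
print(f"typical error {te:.1f} N, CV {cv:.1f}%, MDC95 {mdc:.1f} N")
```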
Mesolimbic confidence signals guide perceptual learning in the absence of external feedback
Guggenmos, Matthias; Wilbertz, Gregor; Hebart, Martin N; Sterzer, Philipp
2016-01-01
It is well established that learning can occur without external feedback, yet normative reinforcement learning theories have difficulties explaining such instances of learning. Here, we propose that human observers are capable of generating their own feedback signals by monitoring internal decision variables. We investigated this hypothesis in a visual perceptual learning task using fMRI and confidence reports as a measure for this monitoring process. Employing a novel computational model in which learning is guided by confidence-based reinforcement signals, we found that mesolimbic brain areas encoded both anticipation and prediction error of confidence—in remarkable similarity to previous findings for external reward-based feedback. We demonstrate that the model accounts for choice and confidence reports and show that the mesolimbic confidence prediction error modulation derived through the model predicts individual learning success. These results provide a mechanistic neurobiological explanation for learning without external feedback by augmenting reinforcement models with confidence-based feedback. DOI: http://dx.doi.org/10.7554/eLife.13388.001 PMID:27021283
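A toy illustration of the core idea (confidence standing in for external reward, with its prediction error driving learning); this is not the authors' computational model, and all parameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, lr = 500, 0.1
value = np.zeros(2)      # learned values of two perceptual "choices"
expected_conf = 0.5      # learned baseline (expected) confidence

for t in range(n_trials):
    if rng.random() < 0.1:                       # occasional exploration
        choice = int(rng.random() < 0.5)
    else:
        choice = int(value[1] > value[0])
    # Hypothetical confidence signal: higher, on average, for option 1.
    mean_conf = 0.75 if choice == 1 else 0.55
    confidence = float(np.clip(rng.normal(mean_conf, 0.1), 0.0, 1.0))
    conf_pe = confidence - expected_conf          # confidence prediction error
    expected_conf += lr * conf_pe                 # update the baseline
    value[choice] += lr * conf_pe                 # reinforce the chosen option

print(f"learned values: {np.round(value, 2)}, expected confidence: {expected_conf:.2f}")
```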
Kim, Joo Hyoung; Cha, Jung Yul; Hwang, Chung Ju
2012-12-01
This in vitro study was undertaken to evaluate the physical, chemical, and biological properties of commercially available metal orthodontic brackets in South Korea, because national standards for these products are lacking. FOUR BRACKET BRANDS WERE TESTED FOR DIMENSIONAL ACCURACY, (MANUFACTURING ERRORS IN ANGULATION AND TORQUE), CYTOTOXICITY, COMPOSITION, ELUTION, AND CORROSION: Archist (Daeseung Medical), Victory (3M Unitek), Kosaka (Tomy), and Confidence (Shinye Odontology Materials). The tested rackets showed no significant differences in manufacturing errors in angulation, but Confidence brackets showed a significant difference in manufacturing errors in torque. None of the brackets were cytotoxic to mouse fibroblasts. The metal ion components did not show a regular increasing or decreasing trend of elution over time, but the volume of the total eluted metal ions increased: Archist brackets had the maximal Cr elution and Confidence brackets appeared to have the largest volume of total eluted metal ions because of excessive Ni elution. Confidence brackets showed the lowest corrosion resistance during potentiodynamic polarization. The results of this study could potentially be applied in establishing national standards for metal orthodontic brackets and in evaluating commercially available products.
Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty
NASA Astrophysics Data System (ADS)
Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; Alden, C.; White, J. W. C.
2015-04-01
Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net C global uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere, although there are certain environmental costs associated with this service, such as the acidification of ocean waters.
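Statements such as "93% confident that uptake has increased" can be read as the probability that a trend exceeds zero given its estimate and uncertainty under a normal error model. The numbers in the sketch below are purely illustrative, not values from the study:

```python
from scipy import stats

# Illustrative values only: an estimated trend in net C uptake and its
# 1-sigma uncertainty, not figures from the paper.
trend_estimate = 0.30
trend_sigma = 0.20

confidence_increase = stats.norm.cdf(trend_estimate / trend_sigma)
print(f"P(uptake increased) = {confidence_increase:.2f}")
```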
Hunter, Jessica Ezzell; Allen, Emily Graves; Shin, Mikyong; Bean, Lora J H; Correa, Adolfo; Druschel, Charlotte; Hobbs, Charlotte A; O'Leary, Leslie A; Romitti, Paul A; Royle, Marjorie H; Torfs, Claudine P; Freeman, Sallie B; Sherman, Stephanie L
2013-09-01
Advanced maternal age and altered recombination are known risk factors for Down syndrome cases due to maternal nondisjunction of chromosome 21, whereas the impact of other environmental and genetic factors is unclear. The aim of this study was to investigate an association between low maternal socioeconomic status and chromosome 21 nondisjunction. Data from 714 case and 977 control families were used to assess chromosome 21 meiosis I and meiosis II nondisjunction errors in the presence of three low socioeconomic status factors: (i) both parents had not completed high school, (ii) both maternal grandparents had not completed high school, and (iii) an annual household income of <$25,000. We applied logistic regression models and adjusted for covariates, including maternal age and race/ethnicity. As compared with mothers of controls (n = 977), mothers with meiosis II chromosome 21 nondisjunction (n = 182) were more likely to have a history of one low socioeconomic status factor (odds ratio = 1.81; 95% confidence interval = 1.07-3.05) and ≥2 low socioeconomic status factors (odds ratio = 2.17; 95% confidence interval = 1.02-4.63). This association was driven primarily by having a low household income (odds ratio = 1.79; 95% confidence interval = 1.14-2.73). The same statistically significant association was not detected among maternal meiosis I errors (odds ratio = 1.31; 95% confidence interval = 0.81-2.10), in spite of having a larger sample size (n = 532). We detected a significant association between low maternal socioeconomic status and meiosis II chromosome 21 nondisjunction. Further studies are warranted to explore which aspects of low maternal socioeconomic status, such as environmental exposures or poor nutrition, may account for these results.
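The odds ratios above are reported with Wald-type confidence intervals; the sketch below shows the usual back-transformation from the logistic-regression scale and checks it against one reported interval (the standard error is reconstructed from the interval, since it is not given in the abstract):

```python
import numpy as np
from scipy import stats

def odds_ratio_ci(beta, se, level=0.95):
    """Odds ratio and Wald confidence interval from a logistic-regression coefficient."""
    z = stats.norm.ppf(0.5 + level / 2.0)
    return np.exp(beta), (np.exp(beta - z * se), np.exp(beta + z * se))

# Reconstruct the coefficient scale from the reported OR = 1.81 (1.07-3.05):
# the half-width of the log-scale CI implies se ~ (ln 3.05 - ln 1.07) / (2 * 1.96).
se = (np.log(3.05) - np.log(1.07)) / (2 * stats.norm.ppf(0.975))
print(odds_ratio_ci(np.log(1.81), se))   # ~1.81 and roughly (1.07, 3.06)
```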
Cust, Anne E; Armstrong, Bruce K; Smith, Ben J; Chau, Josephine; van der Ploeg, Hidde P; Bauman, Adrian
2009-05-01
Self-reported confidence ratings have been used in other research disciplines as a tool to assess data quality, and may be useful in epidemiologic studies. We examined whether self-reported confidence in recall of physical activity was a predictor of the validity and retest reliability of physical activity measures from the European Prospective Investigation into Cancer and Nutrition (EPIC) past-year questionnaire and the International Physical Activity Questionnaire (IPAQ) last-7-day questionnaire.During 2005-2006 in Sydney, Australia, 97 men and 80 women completed both questionnaires at baseline and at 10 months and wore an accelerometer as an objective comparison measure for three 7-day periods during the same timeframe. Participants rated their confidence in recalling physical activity for each question using a 5-point scale and were dichotomized at the median confidence value. Participants in the high-confidence group had higher validity and repeatability coefficients than those in the low-confidence group for most comparisons. The differences were most apparent for validity of IPAQ moderate activity: Spearman correlation rho = 0.34 (95% confidence interval [CI] = 0.08 to 0.55) and 0.01 (-0.17 to 0.20) for high- and low-confidence groups, respectively; and repeatability of EPIC household activity: rho = 0.81 (0.72 to 0.87) and 0.63 (0.48 to 0.74), respectively, and IPAQ vigorous activity: rho = 0.58 (0.43 to 0.70) and 0.29 (0.07 to 0.49), respectively. Women were less likely than men to report high recall confidence of past-year activity (adjusted odds ratio = 0.38; 0.18 to 0.80). Confidence ratings could be useful as indicators of recall accuracy (ie, validity and repeatability) of physical activity measures, and possibly for detecting differential measurement error and identifying questionnaire items that require improvement.
NASA Astrophysics Data System (ADS)
Olafsdottir, Kristin B.; Mudelsee, Manfred
2013-04-01
Estimation of Pearson's correlation coefficient between two time series, to evaluate the influence of one time-dependent variable on another, is one of the most often used statistical methods in climate sciences. Various methods are used to estimate confidence intervals to support the correlation point estimate. Many of them make strong mathematical assumptions regarding distributional shape and serial correlation, which are rarely met. More robust statistical methods are needed to increase the accuracy of the confidence intervals. Bootstrap confidence intervals are estimated in the Fortran 90 program PearsonT (Mudelsee, 2003), where the main intention was to get an accurate confidence interval for the correlation coefficient between two time series by taking into account the serial dependence of the process that generated the data. However, Monte Carlo experiments show that the coverage accuracy for smaller data sizes can be improved. Here we adapt the PearsonT program into a new version called PearsonT3, by calibrating the confidence interval to increase the coverage accuracy. Calibration is a bootstrap resampling technique, which basically performs a second bootstrap loop or resamples from the bootstrap resamples. It offers, like the non-calibrated bootstrap confidence intervals, robustness against the data distribution. Pairwise moving block bootstrap is used to preserve the serial correlation of both time series. The calibration is applied to standard error based bootstrap Student's t confidence intervals. The performances of the calibrated confidence intervals are examined with Monte Carlo simulations, and compared with the performances of confidence intervals without calibration, that is, PearsonT. The coverage accuracy is evidently better for the calibrated confidence intervals, where the coverage error is acceptably small (i.e., within a few percentage points) already for data sizes as small as 20. One form of climate time series is output from numerical models which simulate the climate system. The method is applied to model data from the high-resolution ocean model INALT01, where the relationship between the Agulhas Leakage and the North Brazil Current is evaluated. Preliminary results show significant correlation between the two variables when there is a 10-year lag between them, which is more or less the time it takes the Agulhas Leakage water to reach the North Brazil Current. Mudelsee, M., 2003. Estimating Pearson's correlation coefficient with bootstrap confidence interval from serially dependent time series. Mathematical Geology 35, 651-665.
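A stripped-down Python sketch of the uncalibrated building block described above, a pairwise moving-block bootstrap combined with a standard-error-based Student's t interval for Pearson's r; the calibration (second bootstrap loop) that distinguishes PearsonT3 is omitted, and the series are synthetic:

```python
import numpy as np
from scipy import stats

def block_bootstrap_corr_ci(x, y, block_len=5, n_boot=2000, level=0.95, seed=0):
    """Pairwise moving-block bootstrap Student's t CI for Pearson's r.

    Resamples (x, y) pairs in blocks to preserve serial correlation and uses
    the bootstrap standard error in a t interval; no calibration loop here.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    rng = np.random.default_rng(seed)
    r_hat = np.corrcoef(x, y)[0, 1]
    starts_max = n - block_len
    n_blocks = int(np.ceil(n / block_len))
    r_boot = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, starts_max + 1, size=n_blocks)
        idx = np.concatenate([np.arange(s, s + block_len) for s in starts])[:n]
        r_boot[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    se = r_boot.std(ddof=1)
    t_crit = stats.t.ppf(0.5 + level / 2.0, df=n - 2)
    return r_hat, (r_hat - t_crit * se, r_hat + t_crit * se)

# Illustrative serially correlated series sharing a common signal.
rng = np.random.default_rng(1)
common = np.cumsum(rng.normal(size=200)) * 0.1
x = common + rng.normal(scale=1.0, size=200)
y = common + rng.normal(scale=1.0, size=200)
print(block_bootstrap_corr_ci(x, y))
```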
Mayer, Jutta S.; Park, Sohee
2014-01-01
Working memory (WM) impairment is a core feature of schizophrenia, but the contributions of different WM components are not yet specified. Here, we investigated the potential role of inefficient encoding in reduced WM performance in patients with schizophrenia (PSZ). Twenty-eight PSZ, 16 patients with bipolar disorder (PBP), 16 unaffected and unmedicated relatives of PSZ (REL), and 29 demographically matched healthy controls (HC) performed a spatial delayed response task with either low or high WM demands. The demands on attentional selection were also manipulated by presenting distractor stimuli during encoding in some of the trials. After each trial, participants rated their level of response confidence. This allowed us to analyze different types of WM responses. WM was severely impaired in PSZ compared to HC; this reduction was mainly due to an increase in the amount of false memory responses (incorrect responses that were given with high confidence) rather than an increase in the amount of incorrect and not-confident responses. Although PBP showed WM impairments, they did not have increased false memory errors. In contrast, reduced WM in REL was also accompanied by an increase in false memory errors. The presentation of distractors led to a decline in WM performance, which was comparable across groups indicating that attentional selection was intact in PSZ. These findings suggest that inefficient WM encoding is responsible for impaired WM in schizophrenia and point to differential mechanisms underlying WM impairments in PSZ and PBP. PMID:22708888
High-frequency signal and noise estimates of CSR GRACE RL04
NASA Astrophysics Data System (ADS)
Bonin, Jennifer A.; Bettadpur, Srinivas; Tapley, Byron D.
2012-12-01
A sliding window technique is used to create daily-sampled Gravity Recovery and Climate Experiment (GRACE) solutions with the same background processing as the official CSR RL04 monthly series. By estimating over shorter time spans, more frequent solutions are made using uncorrelated data, allowing for higher frequency resolution in addition to daily sampling. Using these data sets, high-frequency GRACE errors are computed using two different techniques: assuming the GRACE high-frequency signal in a quiet area of the ocean is the true error, and computing the variance of differences between multiple high-frequency GRACE series from different centers. While the signal-to-noise ratios prove to be sufficiently high for confidence at annual and lower frequencies, at frequencies above 3 cycles/year the signal-to-noise ratios in the large hydrological basins looked at here are near 1.0. Comparisons with the GLDAS hydrological model and high frequency GRACE series developed at other centers confirm CSR GRACE RL04's poor ability to accurately and reliably measure hydrological signal above 3-9 cycles/year, due to the low power of the large-scale hydrological signal typical at those frequencies compared to the GRACE errors.
The Impact of Incident Disclosure Behaviors on Medical Malpractice Claims.
Giraldo, Priscila; Sato, Luke; Castells, Xavier
2017-06-30
To provide preliminary estimates of incident disclosure behaviors on medical malpractice claims. We conducted a descriptive analysis of data on medical malpractice claims obtained from the Controlled Risk Insurance Company and Risk Management Foundation of Harvard Medical Institutions (Cambridge, Massachusetts) between 2012 and 2013 (n = 434). The characteristics of disclosure and apology after medical errors were analyzed. Of 434 medical malpractice claims, 4.6% (n = 20) of medical errors had been disclosed to the patient at the time of the error, and 5.9% (n = 26) had been followed by disclosure and apology. The highest number of disclosed injuries occurred in 2011 (23.9%; n = 11) and 2012 (34.8%; n = 16). There was no incremental increase during the financial years studied (2012-2013). The mean age of informed patients was 52.96 years, 58.7% of the patients were female, and 52.2% were inpatients. Of the disclosed errors, 26.1% led to an adverse reaction, and 17.4% were fatal. The cause of disclosed medical error was improper surgical performance in 17.4% (95% confidence interval, 6.4-28.4). Disclosed medical errors were classified as medium severity in 67.4%. No apology statement was issued in 54.5% of medical errors classified as high severity. At the health-care centers studied, when a claim followed a medical error, providers infrequently disclosed medical errors or apologized to the patient or relatives. Most of the medical errors followed by disclosure and apology were classified as being of high and medium severity. No changes were detected in the volume of lawsuits over time.
Genetic Algorithm for Optimization: Preprocessing with n Dimensional Bisection and Error Estimation
NASA Technical Reports Server (NTRS)
Sen, S. K.; Shaykhian, Gholam Ali
2006-01-01
A knowledge of the appropriate values of the parameters of a genetic algorithm (GA) such as the population size, the shrunk search space containing the solution, crossover and mutation probabilities is not available a priori for a general optimization problem. Recommended here is a polynomial-time preprocessing scheme that includes an n-dimensional bisection and that determines the foregoing parameters before deciding upon an appropriate GA for all problems of similar nature and type. Such a preprocessing is not only fast but also enables us to get the global optimal solution and its reasonably narrow error bounds with a high degree of confidence.
Prevalence of refractive error in Europe: the European Eye Epidemiology (E(3)) Consortium.
Williams, Katie M; Verhoeven, Virginie J M; Cumberland, Phillippa; Bertelsen, Geir; Wolfram, Christian; Buitendijk, Gabriëlle H S; Hofman, Albert; van Duijn, Cornelia M; Vingerling, Johannes R; Kuijpers, Robert W A M; Höhn, René; Mirshahi, Alireza; Khawaja, Anthony P; Luben, Robert N; Erke, Maja Gran; von Hanno, Therese; Mahroo, Omar; Hogg, Ruth; Gieger, Christian; Cougnard-Grégoire, Audrey; Anastasopoulos, Eleftherios; Bron, Alain; Dartigues, Jean-François; Korobelnik, Jean-François; Creuzot-Garcher, Catherine; Topouzis, Fotis; Delcourt, Cécile; Rahi, Jugnoo; Meitinger, Thomas; Fletcher, Astrid; Foster, Paul J; Pfeiffer, Norbert; Klaver, Caroline C W; Hammond, Christopher J
2015-04-01
To estimate the prevalence of refractive error in adults across Europe. Refractive data (mean spherical equivalent) collected between 1990 and 2013 from fifteen population-based cohort and cross-sectional studies of the European Eye Epidemiology (E(3)) Consortium were combined in a random effects meta-analysis stratified by 5-year age intervals and gender. Participants were excluded if they were identified as having had cataract surgery, retinal detachment, refractive surgery or other factors that might influence refraction. Estimates of refractive error prevalence were obtained including the following classifications: myopia ≤-0.75 diopters (D), high myopia ≤-6D, hyperopia ≥1D and astigmatism ≥1D. Meta-analysis of refractive error was performed for 61,946 individuals from fifteen studies with median age ranging from 44 to 81 and minimal ethnic variation (98 % European ancestry). The age-standardised prevalences (using the 2010 European Standard Population, limited to those ≥25 and <90 years old) were: myopia 30.6 % [95 % confidence interval (CI) 30.4-30.9], high myopia 2.7 % (95 % CI 2.69-2.73), hyperopia 25.2 % (95 % CI 25.0-25.4) and astigmatism 23.9 % (95 % CI 23.7-24.1). Age-specific estimates revealed a high prevalence of myopia in younger participants [47.2 % (CI 41.8-52.5) in 25-29 years-olds]. Refractive error affects just over a half of European adults. The greatest burden of refractive error is due to myopia, with high prevalence rates in young adults. Using the 2010 European population estimates, we estimate there are 227.2 million people with myopia across Europe.
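A minimal sketch of random-effects pooling of study-level prevalences in the DerSimonian-Laird style, which is one common way such meta-analyses are computed; the inputs are illustrative and the age/sex stratification of the actual analysis is not reproduced:

```python
import numpy as np

def random_effects_pool(estimates, variances):
    """DerSimonian-Laird random-effects pooled estimate and 95% CI."""
    y, v = np.asarray(estimates, float), np.asarray(variances, float)
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)                     # heterogeneity statistic
    tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                              # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = 1.0 / np.sqrt(np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Illustrative study-level myopia prevalences (proportions) and sample sizes.
p = np.array([0.28, 0.33, 0.28, 0.35, 0.31])
n = np.array([3000, 2500, 4000, 1500, 5000])
print(random_effects_pool(p, p * (1 - p) / n))
```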
Doré, Marie-Claire; Caza, Nicole; Gingras, Nathalie; Rouleau, Nancie
2007-11-01
Findings from the literature consistently revealed episodic memory deficits in adolescents with psychosis. However, the nature of the dysfunction remains unclear. Based on a cognitive neuropsychological approach, a theoretically driven paradigm was used to generate valid interpretations about the underlying memory processes impaired in these patients. A total of 16 inpatient adolescents with psychosis and 19 individually matched controls were assessed using an experimental task designed to measure memory for source and temporal context of studied words. Retrospective confidence judgements for source and temporal context responses were also assessed. On word recognition, patients had more difficulty than controls discriminating target words from neutral distractors. In addition, patients identified both source and temporal context features of recognised items less often than controls. Confidence judgements analyses revealed that the difference between the proportions of correct and incorrect responses made with high confidence was lower in patients than in controls. In addition, the proportion of high-confident responses that were errors was higher in patients compared to controls. These findings suggest impaired relational binding processes in adolescents with psychosis, resulting in a difficulty to create unified memory representations. Our findings on retrospective confidence data point to impaired monitoring of retrieved information that may also impair memory performance in these individuals.
Destination memory accuracy and confidence in younger and older adults.
Johnson, Tara L; Jefferson, Susan C
2018-01-01
Background/Study Context: Nascent research on destination memory (remembering to whom we tell particular information) suggested that older adults have deficits in destination memory and are more confident on inaccurate responses than younger adults. This study assessed the effects of age, attentional resources, and mental imagery on destination memory accuracy and confidence in younger and older adults. Using computer format, participants told facts to pictures of famous people in one of four conditions (control, self-focus, refocus, imagery). Older adults had lower destination memory accuracy than younger adults, driven by a higher level of false alarms. Whereas younger adults were more confident in accurate answers, older adults were more confident in inaccurate answers. Accuracy across participants was lowest when attention was directed internally but significantly improved when mental imagery was used. Importantly, the age-related differences in false alarms and high-confidence inaccurate answers disappeared when imagery was used. Older adults are more likely than younger adults to commit destination memory errors and are less accurate in related confidence judgments. Furthermore, the use of associative memory strategies may help improve destination memory across age groups, improve the accuracy of confidence judgments in older adults, and decrease age-related destination memory impairment, particularly in young-old adults.
Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty
NASA Astrophysics Data System (ADS)
Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. C.; Alden, C.; White, J. W. C.
2014-10-01
Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of C in the atmosphere, ocean, and land; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate error and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ error of the atmospheric growth rate has decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s, leading to a ~20% reduction in the overall uncertainty of net global C uptake by the biosphere. While fossil fuel emissions have increased by a factor of 4 over the last 5 decades, 2σ errors in fossil fuel emissions due to national reporting errors and differences in energy reporting practices have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s. At the same time land use emissions have declined slightly over the last 5 decades, but their relative errors remain high. Notably, errors associated with fossil fuel emissions have come to dominate uncertainty in the global C budget and are now comparable to the total emissions from land use, thus efforts to reduce errors in fossil fuel emissions are necessary. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that C uptake has increased and 97% confident that C uptake by the terrestrial biosphere has increased over the last 5 decades. Although the persistence of future C sinks remains unknown and some ecosystem services may be compromised by this continued C uptake (e.g. ocean acidification), it is clear that arguably the greatest ecosystem service currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere.
Translating Climate Projections for Bridge Engineering
NASA Astrophysics Data System (ADS)
Anderson, C.; Takle, E. S.; Krajewski, W.; Mantilla, R.; Quintero, F.
2015-12-01
A bridge vulnerability pilot study was conducted by Iowa Department of Transportation (IADOT) as one of nineteen pilots supported by the Federal Highway Administration Climate Change Resilience Pilots. Our pilot study team consisted of the IADOT senior bridge engineer who is the preliminary design section leader as well as climate and hydrological scientists. The pilot project culminated in a visual graphic designed by the bridge engineer (Figure 1), and an evaluation framework for bridge engineering design. The framework has four stages. The first two stages evaluate the spatial and temporal resolution needed in climate projection data in order to be suitable for input to a hydrology model. The framework separates streamflow simulation error into errors from the streamflow model and from the coarseness of input weather data series. In the final two stages, the framework evaluates credibility of climate projection streamflow simulations. Using an empirically downscaled data set, projection streamflow is generated. Error is computed in two time frames: the training period of the empirical downscaling methodology, and an out-of-sample period. If large errors in projection streamflow were observed during the training period, it would indicate low accuracy and, therefore, low credibility. If large errors in streamflow were observed during the out-of-sample period, it would mean the approach may not include some causes of change and, therefore, the climate projections would have limited credibility for setting expectations for changes. We address uncertainty with confidence intervals on quantiles of streamflow discharge. The results show the 95% confidence intervals have significant overlap. Nevertheless, the use of confidence intervals enabled engineering judgement. In our discussions, we noted the consistency in direction of change across basins, though the flood mechanism was different across basins, and the high bound of bridge lifetime period quantiles exceeded that of the historical period. This suggested the change was not isolated, and it systemically altered the risk profile. One suggestion to incorporate engineering judgement was to consider degrees of vulnerability using the median discharge of the historical period and the upper bound discharge for the bridge lifetime period.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, J; Velsko, S
This report explores the question of whether meaningful conclusions can be drawn regarding the transmission relationship between two microbial samples on the basis of differences observed between the two samples' respective genomes. Unlike similar forensic applications using human DNA, the rapid rate of microbial genome evolution combined with the dynamics of infectious disease requires a shift in thinking about what it means for two samples to 'match' in support of a forensic hypothesis. Previous outbreaks of SARS-CoV, FMDV and HIV were examined to investigate how microbial sequence data can be used to draw inferences that link two infected individuals by direct transmission. The results are counterintuitive with respect to human DNA forensic applications in that some genetic change, rather than exact matching, improves confidence in inferring direct transmission links; however, too much genetic change poses challenges that can weaken confidence in inferred links. High rates of infection coupled with the relatively weak selective pressure observed in the SARS-CoV and FMDV data lead to fairly low confidence for direct transmission links. Confidence values for forensic hypotheses increased when testing for the possibility that samples are separated by at most a few intermediate hosts. Moreover, the observed outbreak conditions support the potential to provide high confidence values for hypotheses that exclude direct transmission links. Transmission inferences are based on the total number of observed or inferred genetic changes separating two sequences rather than uniquely weighing the importance of any one genetic mismatch. Thus, inferences are surprisingly robust in the presence of sequencing errors, provided the error rates are randomly distributed across all samples in the reference outbreak database and the novel sequence samples in question. When the number of observed nucleotide mutations is limited due to characteristics of the outbreak or the availability of only partial rather than whole genome sequencing, indel information was shown to have the potential to improve performance, but only for select outbreak conditions. In the examined HIV transmission cases, extended evolution proved to be the limiting factor in assigning high confidence to transmission links; however, the potential to correct for extended evolution not associated with transmission events is demonstrated. Outbreak-specific conditions, such as selective pressure (in the form of varying mutation rate), are shown to impact the strength of inference made, and a Monte Carlo simulation tool is introduced, which is used to provide upper and lower bounds on the confidence values associated with a forensic hypothesis.
Bergen, Silas; Sheppard, Lianne; Kaufman, Joel D.; Szpiro, Adam A.
2016-01-01
Air pollution epidemiology studies are trending towards a multi-pollutant approach. In these studies, exposures at subject locations are unobserved and must be predicted using observed exposures at misaligned monitoring locations. This induces measurement error, which can bias the estimated health effects and affect standard error estimates. We characterize this measurement error and develop an analytic bias correction when using penalized regression splines to predict exposure. Our simulations show bias from multi-pollutant measurement error can be severe, and in opposite directions or simultaneously positive or negative. Our analytic bias correction combined with a non-parametric bootstrap yields accurate coverage of 95% confidence intervals. We apply our methodology to analyze the association of systolic blood pressure with PM2.5 and NO2 in the NIEHS Sister Study. We find that NO2 confounds the association of systolic blood pressure with PM2.5 and vice versa. Elevated systolic blood pressure was significantly associated with increased PM2.5 and decreased NO2. Correcting for measurement error bias strengthened these associations and widened 95% confidence intervals. PMID:27789915
Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media
Cooley, R.L.; Christensen, S.
2006-01-01
Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation Yβ*, where Y is an interpolation matrix and β* is a stochastic vector of parameters. Vector β* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that the model function f(Yβ*), written in terms of the approximate inputs, is in error with respect to the same model function written in terms of β, i.e., f(β), which is assumed to be nearly exact. The difference f(β) - f(Yβ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate β* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Yβ*) are small, then most of the biases are small and the correction factors are reduced in magnitude. Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large to test robustness of the methodology. Numerical results conform with the theoretical analysis. © 2005 Elsevier Ltd. All rights reserved.
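A minimal numerical sketch of the central device, under simplifying assumptions (a linear model in place of the nonlinear groundwater model, and made-up covariances): the weight matrix is the inverse of the combined measurement-error and model-error covariance, and the resulting parameter covariance is the basis for confidence intervals.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
X = np.column_stack([np.ones(n), rng.uniform(0.0, 10.0, n)])   # design matrix
beta_true = np.array([2.0, 0.5])

C_meas = 0.1 * np.eye(n)                                       # measurement-error covariance
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
C_model = 0.3 * np.exp(-dist / 5.0)                            # spatially correlated model error
y = X @ beta_true + rng.multivariate_normal(np.zeros(n), C_meas + C_model)

W = np.linalg.inv(C_meas + C_model)        # weight matrix includes the model-error covariance
beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
cov_beta = np.linalg.inv(X.T @ W @ X)      # used to size confidence intervals
print("estimates:", beta_hat)
print("approx. 95% half-widths:", 1.96 * np.sqrt(np.diag(cov_beta)))
```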
Line segment confidence region-based string matching method for map conflation
NASA Astrophysics Data System (ADS)
Huh, Yong; Yang, Sungchul; Ga, Chillo; Yu, Kiyun; Shi, Wenzhong
2013-04-01
In this paper, a method is proposed to detect corresponding point pairs between polygon object pairs using a string matching method based on a confidence region model of a line segment. The optimal point edit sequence to convert the contour of a target object into that of a reference object was found by the string matching method, which minimizes the total error cost, and the corresponding point pairs were derived from the edit sequence. Because a significant portion of the apparent positional discrepancies between corresponding objects is caused by spatial uncertainty, confidence region models of line segments are used in the matching process, and the proposed method therefore obtained a high F-measure for finding matching pairs. We applied this method to built-up area polygon objects in a cadastral map and a topographical map. Despite their different mapping and representation rules and spatial uncertainties, the proposed method with a confidence level of 0.95 produced a matching result with an F-measure of 0.894.
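The edit-sequence search at the heart of such string matching is a standard dynamic program. The sketch below is generic: the substitution cost is supplied by the caller, whereas in the paper it would be derived from the line-segment confidence region model (not reproduced here); the point coordinates in the usage example are made up.

```python
def min_cost_edit_sequence(target, reference, sub_cost, ins_cost=1.0, del_cost=1.0):
    """Minimum total error cost to convert one point sequence into another."""
    n, m = len(target), len(reference)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + del_cost
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + ins_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = min(dp[i - 1][j] + del_cost,                        # delete target point
                           dp[i][j - 1] + ins_cost,                        # insert reference point
                           dp[i - 1][j - 1] + sub_cost(target[i - 1],
                                                       reference[j - 1]))  # match / substitute
    return dp[n][m]

# Hypothetical usage: Euclidean distance as the substitution cost
euclid = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
print(min_cost_edit_sequence([(0, 0), (1, 0), (1, 1)],
                             [(0, 0.1), (1.05, 0), (1, 1.1)], sub_cost=euclid))
```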
Zhang, Jiyang; Ma, Jie; Dou, Lei; Wu, Songfeng; Qian, Xiaohong; Xie, Hongwei; Zhu, Yunping; He, Fuchu
2009-02-01
The hybrid linear trap quadrupole Fourier-transform (LTQ-FT) ion cyclotron resonance mass spectrometer, an instrument with high accuracy and resolution, is widely used in the identification and quantification of peptides and proteins. However, time-dependent errors in the system may lead to deterioration of the accuracy of these instruments, negatively influencing the determination of the mass error tolerance (MET) in database searches. Here, a comprehensive discussion of LTQ-FT precursor ion mass error is provided. On the basis of an investigation of the mass error distribution, we propose an improved recalibration formula and introduce a new tool, FTDR (Fourier-transform data recalibration), that employs a graphical user interface (GUI) for automatic calibration. It was found that the calibration could adjust the mass error distribution to more closely approximate a normal distribution and reduce the standard deviation (SD). Consequently, we present a new strategy, LDSF (Large MET database search and small MET filtration), for database search MET specification and validation of database search results. As the name implies, a large-MET database search is conducted and the search results are then filtered using the statistical MET estimated from high-confidence results. By applying this strategy to a standard protein data set and a complex data set, we demonstrate that LDSF can significantly improve the sensitivity of the result validation procedure.
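A minimal sketch of the 'small MET filtration' step, assuming (as the abstract indicates) that recalibrated precursor mass errors are approximately normally distributed. The k = 3 window width and the simulated errors are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def met_from_high_confidence(mass_errors_ppm, k=3.0):
    """Statistical mass error tolerance (mean +/- k*SD) estimated from the
    precursor mass errors of high-confidence identifications."""
    mu = float(np.mean(mass_errors_ppm))
    sd = float(np.std(mass_errors_ppm, ddof=1))
    return mu - k * sd, mu + k * sd

# Simulated precursor mass errors (ppm) of high-confidence hits after recalibration
rng = np.random.default_rng(2)
errors = rng.normal(loc=0.2, scale=1.1, size=500)

low, high = met_from_high_confidence(errors)
print("filter window (ppm):", round(low, 2), "to", round(high, 2))

# Filtering a wide-tolerance search: keep hits whose mass error falls in the window
wide_search_errors = rng.normal(0.2, 4.0, size=1000)   # stand-in for a large-MET search
kept = (wide_search_errors >= low) & (wide_search_errors <= high)
print("retained:", int(kept.sum()), "of", kept.size)
```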
Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty
Ballantyne, A. P.; Andres, R.; Houghton, R.; ...
2015-04-30
Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their contribution to global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net global C uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere, although there are certain environmental costs associated with this service, such as the acidification of ocean waters.
McKaig, Donald; Collins, Christine; Elsaid, Khaled A
2014-09-01
A study was conducted to evaluate the impact of a reengineered approach to electronic error reporting at a 719-bed multidisciplinary urban medical center. The main outcome of interest was the number of medication errors reported monthly during the preimplementation (20 months) and postimplementation (26 months) phases. An interrupted time series analysis was used to describe baseline errors, the immediate change following implementation of the current electronic error-reporting system (e-ERS), and the trend of error reporting during postimplementation. Errors were categorized according to severity using the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) Medication Error Index classifications. Reported errors were further analyzed by reporter and error site. During preimplementation, the mean number of monthly reported errors was 40.0 (95% confidence interval [CI]: 36.3-43.7). Immediately following e-ERS implementation, monthly reported errors significantly increased by 19.4 errors (95% CI: 8.4-30.5). The change in slope of the reported errors trend was estimated at 0.76 (95% CI: 0.07-1.22). Near misses and no-patient-harm errors accounted for 90% of all errors, while errors that caused increased patient monitoring or temporary harm accounted for 9% and 1%, respectively. Nurses were the most frequent reporters, while physicians were more likely to report high-severity errors. Medical care units accounted for approximately half of all reported errors. Following the intervention, there was a significant increase in reporting of prevented errors and of errors that reached the patient with no resultant harm. This improvement in reporting was sustained for 26 months and has contributed to designing and implementing quality improvement initiatives to enhance the safety of the medication use process.
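A segmented-regression sketch of an interrupted time series analysis of this kind is given below. The monthly counts are simulated with effect sizes loosely echoing those reported, purely to show the model's terms (baseline trend, level change, and post-implementation trend change); it is not the study's data or its exact specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
pre, post = 20, 26
t = np.arange(pre + post)
step = (t >= pre).astype(float)                 # level change at e-ERS implementation
post_trend = np.where(t >= pre, t - pre, 0.0)   # slope change after implementation
y = 40 + 0.1 * t + 19 * step + 0.8 * post_trend + rng.normal(0, 5, t.size)

X = sm.add_constant(np.column_stack([t, step, post_trend]))
fit = sm.OLS(y, X).fit()
print(fit.params)      # [baseline level, baseline trend, level change, trend change]
print(fit.conf_int())  # 95% confidence intervals for each term
```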
A method of bias correction for maximal reliability with dichotomous measures.
Penev, Spiridon; Raykov, Tenko
2010-02-01
This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Aardt, Jan; Romanczyk, Paul; van Leeuwen, Martin
2016-04-04
Terrestrial laser scanning (TLS) has emerged as an effective tool for rapid comprehensive measurement of object structure. Registration of TLS data is an important prerequisite to overcome the limitations of occlusion. However, due to the high dissimilarity of point cloud data collected from disparate viewpoints in the forest environment, adequate marker-free registration approaches have not been developed. The majority of studies instead rely on the utilization of artificial tie points (e.g., reflective tooling balls) placed within a scene to aid in coordinate transformation. We present a technique for generating view-invariant feature descriptors that are intrinsic to the point cloud data and, thus, enable blind marker-free registration in forest environments. To overcome the limitation of initial pose estimation, we employ a voting method to blindly determine the optimal pairwise transformation parameters, without an a priori estimate of the initial sensor pose. To provide embedded error metrics, we developed a set theory framework in which a circular transformation is traversed between disjoint tie point subsets. This provides an upper estimate of the root mean square error (RMSE) confidence associated with each pairwise transformation. Output RMSE errors are commensurate with the RMSE of the input tie point locations. Thus, while the mean output RMSE = 16.3 cm, improved results could be achieved with a more precise laser scanning system. This study 1) quantifies the RMSE of the proposed marker-free registration approach, 2) assesses the validity of embedded confidence metrics using receiver operating characteristic (ROC) curves, and 3) informs optimal sample spacing considerations for TLS data collection in New England forests. Furthermore, while the implications for rapid, accurate, and precise forest inventory are obvious, the conceptual framework outlined here could potentially be extended to built environments.
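To make the embedded error metric concrete, the simplified sketch below fits a rigid transformation on one subset of tie points and reports RMSE on a disjoint held-out subset. It is a stand-in for, not a reproduction of, the paper's circular-transformation set-theory framework, and the coordinates are simulated.

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares rotation R and translation t mapping points A onto B (Kabsch)."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cB - R @ cA

def rmse(A, B, R, t):
    return float(np.sqrt(np.mean(np.sum((A @ R.T + t - B) ** 2, axis=1))))

rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 30.0, (40, 3))                        # tie points in scan 1
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
scan2 = scene @ R_true.T + np.array([5.0, -2.0, 0.3]) + rng.normal(0.0, 0.05, (40, 3))

fit_idx, held_out = np.arange(20), np.arange(20, 40)           # disjoint tie point subsets
R, t = rigid_transform(scene[fit_idx], scan2[fit_idx])
print("held-out RMSE (m):", rmse(scene[held_out], scan2[held_out], R, t))
```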
Smalheiser, Neil R; McDonagh, Marian S; Yu, Clement; Adams, Clive E; Davis, John M; Yu, Philip S
2015-01-01
Objective: For many literature review tasks, including systematic review (SR) and other aspects of evidence-based medicine, it is important to know whether an article describes a randomized controlled trial (RCT). Current manual annotation is not complete or flexible enough for the SR process. In this work, highly accurate machine learning predictive models were built that include confidence predictions of whether an article is an RCT. Materials and Methods: The LibSVM classifier was used with forward selection of potential feature sets on a large human-related subset of MEDLINE to create a classification model requiring only the citation, abstract, and MeSH terms for each article. Results: The model achieved an area under the receiver operating characteristic curve of 0.973 and mean squared error of 0.013 on the held out year 2011 data. Accurate confidence estimates were confirmed on a manually reviewed set of test articles. A second model not requiring MeSH terms was also created, and performs almost as well. Discussion: Both models accurately rank and predict article RCT confidence. Using the model and the manually reviewed samples, it is estimated that about 8000 (3%) additional RCTs can be identified in MEDLINE, and that 5% of articles tagged as RCTs in Medline may not be identified. Conclusion: Retagging human-related studies with a continuously valued RCT confidence is potentially more useful for article ranking and review than a simple yes/no prediction. The automated RCT tagging tool should offer significant savings of time and effort during the process of writing SRs, and is a key component of a multistep text mining pipeline that we are building to streamline SR workflow. In addition, the model may be useful for identifying errors in MEDLINE publication types. The RCT confidence predictions described here have been made available to users as a web service with a user query form front end at: http://arrowsmith.psych.uic.edu/cgi-bin/arrowsmith_uic/RCT_Tagger.cgi. PMID:25656516
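A minimal sketch of a confidence-producing classifier of this kind, using synthetic features in place of the citation/abstract/MeSH features and scikit-learn's SVC in place of the authors' LibSVM pipeline; the class balance and performance here are illustrative only.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score, brier_score_loss

# Synthetic stand-in for article features; ~10% of "articles" are RCTs
X, y = make_classification(n_samples=3000, n_features=50, n_informative=10,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="linear", probability=True, random_state=0).fit(X_tr, y_tr)
rct_confidence = clf.predict_proba(X_te)[:, 1]     # continuously valued RCT confidence

print("AUC:", roc_auc_score(y_te, rct_confidence))
print("mean squared error of confidences:", brier_score_loss(y_te, rct_confidence))
```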
The efficacy of a novel mobile phone application for goldmann ptosis visual field interpretation.
Maamari, Robi N; D'Ambrosio, Michael V; Joseph, Jeffrey M; Tao, Jeremiah P
2014-01-01
To evaluate the efficacy of a novel mobile phone application that calculates superior visual field defects on Goldmann visual field charts. Experimental study in which the mobile phone application and 14 oculoplastic surgeons interpreted the superior visual field defect in 10 Goldmann charts. Percent errors of the mobile phone application and of the oculoplastic surgeons' estimates were calculated relative to computer software computation of the actual defects. Precision and time efficiency of the application were evaluated by processing the same Goldmann visual field chart 10 repeated times. The mobile phone application was associated with a mean percent error of 1.98% (95% confidence interval [CI], 0.87%-3.10%) in superior visual field defect calculation. The average mean percent error of the oculoplastic surgeons' visual estimates was 19.75% (95% CI, 14.39%-25.11%). Oculoplastic surgeons, on average, underestimated the defect in all 10 Goldmann charts. There was high interobserver variance among the oculoplastic surgeons. The percent error of the 10 repeated measurements on a single chart was 0.93% (95% CI, 0.40%-1.46%). The average time to process 1 chart was 12.9 seconds (95% CI, 10.9-15.0 seconds). The mobile phone application was highly accurate, precise, and time-efficient in calculating the percent superior visual field defect using Goldmann charts. Oculoplastic surgeons' visual interpretations were highly inaccurate, highly variable, and usually underestimated the visual field loss.
NASA Technical Reports Server (NTRS)
Rosenberg, Linda H.; Arthur, James D.; Stapko, Ruth K.; Davani, Darush
1999-01-01
The Software Assurance Technology Center (SATC) at NASA Goddard Space Flight Center has been investigating how projects can determine when sufficient testing has been completed. For most projects, schedules are underestimated, and the last phase of the software development, testing, must be decreased. Two questions are frequently asked: "To what extent is the software error-free?" and "How much time and effort is required to detect and remove the remaining errors?" Clearly, neither question can be answered with absolute certainty. Nonetheless, the ability to answer these questions with some acceptable level of confidence is highly desirable. First, knowing the extent to which a product is error-free, we can judge when it is time to terminate testing. Secondly, if errors are judged to be present, we can perform a cost/benefit trade-off analysis to estimate when the software will be ready for use and at what cost. This paper explains the efforts of the SATC to help projects determine what is sufficient testing and when is the most cost-effective time to stop testing.
Augmenting Chinese hamster genome assembly by identifying regions of high confidence.
Vishwanathan, Nandita; Bandyopadhyay, Arpan A; Fu, Hsu-Yuan; Sharma, Mohit; Johnson, Kathryn C; Mudge, Joann; Ramaraj, Thiruvarangan; Onsongo, Getiria; Silverstein, Kevin A T; Jacob, Nitya M; Le, Huong; Karypis, George; Hu, Wei-Shou
2016-09-01
Chinese hamster ovary (CHO) cell lines are the dominant industrial workhorses for therapeutic recombinant protein production. The availability of the genome sequences of the Chinese hamster and CHO cells will spur further genome and RNA sequencing of producing cell lines. However, mammalian genomes assembled using shotgun sequencing data still contain regions of uncertain quality due to assembly errors. Identifying high confidence regions in the assembled genome will facilitate its use for cell engineering and genome engineering. We assembled two independent drafts of the Chinese hamster genome, by de novo assembly from shotgun sequencing reads and by re-scaffolding and gap-filling the draft genome from NCBI for improved scaffold lengths and gap fractions. We then used the two independent assemblies to identify high confidence regions using two different approaches. First, the two independent assemblies were compared at the sequence level to identify their consensus regions as "high confidence regions", which account for at least 78% of the assembled genome. Further, a genome-wide comparison of the Chinese hamster scaffolds with mouse chromosomes revealed scaffolds with large blocks of collinearity, which were also compiled as high-quality scaffolds. Genome-scale collinearity was complemented with EST-based synteny, which also revealed conserved gene order compared to mouse. As cell line sequencing becomes more commonly practiced, the approaches reported here are useful for assessing the quality of assembly and potentially facilitate the engineering of cell lines. Copyright © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Four applications of permutation methods to testing a single-mediator model.
Taylor, Aaron B; MacKinnon, David P
2012-09-01
Four applications of permutation tests to the single-mediator model are described and evaluated in this study. Permutation tests work by rearranging data in many possible ways in order to estimate the sampling distribution for the test statistic. The four applications to mediation evaluated here are the permutation test of ab, the permutation joint significance test, and the noniterative and iterative permutation confidence intervals for ab. A Monte Carlo simulation study was used to compare these four tests with the four best available tests for mediation found in previous research: the joint significance test, the distribution of the product test, and the percentile and bias-corrected bootstrap tests. We compared the different methods on Type I error, power, and confidence interval coverage. The noniterative permutation confidence interval for ab was the best performer among the new methods. It successfully controlled Type I error, had power nearly as good as the most powerful existing methods, and had better coverage than any existing method. The iterative permutation confidence interval for ab had lower power than do some existing methods, but it performed better than any other method in terms of coverage. The permutation confidence interval methods are recommended when estimating a confidence interval is a primary concern. SPSS and SAS macros that estimate these confidence intervals are provided.
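One simple variant of a permutation test of ab can be sketched as follows: the independent variable is permuted to build a null distribution for the estimated indirect effect. This is an illustrative stand-in under that assumption, not necessarily the exact procedures evaluated in the study (and not the SPSS/SAS macros it provides), and the data are simulated.

```python
import numpy as np

def ab_estimate(x, m, y):
    """Indirect effect a*b from the two mediation regressions M ~ X and Y ~ X + M."""
    a = np.polyfit(x, m, 1)[0]                        # slope of X in M ~ X
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]  # slope of M in Y ~ X + M
    return a * b

def permutation_test_ab(x, m, y, n_perm=2000, seed=0):
    rng = np.random.default_rng(seed)
    observed = ab_estimate(x, m, y)
    null = np.array([ab_estimate(rng.permutation(x), m, y) for _ in range(n_perm)])
    return observed, float(np.mean(np.abs(null) >= abs(observed)))   # two-sided p-value

rng = np.random.default_rng(1)
x = rng.normal(size=200)
m = 0.5 * x + rng.normal(size=200)            # true a = 0.5
y = 0.4 * m + 0.1 * x + rng.normal(size=200)  # true b = 0.4
print(permutation_test_ab(x, m, y))
```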
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, H; Sarkar, V; Paxton, A
Purpose: To explore the feasibility of supraclavicular field treatment by investigating the variation of the junction position between tangential and supraclavicular fields during left breast radiation using the DIBH technique. Methods: Six patients with left breast cancer treated using the DIBH technique were included in this study. The AlignRT system was used to track each patient's breast surface. During daily treatment, when the patient's DIBH reached the preset AlignRT tolerance of ±3 mm for all principal directions (vertical, longitudinal, and lateral), the remaining longitudinal offset was recorded. The average with standard deviation and the range of the daily longitudinal offset for the entire treatment course were calculated for all six patients (93 fractions in total). The ranges of average ±1σ and ±2σ were calculated; they represent the longitudinal field edge error at confidence levels of 68% and 95%. Based on these longitudinal errors, the dose at the junction between the breast tangential and supraclavicular fields with variable gap/overlap sizes was calculated as a percentage of prescription (on a representative patient treatment plan). Results: The average longitudinal offset for all patients is 0.16±1.32 mm, and the range of the longitudinal offset is -2.6 to 2.6 mm. The range of the longitudinal field edge error at the 68% confidence level is -1.48 to 1.16 mm, and at the 95% confidence level is -2.80 to 2.48 mm. With a 5 mm and a 1 mm gap, the junction dose could be as low as 37.5% and 84.9% of the prescription dose; with a 5 mm and a 1 mm overlap, the junction dose could be as high as 169.3% and 117.6%. Conclusion: We observed a longitudinal field edge error at the 95% confidence level of about ±2.5 mm, and the junction dose could reach 70% hot/cold between different DIBHs. However, over the entire course of treatment, the average junction variation for all patients is within 0.2 mm. The results from our study show it is potentially feasible to treat the supraclavicular field with breast tangents.
2016-01-01
Reports an error in "A violation of the conditional independence assumption in the two-high-threshold model of recognition memory" by Tina Chen, Jeffrey J. Starns and Caren M. Rotello (Journal of Experimental Psychology: Learning, Memory, and Cognition, 2015[Jul], Vol 41[4], 1215-1222). In the article, Chen et al. compared three models: a continuous signal detection model (SDT), a standard two-high-threshold discrete-state model in which detect states always led to correct responses (2HT), and a full-mapping version of the 2HT model in which detect states could lead to either correct or incorrect responses. After publication, Rani Moran (personal communication, April 21, 2015) identified two errors that impact the reported fit statistics for the Bayesian information criterion (BIC) metric of all models as well as the Akaike information criterion (AIC) results for the full-mapping model. The errors are described in the erratum. (The following abstract of the original article appeared in record 2014-56216-001.) The 2-high-threshold (2HT) model of recognition memory assumes that test items result in distinct internal states: they are either detected or not, and the probability of responding at a particular confidence level that an item is "old" or "new" depends on the state-response mapping parameters. The mapping parameters are independent of the probability that an item yields a particular state (e.g., both strong and weak items that are detected as old have the same probability of producing a highest-confidence "old" response). We tested this conditional independence assumption by presenting nouns 1, 2, or 4 times. To maximize the strength of some items, "superstrong" items were repeated 4 times and encoded in conjunction with pleasantness, imageability, anagram, and survival processing tasks. The 2HT model failed to simultaneously capture the response rate data for all item classes, demonstrating that the data violated the conditional independence assumption. In contrast, a Gaussian signal detection model, which posits that the level of confidence that an item is "old" or "new" is a function of its continuous strength value, provided a good account of the data. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Towards the development of rapid screening techniques for shale gas core properties
NASA Astrophysics Data System (ADS)
Cave, Mark R.; Vane, Christopher; Kemp, Simon; Harrington, Jon; Cuss, Robert
2013-04-01
Shale gas has been produced for many years in the U.S.A. and forms around 8% of their total natural gas production. Recent testing for gas on the Fylde Coast in Lancashire, UK, suggests there are potentially large reserves which could be exploited. The increasing significance of shale gas has led to the need for a deeper understanding of shale behaviour. Many factors govern whether a particular shale will become a shale gas resource, including: i) organic matter abundance, type and thermal maturity; ii) porosity-permeability relationships and pore size distribution; iii) brittleness and its relationship to mineralogy and rock fabric. Measurements of these properties require sophisticated and time-consuming laboratory techniques (Josh et al., 2012), whereas rapid screening techniques could provide timely results and improve the efficiency and cost-effectiveness of exploration. In this study, portable techniques that provide rapid on-site measurements (X-ray fluorescence (XRF) and infrared (IR) spectroscopy) were calibrated against standard laboratory techniques (a Rock-Eval 6 analyser, Vinci Technologies; powder whole-rock XRD analysis carried out using a PANalytical X'Pert Pro series diffractometer equipped with a cobalt-target tube and X'Celerator detector, operated at 45 kV and 40 mA) to predict properties of potential shale gas material from core material from the Bowland Shale at Roosecote, south Cumbria. Preliminary work showed that, among the various mineralogical and organic matter properties of the core, regression models could predict total organic carbon content from the IR spectra with a 95 percentile confidence prediction error of 0.6% organic carbon; free hydrocarbons with a 95 percentile confidence prediction error of 0.6 mg HC/g rock; bound hydrocarbons with a 95 percentile confidence prediction error of 2.4 mg HC/g rock; mica content with a 95 percentile confidence prediction error of 14%; and quartz content with a 95 percentile confidence prediction error of 14%. Reference: Josh, M., Esteban, L., Delle Piane, C., Sarout, J., Dewhurst, D.N., Clennell, M.B., 2012. Journal of Petroleum Science and Engineering, 88-89, 107-124.
Implementing an error disclosure coaching model: A multicenter case study.
White, Andrew A; Brock, Douglas M; McCotter, Patricia I; Shannon, Sarah E; Gallagher, Thomas H
2017-01-01
National guidelines call for health care organizations to provide around-the-clock coaching for medical error disclosure. However, frontline clinicians may not always seek risk managers for coaching. As part of a demonstration project designed to improve patient safety and reduce malpractice liability, we trained multidisciplinary disclosure coaches at 8 health care organizations in Washington State. The training was highly rated by participants, although not all emerged confident in their coaching skill. This multisite intervention can serve as a model for other organizations looking to enhance existing disclosure capabilities. Success likely requires cultural change and repeated practice opportunities for coaches. © 2017 American Society for Healthcare Risk Management of the American Hospital Association.
Individual differences in conflict detection during reasoning.
Frey, Darren; Johnson, Eric D; De Neys, Wim
2018-05-01
Decades of reasoning and decision-making research have established that human judgment is often biased by intuitive heuristics. Recent "error" or bias detection studies have focused on reasoners' abilities to detect whether their heuristic answer conflicts with logical or probabilistic principles. A key open question is whether there are individual differences in this bias detection efficiency. Here we present three studies in which co-registration of different error detection measures (confidence, response time and confidence response time) allowed us to assess bias detection sensitivity at the individual participant level in a range of reasoning tasks. The results indicate that although most individuals show robust bias detection, as indexed by increased latencies and decreased confidence, there is a subgroup of reasoners who consistently fail to do so. We discuss theoretical and practical implications for the field.
Simulation techniques for estimating error in the classification of normal patterns
NASA Technical Reports Server (NTRS)
Whitsitt, S. J.; Landgrebe, D. A.
1974-01-01
Methods of efficiently generating and classifying samples with specified multivariate normal distributions are discussed. Conservative confidence tables for sample sizes are given for selective sampling. Simulation results are compared with classified training data. Techniques for comparing error and separability measures for two normal patterns are investigated and used to display the relationship between the error and the Chernoff bound.
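A compact example of the kind of comparison described: simulate samples from two specified multivariate normal patterns, estimate the classification error of the Bayes rule, and compare it with the Bhattacharyya (Chernoff s = 0.5) bound. The means and covariances are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
mu1, mu2 = np.zeros(2), np.array([1.5, 0.5])
S1 = np.eye(2)
S2 = np.array([[1.0, 0.3], [0.3, 1.5]])

def log_pdf(x, mu, S):
    d = x - mu
    return -0.5 * (d @ np.linalg.solve(S, d) + np.log(np.linalg.det(S)) + 2 * np.log(2 * np.pi))

n = 5000
x1 = rng.multivariate_normal(mu1, S1, n)
x2 = rng.multivariate_normal(mu2, S2, n)
err1 = np.mean([log_pdf(x, mu1, S1) < log_pdf(x, mu2, S2) for x in x1])
err2 = np.mean([log_pdf(x, mu2, S2) < log_pdf(x, mu1, S1) for x in x2])
simulated_error = 0.5 * (err1 + err2)       # equal priors assumed

# Bhattacharyya distance and the corresponding bound on the Bayes error
S = 0.5 * (S1 + S2)
dm = mu2 - mu1
B = 0.125 * dm @ np.linalg.solve(S, dm) + 0.5 * np.log(
    np.linalg.det(S) / np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
print("simulated error:", simulated_error, "  Bhattacharyya bound:", 0.5 * np.exp(-B))
```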
Water quality management using statistical analysis and time-series prediction model
NASA Astrophysics Data System (ADS)
Parmar, Kulwinder Singh; Bhardwaj, Rashmi
2014-12-01
This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness, and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an autoregressive integrated moving average (ARIMA) model, future values of the water quality parameters have been estimated. It is observed that the predictive model is useful at 95% confidence limits, and the curve is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. Also, it is observed that the predicted series is close to the original series, which indicates a close fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agriculture and industrial use.
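An ARIMA forecast with confidence limits of the kind used here can be sketched with statsmodels. The monthly series below is simulated rather than the Yamuna River data, and the (1,1,1) order is an assumption, not the fitted order reported by the authors.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
series = pd.Series(7.5 + np.cumsum(rng.normal(0.0, 0.05, 120)),   # e.g. a pH-like series
                   index=pd.date_range("2004-01-01", periods=120, freq="MS"))

fit = ARIMA(series, order=(1, 1, 1)).fit()
forecast = fit.get_forecast(steps=12)
print(forecast.predicted_mean)           # 12-month-ahead predictions
print(forecast.conf_int(alpha=0.05))     # 95% confidence limits
```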
Evaluation of dual-tip micromanometers during 21-day implantation in goats
NASA Technical Reports Server (NTRS)
Reister, C. A.; Koenig, S. C.; Schaub, J. D.; Ewert, D. L.; Swope, R. D.; Latham, R. D.; Fanton, J. W.; Convertino, V. A. (Principal Investigator)
1998-01-01
Investigative research efforts using a cardiovascular model required the determination of central circulatory haemodynamic and arterial system parameters for the evaluation of cardiovascular performance. These calculations required continuous beat-to-beat measurement of pressure within the four chambers of the heart and great vessels. Sensitivity and offset drift, longevity, and sources of error for eight 3F dual-tipped micromanometers were determined during 21 days of implantation in goats. Subjects were instrumented with pairs of chronically implanted fluid-filled access catheters in the left and right ventricles, through which dual-tipped (test) micromanometers were chronically inserted and single-tip (standard) micromanometers were acutely inserted. Acutely inserted sensors were calibrated daily and measured pressures were compared in vivo to the chronically inserted sensors. Comparison of the pre- and post-gain calibration of the chronically inserted sensors showed a mean sensitivity drift of 1.0 +/- 0.4% (99% confidence, n = 9 sensors) and mean offset drift of 5.0 +/- 1.5 mmHg (99% confidence, n = 9 sensors). Potential sources of error for these drifts were identified, and included measurement system inaccuracies, temperature drift, hydrostatic column gradients, and dynamic pressure changes. Based upon these findings, we determined that these micromanometers may be chronically inserted in high-pressure chambers for up to 17 days with an acceptable error, but should be limited to acute (hours) insertions in low-pressure applications.
Shim, Seong Hee; Sung, Kyung Rim; Kim, Joon Mo; Kim, Hyun Tae; Jeong, Jinho; Kim, Chan Yun; Lee, Mi Yeon; Park, Ki Ho
2017-01-01
To investigate the prevalence of open-angle glaucoma (OAG) in myopia by age. A cross-sectional study using a stratified, multistage, probability cluster survey. Participants in the Korean National Health and Nutrition Examination Survey between 2010 and 2011 were included. A standardized protocol was used to interview every participant and perform comprehensive ophthalmic examinations. Glaucoma was diagnosed according to the International Society of Geographical and Epidemiological Ophthalmology (ISGEO) criteria. After adjusting for age and sex, there was a positive correlation between OAG prevalence and increasing myopic refractive error, except in participants with hyperopia. Younger participants with higher myopic refractive error had higher OAG prevalence than older participants with lower myopic refractive error. Participants with high myopia (odds ratio [OR] 3.90, 95% confidence interval [CI] 2.30-6.59) had significantly greater age- and sex-adjusted odds ratios than did those with emmetropia who were younger than 60 years. These data suggest that OAG develops earlier in participants with high myopia than in others. There was a high prevalence of OAG in participants with high myopia, even in those 19-29 years of age. Therefore, OAG screening should be performed earlier in participants with high myopia than is suggested by traditional guidelines.
Unit of Measurement Used and Parent Medication Dosing Errors
Dreyer, Benard P.; Ugboaja, Donna C.; Sanchez, Dayana C.; Paul, Ian M.; Moreira, Hannah A.; Rodriguez, Luis; Mendelsohn, Alan L.
2014-01-01
BACKGROUND AND OBJECTIVES: Adopting the milliliter as the preferred unit of measurement has been suggested as a strategy to improve the clarity of medication instructions; teaspoon and tablespoon units may inadvertently endorse nonstandard kitchen spoon use. We examined the association between unit used and parent medication errors and whether nonstandard instruments mediate this relationship. METHODS: Cross-sectional analysis of baseline data from a larger study of provider communication and medication errors. English- or Spanish-speaking parents (n = 287) whose children were prescribed liquid medications in 2 emergency departments were enrolled. Medication error defined as: error in knowledge of prescribed dose, error in observed dose measurement (compared to intended or prescribed dose); >20% deviation threshold for error. Multiple logistic regression performed adjusting for parent age, language, country, race/ethnicity, socioeconomic status, education, health literacy (Short Test of Functional Health Literacy in Adults); child age, chronic disease; site. RESULTS: Medication errors were common: 39.4% of parents made an error in measurement of the intended dose, 41.1% made an error in the prescribed dose. Furthermore, 16.7% used a nonstandard instrument. Compared with parents who used milliliter-only, parents who used teaspoon or tablespoon units had twice the odds of making an error with the intended (42.5% vs 27.6%, P = .02; adjusted odds ratio=2.3; 95% confidence interval, 1.2–4.4) and prescribed (45.1% vs 31.4%, P = .04; adjusted odds ratio=1.9; 95% confidence interval, 1.03–3.5) dose; associations greater for parents with low health literacy and non–English speakers. Nonstandard instrument use partially mediated teaspoon and tablespoon–associated measurement errors. CONCLUSIONS: Findings support a milliliter-only standard to reduce medication errors. PMID:25022742
Unit of measurement used and parent medication dosing errors.
Yin, H Shonna; Dreyer, Benard P; Ugboaja, Donna C; Sanchez, Dayana C; Paul, Ian M; Moreira, Hannah A; Rodriguez, Luis; Mendelsohn, Alan L
2014-08-01
Adopting the milliliter as the preferred unit of measurement has been suggested as a strategy to improve the clarity of medication instructions; teaspoon and tablespoon units may inadvertently endorse nonstandard kitchen spoon use. We examined the association between unit used and parent medication errors and whether nonstandard instruments mediate this relationship. Cross-sectional analysis of baseline data from a larger study of provider communication and medication errors. English- or Spanish-speaking parents (n = 287) whose children were prescribed liquid medications in 2 emergency departments were enrolled. Medication error defined as: error in knowledge of prescribed dose, error in observed dose measurement (compared to intended or prescribed dose); >20% deviation threshold for error. Multiple logistic regression performed adjusting for parent age, language, country, race/ethnicity, socioeconomic status, education, health literacy (Short Test of Functional Health Literacy in Adults); child age, chronic disease; site. Medication errors were common: 39.4% of parents made an error in measurement of the intended dose, 41.1% made an error in the prescribed dose. Furthermore, 16.7% used a nonstandard instrument. Compared with parents who used milliliter-only, parents who used teaspoon or tablespoon units had twice the odds of making an error with the intended (42.5% vs 27.6%, P = .02; adjusted odds ratio=2.3; 95% confidence interval, 1.2-4.4) and prescribed (45.1% vs 31.4%, P = .04; adjusted odds ratio=1.9; 95% confidence interval, 1.03-3.5) dose; associations greater for parents with low health literacy and non-English speakers. Nonstandard instrument use partially mediated teaspoon and tablespoon-associated measurement errors. Findings support a milliliter-only standard to reduce medication errors. Copyright © 2014 by the American Academy of Pediatrics.
Validation of TRMM precipitation radar monthly rainfall estimates over Brazil
NASA Astrophysics Data System (ADS)
Franchito, Sergio H.; Rao, V. Brahmananda; Vasques, Ana C.; Santo, Clovis M. E.; Conforte, Jorge C.
2009-01-01
In an attempt to validate the Tropical Rainfall Measuring Mission (TRMM) precipitation radar (PR) over Brazil, TRMM PR estimates are compared with rain gauge station data from Agência Nacional de Energia Elétrica (ANEEL). The analysis is conducted on a seasonal basis and considers five geographic regions with different precipitation regimes. The results showed that TRMM PR seasonal rainfall is well correlated with ANEEL rainfall (correlation coefficients are significant at the 99% confidence level) over most of Brazil. The random and systematic errors of TRMM PR are sensitive to seasonal and regional differences. During December to February and March to May, TRMM PR rainfall is reliable over Brazil. In June to August (September to November) TRMM PR estimates are only reliable in the Amazonian and southern (Amazonian and southeastern) regions. In the other regions the relative RMS errors are larger than 50%, indicating that the random errors are high.
Hashemi, Hassan; Rezvan, Farhad; Ostadimoghaddam, Hadi; Abdollahi, Majid; Hashemi, Maryam; Khabazkhoob, Mehdi
2013-01-01
The prevalence and determinants of myopia and hyperopia were determined in a rural population of Iran. Population-based cross-sectional study. Using random cluster sampling, 13 of the 83 villages of Khaf County in the north east of Iran were selected. Data from 2001 people over the age of 15 years were analysed. Visual acuity measurement, non-cycloplegic refraction and eye examinations were done at the Mobile Eye Clinic. The prevalence of myopia and hyperopia was based on a spherical equivalent worse than -0.5 dioptre and +0.5 dioptre, respectively. The prevalence of myopia, hyperopia and anisometropia in the total study sample was 28% (95% confidence interval: 25.9-30.2), 19.2% (95% confidence interval: 17.3-21.1), and 11.5% (95% confidence interval: 10.0-13.1), respectively. In the over-40 population, the prevalence of myopia and hyperopia was 32.5% (95% confidence interval: 28.9-36.1) and 27.9% (95% confidence interval: 24.5-31.3), respectively. In the multiple regression model for this group, myopia strongly correlated with cataract (odds ratio = 1.98, 95% confidence interval: 1.33-2.93), and hyperopia only correlated with age (P < 0.001). The prevalence of high myopia and high hyperopia was 1.5% and 4.6%, respectively. In the multiple regression model, anisometropia significantly correlated with age (odds ratio = 1.04) and cataract (odds ratio = 5.2) (P < 0.001). The prevalence of myopia and anisometropia was higher than that found in previous studies of urban populations in Iran, especially in the elderly. Cataract was the only variable that correlated with both myopia and anisometropia. © 2013 The Authors. Clinical and Experimental Ophthalmology © 2013 Royal Australian and New Zealand College of Ophthalmologists.
Grid Resolution Study over Operability Space for a Mach 1.7 Low Boom External Compression Inlet
NASA Technical Reports Server (NTRS)
Anderson, Bernhard H.
2014-01-01
This paper presents a statistical methodology whereby the probability limits associated with CFD grid resolution of inlet flow analysis can be determined, providing quantitative information on the distribution of that error over the specified operability range. The objective of this investigation is to quantify the effects of both random (precision) and systemic (biasing) errors associated with grid resolution in the analysis of the Lockheed Martin Company (LMCO) N+2 Low Boom external compression supersonic inlet. The study covers the entire operability space as defined previously by the High Speed Civil Transport (HSCT) High Speed Research (HSR) program goals. The probability limits, in terms of a 95.0% confidence interval on the analysis data, were evaluated for four ARP1420 inlet metrics, namely (1) total pressure recovery (PFAIP), (2) radial hub distortion (DPH/P), (3) radial tip distortion (DPT/P), and (4) circumferential distortion (DPC/P). In general, the resulting +/-0.95 delta Y interval was unacceptably large in comparison to the stated goals of the HSCT program. Therefore, the conclusion was reached that the "standard grid" size was insufficient for this type of analysis. However, in examining the statistical data, it was determined that the CFD analysis results at the outer fringes of the operability space were the determining factor in the measure of statistical uncertainty. Adequate grids are grids that are free of biasing (systemic) errors and exhibit low random (precision) errors in comparison to their operability goals. In order to be 100% certain that the operability goals have indeed been achieved for each of the inlet metrics, the Y +/- 0.95 delta Y limit must fall inside the stated operability goals. For example, if the operability goal for DPC/P circumferential distortion is <=0.06, then the forecast Y for DPC/P plus the 95% confidence interval on DPC/P, i.e. +/-0.95 delta Y, must be less than or equal to 0.06.
The Mathematics of Computer Error.
ERIC Educational Resources Information Center
Wood, Eric
1988-01-01
Why a computer error occurred is considered by analyzing the binary system and decimal fractions. How the computer stores numbers is then described. Knowledge of the mathematics behind computer operation is important if one wishes to understand and have confidence in the results of computer calculations. (MNS)
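The article's point can be seen directly in any language with binary floating point; for instance, this short Python snippet shows that 0.1 has no exact binary representation, so simple decimal arithmetic picks up rounding error.

```python
import math

a = 0.1 + 0.2
print(a == 0.3)            # False: both operands are rounded binary approximations
print(f"{0.1:.20f}")       # 0.10000000000000000555...
print(f"{a:.20f}")         # 0.30000000000000004441...

# The usual remedy is to compare within a tolerance rather than for exact equality.
print(math.isclose(a, 0.3))  # True
```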
Patient safety awareness among Undergraduate Medical Students in Pakistani Medical School.
Kamran, Rizwana; Bari, Attia; Khan, Rehan Ahmed; Al-Eraky, Mohamed
2018-01-01
To measure the level of awareness of patient safety among undergraduate medical students in a Pakistani medical school and to find differences with respect to gender and prior experience with medical error. This cross-sectional study was conducted at the University of Lahore (UOL), Pakistan, from January to March 2017, and comprised final year medical students. Data were collected using the 'APSQ-III' questionnaire on a 7-point Likert scale. Eight questions were reverse coded. The survey was anonymous. SPSS version 20 was used for statistical analysis. The questionnaire was completed by 122 students, an 81% response rate. The best score, 6.17, was given for 'team functioning', followed by 6.04 for 'long working hours as a cause of medical error'. The domains regarding involvement of the patient, confidence to report medical errors, and the role of training and learning on patient safety scored high, in the agreed range of >5. Reverse-coded questions about 'professional incompetence as an error cause' and 'disclosure of errors' showed negative perceptions. No significant differences in perceptions were found with respect to gender or prior experience with medical error (p > 0.05). Undergraduate medical students at UOL had a positive attitude towards patient safety. However, there were misconceptions about causes of medical errors and error disclosure among students, and patient safety education needs to be incorporated into the medical curriculum of Pakistan.
Ballesteros Peña, Sendoa
2013-04-01
To estimate the frequency of therapeutic errors and to evaluate the diagnostic accuracy of automated external defibrillators in recognizing shockable rhythms. A retrospective descriptive study. Nine basic life support units from Biscay (Spain). A total of 201 patients with cardiac arrest from 2006 to 2011 were included. The suitability of treatment (shock or no shock) after each analysis was studied and errors were identified. The sensitivity, specificity and predictive values with 95% confidence intervals were then calculated. A total of 811 electrocardiographic rhythm analyses were obtained, of which 120 (14.1%), from 30 patients, corresponded to shockable rhythms. Sensitivity and specificity for appropriate automated external defibrillator management of a shockable rhythm were 85% (95% CI, 77.5% to 90.3%) and 100% (95% CI, 99.4% to 100%), respectively. Positive and negative predictive values were 100% (95% CI, 96.4% to 100%) and 97.5% (95% CI, 96% to 98.4%), respectively. There were 18 (2.2%; 95% CI, 1.3% to 3.5%) errors associated with defibrillator management, all relating to cases of shockable rhythms that were not shocked. One error was operator dependent, 6 were defibrillator dependent (caused by interaction with pacemakers), and 11 were unclassified. Automated external defibrillators have a very high specificity and moderately high sensitivity. There are few operator-dependent errors. Implanted pacemakers interfere with defibrillator analyses. Copyright © 2012 Elsevier España, S.L. All rights reserved.
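The confidence intervals quoted above are consistent with a Wilson score interval for a proportion; the sketch below recomputes the sensitivity and specificity intervals from the counts implied by the abstract (102 of 120 shockable analyses shocked; 691 of 691 non-shockable analyses correctly withheld).

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

print("sensitivity 95% CI:", wilson_ci(102, 120))   # ~ (0.775, 0.903)
print("specificity 95% CI:", wilson_ci(691, 691))   # ~ (0.994, 1.000)
```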
2013-09-01
A Bayesian-updating-based confidence metric is used to compare several different aerothermal model predictions with the experimental data; a 5% measurement uncertainty is assumed for the aerodynamic pressure and heat flux measurements. [Remainder of this record's abstract not recoverable.]
Markov chains for testing redundant software
NASA Technical Reports Server (NTRS)
White, Allan L.; Sjogren, Jon A.
1988-01-01
A preliminary design for a validation experiment has been developed that addresses several problems unique to assuring the extremely high quality of multiple-version programs in process-control software. The procedure uses Markov chains to model the error states of the multiple version programs. The programs are observed during simulated process-control testing, and estimates are obtained for the transition probabilities between the states of the Markov chain. The experimental Markov chain model is then expanded into a reliability model that takes into account the inertia of the system being controlled. The reliability of the multiple version software is computed from this reliability model at a given confidence level using confidence intervals obtained for the transition probabilities during the experiment. An example demonstrating the method is provided.
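A small sketch of the estimation step described: count observed transitions between error states of a simulated trace, form maximum-likelihood transition probabilities, and attach normal-approximation confidence intervals. The three-state chain and its probabilities are invented for illustration, not taken from the experiment design.

```python
import numpy as np

def transition_estimates(states, n_states, z=1.96):
    """MLE transition probabilities with normal-approximation confidence intervals."""
    counts = np.zeros((n_states, n_states))
    for s, t in zip(states[:-1], states[1:]):
        counts[s, t] += 1
    rows = counts.sum(axis=1, keepdims=True)
    p = np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)
    se = np.sqrt(np.divide(p * (1 - p), rows, out=np.zeros_like(p), where=rows > 0))
    return p, np.clip(p - z * se, 0, 1), np.clip(p + z * se, 0, 1)

# Simulated process-control error-state trace from a hypothetical 3-state chain
rng = np.random.default_rng(0)
true_P = np.array([[0.90, 0.08, 0.02],
                   [0.50, 0.45, 0.05],
                   [0.30, 0.10, 0.60]])
trace = [0]
for _ in range(5000):
    trace.append(rng.choice(3, p=true_P[trace[-1]]))

p_hat, lower, upper = transition_estimates(trace, 3)
print(np.round(p_hat, 3))
print(np.round(lower, 3), np.round(upper, 3))
```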
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siranosian, Antranik Antonio; Schembri, Philip Edward; Miller, Nathan Andrew
The Benchmark Extensible Tractable Testbed Engineering Resource (BETTER) is proposed as a family of modular test bodies intended to support engineering capability development by helping to identify weaknesses and needs. Weapon systems, subassemblies, and components are often complex and difficult to test and analyze, resulting in low confidence and high uncertainties in experimental and simulated results. The complexities make it difficult to distinguish between inherent uncertainties and errors due to insufficient capabilities. BETTER test bodies will first use simplified geometries and materials such that testing, data collection, modeling and simulation can be accomplished with high confidence and low uncertainty. Modifications and combinations of simple and well-characterized BETTER test bodies can then be used to increase complexity in order to reproduce relevant mechanics and identify weaknesses. BETTER can provide both immediate and long-term improvements in testing and simulation capabilities. This document presents the motivation, concept, benefits and examples for BETTER.
Fan, Jianqing; Liao, Yuan; Shi, Xiaofeng
2014-01-01
The risk of a large portfolio is often estimated by substituting a good estimator of the volatility matrix. However, the accuracy of such a risk estimator is largely unknown. We study factor-based risk estimators under a large amount of assets, and introduce a high-confidence level upper bound (H-CLUB) to assess the estimation. The H-CLUB is constructed using the confidence interval of risk estimators with either known or unknown factors. We derive the limiting distribution of the estimated risks in high dimensionality. We find that when the dimension is large, the factor-based risk estimators have the same asymptotic variance no matter whether the factors are known or not, which is slightly smaller than that of the sample covariance-based estimator. Numerically, H-CLUB outperforms the traditional crude bounds, and provides an insightful risk assessment. In addition, our simulated results quantify the relative error in the risk estimation, which is usually negligible using 3-month daily data. PMID:26195851
Pazó, Jose A.; Granada, Enrique; Saavedra, Ángeles; Eguía, Pablo; Collazo, Joaquín
2010-01-01
The objective of this study was to develop a methodology for the determination of the maximum sampling error and confidence intervals of thermal properties obtained from thermogravimetric analysis (TG), including moisture, volatile matter, fixed carbon and ash content. The sampling procedure of the TG analysis was of particular interest and was conducted with care. The results of the present study were compared to those of a prompt analysis, and a correlation between the mean values and maximum sampling errors of the methods were not observed. In general, low and acceptable levels of uncertainty and error were obtained, demonstrating that the properties evaluated by TG analysis were representative of the overall fuel composition. The accurate determination of the thermal properties of biomass with precise confidence intervals is of particular interest in energetic biomass applications. PMID:20717532
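In the simplest setting, the maximum sampling error and confidence interval for a TG-derived property reduce to a t-based interval for the mean of replicate determinations. The sketch below is a generic illustration under that assumption, with invented ash-content replicates; it is not the paper's procedure.

```python
import numpy as np
from scipy import stats

# Hypothetical replicate ash-content determinations (% dry basis) from TG analysis.
ash = np.array([1.42, 1.38, 1.45, 1.40, 1.37, 1.44])

n = ash.size
mean = ash.mean()
s = ash.std(ddof=1)
t_crit = stats.t.ppf(0.975, df=n - 1)

max_sampling_error = t_crit * s / np.sqrt(n)   # half-width of the 95% CI
print(f"mean = {mean:.3f} %, maximum sampling error = ±{max_sampling_error:.3f} %")
print(f"95% CI: ({mean - max_sampling_error:.3f}, {mean + max_sampling_error:.3f})")
```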
Fraction Operations: An Examination of Prospective Teachers' Errors, Confidence, and Bias
ERIC Educational Resources Information Center
Young, Elaine; Zientek, Linda
2011-01-01
Fractions are important in young students' understanding of rational numbers and proportional reasoning. The teacher is fundamental in developing student understanding and competency in working with fractions. The present study spanned five years and investigated prospective teachers' competency and confidence with fraction operations as they…
Correcting a Metacognitive Error: Feedback Increases Retention of Low-Confidence Correct Responses
ERIC Educational Resources Information Center
Butler, Andrew C.; Karpicke, Jeffrey D.; Roediger, Henry L., III
2008-01-01
Previous studies investigating posttest feedback have generally conceptualized feedback as a method for correcting erroneous responses, giving virtually no consideration to how feedback might promote learning of correct responses. Here, the authors show that when correct responses are made with low confidence, feedback serves to correct this…
Mustanski, Brian; Ryan, Daniel T; Garofalo, Robert
2014-07-01
Young men who have sex with men (YMSM) are disproportionately infected with sexually transmitted infections (STIs). Condom use is the most widely available means of preventing the transmission of STIs, but effectiveness depends on correct use. Condom errors such as using an oil-based lubricant have been associated with condom failures such as breakage. Little research has been done on the impact of condom problems on the likelihood of contracting an STI. Data came from Crew 450, a longitudinal study of HIV risk among YMSM (N = 450). All self-report data were collected using computer-assisted self-interview technology, and clinical testing was done for gonorrhea, chlamydia, and HIV. Nearly all participants made at least 1 error, with high rates of using oil-based lubricant and incomplete use. No differences were found in rates of condom problems during anal sex with a man versus vaginal sex with a woman. Black YMSM reported significantly higher use of oil-based lubricants than white and Hispanic YMSM, an error significantly associated with HIV status (adjusted odds ratio, 2.60; 95% confidence interval, 1.04-6.51). Participants who reported a condom failure were significantly more likely to have an STI (adjusted odds ratio, 3.27; 95% confidence interval, 1.31-8.12). Young men who have sex with men report high rates of condom problems, and condom failures were significantly associated with STIs after controlling for unprotected sex. Educational programs are needed to enhance correct condom use among YMSM. Further research is needed on the role of oil-based lubricants in explaining racial disparities in STIs and HIV.
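The adjusted odds ratios above come from multivariable models, but the basic calculation of an odds ratio with a 95% confidence interval from a 2x2 table (condom failure versus STI status) can be sketched as follows. The counts are hypothetical and the interval is the unadjusted Woolf (log-OR) interval, not the authors' adjusted estimate.

```python
import numpy as np

# Hypothetical 2x2 table: rows = condom failure (yes/no), columns = STI (yes/no).
a, b = 14, 46    # failure & STI, failure & no STI
c, d = 20, 370   # no failure & STI, no failure & no STI

or_hat = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # Woolf standard error
z = 1.96
lo, hi = np.exp(np.log(or_hat) + np.array([-z, z]) * se_log_or)
print(f"OR = {or_hat:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```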
An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the un-certainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two observer, measurement error only problem.
Technological Advancements and Error Rates in Radiation Therapy Delivery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margalit, Danielle N., E-mail: dmargalit@partners.org; Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA; Chen, Yu-Hui
2011-11-15
Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique. There was a lower error rate with IMRT compared with 3D/conventional RT, highlighting the need for sustained vigilance against errors common to more traditional treatment techniques.
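A minimal sketch of the kind of comparison reported here (error counts per delivered fraction by technique, compared with Fisher's exact test) follows. The split of fractions between IMRT and 3D/conventional is invented for illustration, since only the total number of fractions and errors are given above, so the output does not reproduce the study's result.

```python
from scipy.stats import fisher_exact

# Hypothetical split of fractions and reported errors by technique.
imrt_errors, imrt_fractions = 19, 70000
conv_errors, conv_fractions = 136, 171546

table = [[imrt_errors, imrt_fractions - imrt_errors],
         [conv_errors, conv_fractions - conv_errors]]

odds_ratio, p_value = fisher_exact(table)
print(f"error rate IMRT = {imrt_errors / imrt_fractions:.4%}, "
      f"3D/conventional = {conv_errors / conv_fractions:.4%}")
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, p = {p_value:.4g}")
```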
Johnson, Reva E; Kording, Konrad P; Hargrove, Levi J; Sensinger, Jonathon W
2017-06-01
In this paper we asked the question: if we artificially raise the variability of torque control signals to match that of EMG, do subjects make similar errors and have similar uncertainty about their movements? We answered this question using two experiments in which subjects used three different control signals: torque, torque+noise, and EMG. First, we measured error on a simple target-hitting task in which subjects received visual feedback only at the end of their movements. We found that even when the signal-to-noise ratio was equal across EMG and torque+noise control signals, EMG resulted in larger errors. Second, we quantified uncertainty by measuring the just-noticeable difference of a visual perturbation. We found that for equal errors, EMG resulted in higher movement uncertainty than both torque and torque+noise. The differences suggest that performance and confidence are influenced by more than just the noisiness of the control signal, and suggest that other factors, such as the user's ability to incorporate feedback and develop accurate internal models, also have significant impacts on the performance and confidence of a person's actions. We theorize that users have difficulty distinguishing between random and systematic errors for EMG control, and future work should examine in more detail the types of errors made with EMG control.
Study of style effects on OCR errors in the MEDLINE database
NASA Astrophysics Data System (ADS)
Garrison, Penny; Davis, Diane L.; Andersen, Tim L.; Barney Smith, Elisa H.
2005-01-01
The National Library of Medicine has developed a system for the automatic extraction of data from scanned journal articles to populate the MEDLINE database. Although the 5-engine OCR system used in this process exhibits good performance overall, it does make errors in character recognition that must be corrected in order for the process to achieve the requisite accuracy. The correction process works by feeding words that have characters with less than 100% confidence (as determined automatically by the OCR engine) to a human operator who then must manually verify the word or correct the error. The majority of these errors are contained in the affiliation information zone where the characters are in italics or small fonts. Therefore only affiliation information data is used in this research. This paper examines the correlation between OCR errors and various character attributes in the MEDLINE database, such as font size, italics, bold, etc. and OCR confidence levels. The motivation for this research is that if a correlation between the character style and types of errors exists it should be possible to use this information to improve operator productivity by increasing the probability that the correct word option is presented to the human editor. We have determined that this correlation exists, in particular for the case of characters with diacritics.
Smith-Keiling, Beverly L.; Swanson, Lidia K.; Dehnbostel, Joanne M.
2018-01-01
In seeking to support diversity, one challenge lies in adequately supporting and assessing science cognitions in a writing-intensive Biochemistry laboratory course when highly engaged Asian English language learners (Asian ELLs) struggle to communicate and make novice errors in English. Because they may understand advanced science concepts, but are not being adequately assessed for their deeper scientific understanding, we sought and examined interventions. We hypothesized that inquiry strategies, scaffolded learning through peer evaluation, and individualized tools that build writing communication skills would increase confidence. To assess scientific thinking, Linguistic Inquiry Word Count (LIWC) software measured underlying analytic and cognitive features of writing despite grammatical errors. To determine whether interventions improved student experience or learning outcomes, we investigated a cross-sectional sample of cases within experimental groups (n = 19) using a mixed-methods approach. Overall trends of paired t-tests from Asian ELLs’ pre/post surveys showed gains in six measures of writing confidence, with some statistically significant gains in confidence in writing skill (p=0.025) and in theory (p≤0.05). LIWC scores for Asian ELL and native-English-speaking students were comparable except for increased cognitive scores for Asian ELLs and detectable individual differences. An increase in Asian ELLs’ cognitive scores in spring/summer over fall was observed (p = 0.04), likely as a result of greater cognitive processes with language use, inquiry-related interventions, and peer evaluation. Individual cases further elucidated challenges faced by Asian ELL students. LIWC scores of student writing may be useful in determining underlying understanding. Interventions designed to provide support and strengthen the writing of Asian ELL students may also improve their confidence in writing, even if improvement is gradual. PMID:29904544
Improved Margin of Error Estimates for Proportions in Business: An Educational Example
ERIC Educational Resources Information Center
Arzumanyan, George; Halcoussis, Dennis; Phillips, G. Michael
2015-01-01
This paper presents the Agresti & Coull "Adjusted Wald" method for computing confidence intervals and margins of error for common proportion estimates. The presented method is easily implementable by business students and practitioners and provides more accurate estimates of proportions particularly in extreme samples and small…
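The Agresti & Coull "Adjusted Wald" interval itself is straightforward: add z²/2 pseudo-successes and z² pseudo-trials, then apply the usual Wald formula. A small sketch with an arbitrary small-sample example is given below; the sample counts are invented.

```python
import numpy as np
from scipy.stats import norm

def adjusted_wald(successes, n, conf=0.95):
    """Agresti-Coull 'Adjusted Wald' confidence interval for a proportion."""
    z = norm.ppf(1 - (1 - conf) / 2)
    n_adj = n + z**2
    p_adj = (successes + z**2 / 2) / n_adj
    margin = z * np.sqrt(p_adj * (1 - p_adj) / n_adj)
    return p_adj - margin, p_adj + margin

# Example: 2 defective items in a sample of 40 (an 'extreme' small-sample case).
lo, hi = adjusted_wald(2, 40)
print(f"estimate = {2/40:.3f}, adjusted Wald 95% CI = ({lo:.3f}, {hi:.3f})")
```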
Prein, Andreas F; Gobiet, Andreas
2017-01-01
Gridded precipitation data sets are frequently used to evaluate climate models or to remove model output biases. Although precipitation data are error prone due to the high spatio-temporal variability of precipitation and due to considerable measurement errors, relatively few attempts have been made to account for observational uncertainty in model evaluation or in bias correction studies. In this study, we compare three types of European daily data sets featuring two Pan-European data sets and a set that combines eight very high-resolution station-based regional data sets. Furthermore, we investigate seven widely used, larger scale global data sets. Our results demonstrate that the differences between these data sets have the same magnitude as precipitation errors found in regional climate models. Therefore, including observational uncertainties is essential for climate studies, climate model evaluation, and statistical post-processing. Following our results, we suggest the following guidelines for regional precipitation assessments. (1) Include multiple observational data sets from different sources (e.g. station, satellite, reanalysis based) to estimate observational uncertainties. (2) Use data sets with high station densities to minimize the effect of precipitation undersampling (may induce about 60% error in data sparse regions). The information content of a gridded data set is mainly related to its underlying station density and not to its grid spacing. (3) Consider undercatch errors of up to 80% in high latitudes and mountainous regions. (4) Analyses of small-scale features and extremes are especially uncertain in gridded data sets. For higher confidence, use climate-mean and larger scale statistics. In conclusion, neglecting observational uncertainties potentially misguides climate model development and can severely affect the results of climate change impact assessments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Suzanne B., E-mail: Suzannne.evans@yale.edu; Yu, James B.; Chagpar, Anees
2012-10-01
Purpose: To analyze error disclosure attitudes of radiation oncologists and to correlate error disclosure beliefs with survey-assessed disclosure behavior. Methods and Materials: With institutional review board exemption, an anonymous online survey was devised. An email invitation was sent to radiation oncologists (American Society for Radiation Oncology [ASTRO] gold medal winners, program directors and chair persons of academic institutions, and former ASTRO lecturers) and residents. A disclosure score was calculated based on the number of full, partial, or no disclosure responses chosen to the vignette-based questions, and correlation was attempted with attitudes toward error disclosure. Results: The survey received 176 responses: 94.8% of respondents considered themselves more likely to disclose in the setting of a serious medical error; 72.7% of respondents did not feel it mattered who was responsible for the error in deciding to disclose, and 3.9% felt more likely to disclose if someone else was responsible; 38.0% of respondents felt that disclosure increased the likelihood of a lawsuit, and 32.4% felt disclosure decreased the likelihood of lawsuit; 71.6% of respondents felt near misses should not be disclosed; 51.7% thought that minor errors should not be disclosed; 64.7% viewed disclosure as an opportunity for forgiveness from the patient; and 44.6% considered the patient's level of confidence in them to be a factor in disclosure. For a scenario that could be considered a non-harmful error, 78.9% of respondents would not contact the family. Respondents with high disclosure scores were more likely to feel that disclosure was an opportunity for forgiveness (P=.003) and to have never seen major medical errors (P=.004). Conclusions: The surveyed radiation oncologists chose to respond with full disclosure at a high rate, although ideal disclosure practices were not uniformly adhered to beyond the initial decision to disclose the occurrence of the error.
Probabilistic confidence for decisions based on uncertain reliability estimates
NASA Astrophysics Data System (ADS)
Reid, Stuart G.
2013-05-01
Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.
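One way to read "probabilistic confidence" is as the probability that the true failure probability lies below an acceptable target, given the uncertainty in its estimate. The sketch below illustrates that idea with an assumed lognormal uncertainty on the estimated failure probability; the distributional form and all numbers are assumptions for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed: estimated failure probability with lognormal estimation uncertainty.
pf_estimate = 1e-4
log_sd = 0.8                      # uncertainty in ln(pf) due to estimation error
pf_target = 3e-4                  # acceptable failure probability

samples = rng.lognormal(mean=np.log(pf_estimate), sigma=log_sd, size=200_000)
confidence = np.mean(samples <= pf_target)
print(f"probabilistic confidence P(true pf <= target) = {confidence:.3f}")
```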
Prevalence of Refractive Errors among High School Students in Western Iran
Hashemi, Hassan; Rezvan, Farhad; Beiranvand, Asghar; Papi, Omid-Ali; Hoseini Yazdi, Hosein; Ostadimoghaddam, Hadi; Yekta, Abbas Ali; Norouzirad, Reza; Khabazkhoob, Mehdi
2014-01-01
Purpose To determine the prevalence of refractive errors among high school students. Methods In a cross-sectional study, we applied stratified cluster sampling on high school students of Aligoudarz, Western Iran. Examinations included visual acuity, non-cycloplegic refraction by autorefraction and fine tuning with retinoscopy. Myopia and hyperopia were defined as spherical equivalent of -0.5/+0.5 diopter (D) or worse, respectively; astigmatism was defined as cylindrical error >0.5 D and anisometropia as an interocular difference in spherical equivalent exceeding 1 D. Results Of 451 selected students, 438 participated in the study (response rate, 97.0%). Data from 434 subjects with mean age of 16±1.3 (range, 14 to 21) years including 212 (48.8%) male subjects was analyzed. The prevalence of myopia, hyperopia and astigmatism was 29.3% [95% confidence interval (CI), 25-33.6%], 21.7% (95%CI, 17.8-25.5%), and 20.7% (95%CI, 16.9-24.6%), respectively. The prevalence of myopia increased significantly with age [odds ratio (OR)=1.30, P=0.003] and was higher among boys (OR=3.10, P<0.001). The prevalence of hyperopia was significantly higher in girls (OR=0.49, P=0.003). The prevalence of astigmatism was 25.9% in boys and 15.8% in girls (OR=2.13, P=0.002). The overall prevalence of high myopia and high hyperopia were 0.5% and 1.2%, respectively. The prevalence of with-the-rule, against-the-rule, and oblique astigmatism was 14.5%, 4.8% and 1.4%, respectively. Overall, 4.6% (95%CI, 2.6-6.6%) of subjects were anisometropic. Conclusion More than half of high school students in Aligoudarz had at least one type of refractive error. Compared to similar studies, the prevalence of refractive errors was high in this age group. PMID:25279126
Boiret, Mathieu; Meunier, Loïc; Ginot, Yves-Michel
2011-02-20
A near infrared (NIR) method was developed for determination of tablet potency of active pharmaceutical ingredient (API) in a complex coated tablet matrix. The calibration set contained samples from laboratory and production scale batches. The reference values were obtained by high performance liquid chromatography (HPLC) and partial least squares (PLS) regression was used to establish a model. The model was challenged by calculating tablet potency of two external test sets. Root mean square errors of prediction were respectively equal to 2.0% and 2.7%. To use this model with a second spectrometer from the production field, a calibration transfer method called piecewise direct standardisation (PDS) was used. After the transfer, the root mean square error of prediction of the first test set was 2.4% compared to 4.0% without transferring the spectra. A statistical technique using bootstrap of PLS residuals was used to estimate confidence intervals of tablet potency calculations. This method requires an optimised PLS model, selection of the bootstrap number and determination of the risk. In the case of a chemical analysis, the tablet potency value will be included within the confidence interval calculated by the bootstrap method. An easy to use graphical interface was developed to easily determine if the predictions, surrounded by minimum and maximum values, are within the specifications defined by the regulatory organisation. Copyright © 2010 Elsevier B.V. All rights reserved.
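The workflow described above (a PLS calibration evaluated by root mean square error of prediction, with a residual bootstrap used to bracket each predicted tablet potency) can be sketched compactly. The code below uses random stand-in "spectra" rather than real NIR data, omits the PDS calibration transfer step, and uses a simplified residual bootstrap, so it is an illustration of the idea rather than the authors' method.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)

# Stand-in 'spectra' (n_samples x n_wavelengths) and HPLC reference potencies.
X_cal = rng.normal(size=(60, 200))
y_cal = 100 + X_cal[:, :5].sum(axis=1) + rng.normal(scale=1.0, size=60)
X_test = rng.normal(size=(20, 200))
y_test = 100 + X_test[:, :5].sum(axis=1) + rng.normal(scale=1.0, size=20)

pls = PLSRegression(n_components=5).fit(X_cal, y_cal)
y_pred = pls.predict(X_test).ravel()
rmsep = np.sqrt(np.mean((y_pred - y_test) ** 2))

# Simplified bootstrap of calibration residuals to bracket each prediction.
residuals = y_cal - pls.predict(X_cal).ravel()
boot = y_pred[:, None] + rng.choice(residuals, size=(len(y_pred), 2000))
lower, upper = np.percentile(boot, [2.5, 97.5], axis=1)

print(f"RMSEP = {rmsep:.2f}")
print("first prediction with 95% interval:",
      f"{y_pred[0]:.1f} ({lower[0]:.1f}, {upper[0]:.1f})")
```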
NASA Technical Reports Server (NTRS)
Amer, Tahani; Tripp, John; Tcheng, Ping; Burkett, Cecil; Sealey, Bradley
2004-01-01
This paper presents the calibration results and uncertainty analysis of a high-precision reference pressure measurement system currently used in wind tunnels at the NASA Langley Research Center (LaRC). Sensors, calibration standards, and measurement instruments are subject to errors due to aging, drift with time, environment effects, transportation, the mathematical model, the calibration experimental design, and other factors. Errors occur at every link in the chain of measurements and data reduction from the sensor to the final computed results. At each link of the chain, bias and precision uncertainties must be separately estimated for facility use, and are combined to produce overall calibration and prediction confidence intervals for the instrument, typically at a 95% confidence level. The uncertainty analysis and calibration experimental designs used herein, based on techniques developed at LaRC, employ replicated experimental designs for efficiency, separate estimation of bias and precision uncertainties, and detection of significant parameter drift with time. Final results are presented, including calibration confidence intervals and prediction intervals given as functions of the applied inputs rather than as a fixed percentage of the full-scale value. System uncertainties are propagated from the initial reference pressure standard to the calibrated instrument as a working standard in the facility. Among the several parameters that can affect the overall results are operating temperature, atmospheric pressure, humidity, and facility vibration. Effects of factors such as initial zeroing and temperature are investigated. The effects of the identified parameters on system performance and accuracy are discussed.
Kim, Eun Chul; Morgan, Ian G.; Kakizaki, Hirohiko; Kang, Seungbum; Jee, Donghyun
2013-01-01
Purpose To examine the prevalence and risk factors of refractive errors in a representative Korean population aged 20 years old or older. Methods A total of 23,392 people aged 20+ years were selected for the Korean National Health and Nutrition Survey 2008–2011, using stratified, multistage, clustered sampling. Refractive error was measured by autorefraction without cycloplegia, and interviews were performed regarding associated risk factors including gender, age, height, education level, parent's education level, economic status, light exposure time, and current smoking history. Results Of 23,392 participants, refractive errors were examined in 22,562 persons, including 21,356 subjects with phakic eyes. The overall prevalences of myopia (< -0.5 D), high myopia (< -6.0 D), and hyperopia (> 0.5 D) were 48.1% (95% confidence interval [CI], 47.4–48.8), 4.0% (CI, 3.7–4.3), and 24.2% (CI, 23.6–24.8), respectively. The prevalence of myopia sharply decreased from 78.9% (CI, 77.4–80.4) in 20–29 year olds to 16.1% (CI, 14.9–17.3) in 60–69 year olds. In multivariable logistic regression analyses restricted to subjects aged 40+ years, myopia was associated with younger age (odds ratio [OR], 0.94; 95% Confidence Interval [CI], 0.93-0.94, p < 0.001), education level of university or higher (OR, 2.31; CI, 1.97–2.71, p < 0.001), and shorter sunlight exposure time (OR, 0.84; CI, 0.76–0.93, p = 0.002). Conclusions This study provides the first representative population-based data on refractive error for Korean adults. The prevalence of myopia in Korean adults in 40+ years (34.7%) was comparable to that in other Asian countries. These results show that the younger generations in Korea are much more myopic than previous generations, and that important factors associated with this increase are increased education levels and reduced sunlight exposures. PMID:24224049
Kim, Eun Chul; Morgan, Ian G; Kakizaki, Hirohiko; Kang, Seungbum; Jee, Donghyun
2013-01-01
To examine the prevalence and risk factors of refractive errors in a representative Korean population aged 20 years old or older. A total of 23,392 people aged 20+ years were selected for the Korean National Health and Nutrition Survey 2008-2011, using stratified, multistage, clustered sampling. Refractive error was measured by autorefraction without cycloplegia, and interviews were performed regarding associated risk factors including gender, age, height, education level, parent's education level, economic status, light exposure time, and current smoking history. Of 23,392 participants, refractive errors were examined in 22,562 persons, including 21,356 subjects with phakic eyes. The overall prevalences of myopia (< -0.5 D), high myopia (< -6.0 D), and hyperopia (> 0.5 D) were 48.1% (95% confidence interval [CI], 47.4-48.8), 4.0% (CI, 3.7-4.3), and 24.2% (CI, 23.6-24.8), respectively. The prevalence of myopia sharply decreased from 78.9% (CI, 77.4-80.4) in 20-29 year olds to 16.1% (CI, 14.9-17.3) in 60-69 year olds. In multivariable logistic regression analyses restricted to subjects aged 40+ years, myopia was associated with younger age (odds ratio [OR], 0.94; 95% Confidence Interval [CI], 0.93-0.94, p < 0.001), education level of university or higher (OR, 2.31; CI, 1.97-2.71, p < 0.001), and shorter sunlight exposure time (OR, 0.84; CI, 0.76-0.93, p = 0.002). This study provides the first representative population-based data on refractive error for Korean adults. The prevalence of myopia in Korean adults in 40+ years (34.7%) was comparable to that in other Asian countries. These results show that the younger generations in Korea are much more myopic than previous generations, and that important factors associated with this increase are increased education levels and reduced sunlight exposures.
Tarone, Aaron M; Foran, David R
2008-07-01
Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.
On the equivalence of Gaussian elimination and Gauss-Jordan reduction in solving linear equations
NASA Technical Reports Server (NTRS)
Tsao, Nai-Kuan
1989-01-01
A novel general approach to round-off error analysis using the error complexity concepts is described. This is applied to the analysis of the Gaussian Elimination and Gauss-Jordan scheme for solving linear equations. The results show that the two algorithms are equivalent in terms of our error complexity measures. Thus the inherently parallel Gauss-Jordan scheme can be implemented with confidence if parallel computers are available.
Local indicators of geocoding accuracy (LIGA): theory and application
Jacquez, Geoffrey M; Rommel, Robert
2009-01-01
Background Although sources of positional error in geographic locations (e.g. geocoding error) used for describing and modeling spatial patterns are widely acknowledged, research on how such error impacts the statistical results has been limited. In this paper we explore techniques for quantifying the perturbability of spatial weights to different specifications of positional error. Results We find that a family of curves describes the relationship between perturbability and positional error, and use these curves to evaluate sensitivity of alternative spatial weight specifications to positional error both globally (when all locations are considered simultaneously) and locally (to identify those locations that would benefit most from increased geocoding accuracy). We evaluate the approach in simulation studies, and demonstrate it using a case-control study of bladder cancer in south-eastern Michigan. Conclusion Three results are significant. First, the shape of the probability distributions of positional error (e.g. circular, elliptical, cross) has little impact on the perturbability of spatial weights, which instead depends on the mean positional error. Second, our methodology allows researchers to evaluate the sensitivity of spatial statistics to positional accuracy for specific geographies. This has substantial practical implications since it makes possible routine sensitivity analysis of spatial statistics to positional error arising in geocoded street addresses, global positioning systems, LIDAR and other geographic data. Third, those locations with high perturbability (most sensitive to positional error) and high leverage (that contribute the most to the spatial weight being considered) will benefit the most from increased positional accuracy. These are rapidly identified using a new visualization tool we call the LIGA scatterplot. Herein lies a paradox for spatial analysis: For a given level of positional error increasing sample density to more accurately follow the underlying population distribution increases perturbability and introduces error into the spatial weights matrix. In some studies positional error may not impact the statistical results, and in others it might invalidate the results. We therefore must understand the relationships between positional accuracy and the perturbability of the spatial weights in order to have confidence in a study's results. PMID:19863795
Test-Enhanced Learning in Competence-Based Predoctoral Orthodontics: A Four-Year Study.
Freda, Nicolas M; Lipp, Mitchell J
2016-03-01
Dental educators intend to promote integration of knowledge, skills, and values toward professional competence. Studies report that retrieval, in the form of testing, results in better learning with retention than traditional studying. The aim of this study was to evaluate test-enhanced experiences on demonstrations of competence in diagnosis and management of malocclusion and skeletal problems. The study participants were all third-year dental students (2011 N=88, 2012 N=74, 2013 N=91, 2014 N=85) at New York University College of Dentistry. The 2013 and 2014 groups received the test-enhanced method emphasizing formative assessments with written and dialogic delayed feedback, while the 2011 and 2012 groups received the traditional approach emphasizing lectures and classroom exercises. The students received six two-hour sessions, spaced one week apart. At the final session, a summative assessment consisting of the same four cases was administered. Students constructed a problem list, treatment objectives, and a treatment plan for each case, scored according to the same criteria. Grades were based on the number of cases without critical errors: A=0 critical errors on four cases, A-=0 critical errors on three cases, B+=0 critical errors on two cases, B=0 critical errors on one case, F=critical errors on four cases. Performance grades were categorized as high quality (B+, A-, A) and low quality (F, B). The results showed that the test-enhanced groups demonstrated statistically significant benefits at 95% confidence intervals compared to the traditional groups when comparing low- and high-quality grades. These performance trends support the continued use of the test-enhanced approach.
Empirical State Error Covariance Matrix for Batch Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joe
2015-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple, two observer and measurement error only problem.
ERIC Educational Resources Information Center
Onwuegbuzie, Anthony J.; Daniel, Larry G.
The purposes of this paper are to identify common errors made by researchers when dealing with reliability coefficients and to outline best practices for reporting and interpreting reliability coefficients. Common errors that researchers make are: (1) stating that the instruments are reliable; (2) incorrectly interpreting correlation coefficients;…
NASA Technical Reports Server (NTRS)
Lienert, Barry R.
1991-01-01
Monte Carlo perturbations of synthetic tensors to evaluate the Hext/Jelinek elliptical confidence regions for anisotropy of magnetic susceptibility (AMS) eigenvectors are used. When the perturbations are 33 percent of the minimum anisotropy, both the shapes and probability densities of the resulting eigenvector distributions agree with the elliptical distributions predicted by the Hext/Jelinek equations. When the perturbation size is increased to 100 percent of the minimum eigenvalue difference, the major axis of the 95 percent confidence ellipse underestimates the observed eigenvector dispersion by about 10 deg. The observed distributions of the principal susceptibilities (eigenvalues) are close to being normal, with standard errors that agree well with the calculated Hext/Jelinek errors. The Hext/Jelinek ellipses are also able to describe the AMS dispersions due to instrumental noise and provide reasonable limits for the AMS dispersions observed in two Hawaiian basaltic dikes. It is concluded that the Hext/Jelinek method provides a satisfactory description of the errors in AMS data and should be a standard part of any AMS data analysis.
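A bare-bones version of the Monte Carlo procedure described here, with an assumed synthetic susceptibility tensor, is sketched below: perturb the tensor elements, recompute the eigenvectors, and examine the angular scatter of the principal axis. All values are invented for illustration and the dispersion is not compared against the Hext/Jelinek ellipses themselves.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic AMS tensor (symmetric, eigenvalues 1.05, 1.00, 0.95).
T = np.diag([1.05, 1.00, 0.95])
min_aniso = 0.05                        # minimum eigenvalue difference
noise_scale = 0.33 * min_aniso          # perturbation = 33% of minimum anisotropy

ref_axis = np.linalg.eigh(T)[1][:, -1]  # eigenvector of the largest eigenvalue

angles = []
for _ in range(5000):
    E = rng.normal(scale=noise_scale, size=(3, 3))
    E = (E + E.T) / 2                   # keep the perturbed tensor symmetric
    v = np.linalg.eigh(T + E)[1][:, -1]
    cosang = abs(np.dot(v, ref_axis))   # sign of an eigenvector is arbitrary
    angles.append(np.degrees(np.arccos(np.clip(cosang, -1, 1))))

print(f"mean angular deviation of the maximum axis: {np.mean(angles):.1f} deg")
print(f"95th percentile: {np.percentile(angles, 95):.1f} deg")
```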
Accurate and predictive antibody repertoire profiling by molecular amplification fingerprinting.
Khan, Tarik A; Friedensohn, Simon; Gorter de Vries, Arthur R; Straszewski, Jakub; Ruscheweyh, Hans-Joachim; Reddy, Sai T
2016-03-01
High-throughput antibody repertoire sequencing (Ig-seq) provides quantitative molecular information on humoral immunity. However, Ig-seq is compromised by biases and errors introduced during library preparation and sequencing. By using synthetic antibody spike-in genes, we determined that primer bias from multiplex polymerase chain reaction (PCR) library preparation resulted in antibody frequencies with only 42 to 62% accuracy. Additionally, Ig-seq errors resulted in antibody diversity measurements being overestimated by up to 5000-fold. To rectify this, we developed molecular amplification fingerprinting (MAF), which uses unique molecular identifier (UID) tagging before and during multiplex PCR amplification, which enabled tagging of transcripts while accounting for PCR efficiency. Combined with a bioinformatic pipeline, MAF bias correction led to measurements of antibody frequencies with up to 99% accuracy. We also used MAF to correct PCR and sequencing errors, resulting in enhanced accuracy of full-length antibody diversity measurements, achieving 98 to 100% error correction. Using murine MAF-corrected data, we established a quantitative metric of recent clonal expansion-the intraclonal diversity index-which measures the number of unique transcripts associated with an antibody clone. We used this intraclonal diversity index along with antibody frequencies and somatic hypermutation to build a logistic regression model for prediction of the immunological status of clones. The model was able to predict clonal status with high confidence but only when using MAF error and bias corrected Ig-seq data. Improved accuracy by MAF provides the potential to greatly advance Ig-seq and its utility in immunology and biotechnology.
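The core of the MAF idea, tagging each transcript with a unique molecular identifier (UID) so that PCR and sequencing errors can be collapsed to a consensus, can be sketched as below. The reads and UIDs are fabricated; a real pipeline also corrects UID errors and models amplification bias, which this toy example omits.

```python
from collections import Counter, defaultdict

# Fabricated reads tagged with their UID (unique molecular identifier).
reads = [
    ("UID1", "ACGTAC"), ("UID1", "ACGTAC"), ("UID1", "ACTTAC"),  # one PCR error
    ("UID2", "GGCTAA"), ("UID2", "GGCTAA"),
]

by_uid = defaultdict(list)
for uid, seq in reads:
    by_uid[uid].append(seq)

def consensus(seqs):
    """Majority vote per position across reads sharing a UID."""
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*seqs))

for uid, seqs in by_uid.items():
    print(uid, consensus(seqs), f"({len(seqs)} reads)")
```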
The prevalence of refractive errors in 6- to 15-year-old schoolchildren in Dezful, Iran.
Norouzirad, Reza; Hashemi, Hassan; Yekta, Abbasali; Nirouzad, Fereidon; Ostadimoghaddam, Hadi; Yazdani, Negareh; Dadbin, Nooshin; Javaherforoushzadeh, Ali; Khabazkhoob, Mehdi
2015-01-01
To determine the prevalence of refractive errors among 6- to 15-year-old schoolchildren in the city of Dezful in western Iran. In this cross-sectional study, 1375 Dezful schoolchildren were selected through multistage cluster sampling. After obtaining written consent, participants had uncorrected and corrected visual acuity tests and cycloplegic refraction at the school site. Refractive errors were defined as myopia [spherical equivalent (SE) ≤ -0.5 diopter (D)], hyperopia (SE ≥ 2.0D), and astigmatism (cylinder error > 0.5D). A total of 1151 (83.7%) schoolchildren participated in the study. Of these, 1130 completed their examinations; 21 individuals were excluded because of poor cooperation and contraindication for cycloplegic refraction. The prevalences of myopia, hyperopia, and astigmatism were 14.9% (95% confidence interval (CI): 10.1-19.6), 12.9% (95% CI: 7.2-18.6), and 45.3% (95% CI: 40.3-50.3), respectively. Multiple logistic regression analysis showed an age-related increase in myopia prevalence (p < 0.001) and a decrease in hyperopia prevalence (p < 0.001). There was a higher prevalence of myopia in boys (p < 0.001) and hyperopia in girls (p = 0.007). This study showed a considerably high prevalence of refractive errors among the Iranian population of schoolchildren in Dezful in the west of Iran. The prevalence of myopia is considerably high compared to previous studies in Iran and increases with age.
Accurate and predictive antibody repertoire profiling by molecular amplification fingerprinting
Khan, Tarik A.; Friedensohn, Simon; de Vries, Arthur R. Gorter; Straszewski, Jakub; Ruscheweyh, Hans-Joachim; Reddy, Sai T.
2016-01-01
High-throughput antibody repertoire sequencing (Ig-seq) provides quantitative molecular information on humoral immunity. However, Ig-seq is compromised by biases and errors introduced during library preparation and sequencing. By using synthetic antibody spike-in genes, we determined that primer bias from multiplex polymerase chain reaction (PCR) library preparation resulted in antibody frequencies with only 42 to 62% accuracy. Additionally, Ig-seq errors resulted in antibody diversity measurements being overestimated by up to 5000-fold. To rectify this, we developed molecular amplification fingerprinting (MAF), which uses unique molecular identifier (UID) tagging before and during multiplex PCR amplification, which enabled tagging of transcripts while accounting for PCR efficiency. Combined with a bioinformatic pipeline, MAF bias correction led to measurements of antibody frequencies with up to 99% accuracy. We also used MAF to correct PCR and sequencing errors, resulting in enhanced accuracy of full-length antibody diversity measurements, achieving 98 to 100% error correction. Using murine MAF-corrected data, we established a quantitative metric of recent clonal expansion—the intraclonal diversity index—which measures the number of unique transcripts associated with an antibody clone. We used this intraclonal diversity index along with antibody frequencies and somatic hypermutation to build a logistic regression model for prediction of the immunological status of clones. The model was able to predict clonal status with high confidence but only when using MAF error and bias corrected Ig-seq data. Improved accuracy by MAF provides the potential to greatly advance Ig-seq and its utility in immunology and biotechnology. PMID:26998518
Gawęda, Ł; Li, E; Lavoie, S; Whitford, T J; Moritz, S; Nelson, B
2018-01-01
Self-monitoring biases and overconfidence in incorrect judgments have been suggested as playing a role in schizophrenia spectrum disorders. Little is known about whether self-monitoring biases may contribute to early risk factors for psychosis. In this study, action self-monitoring (i.e., discrimination between imagined and performed actions) was investigated, along with confidence in judgments among ultra-high risk (UHR) for psychosis individuals and first-episode psychosis (FEP) patients. Thirty-six UHR for psychosis individuals, 25 FEP patients and 33 healthy controls (CON) participated in the study. Participants were assessed with the Action memory task. Simple actions were presented to participants verbally or non-verbally. Some actions were required to be physically performed and others were imagined. Participants were asked whether the action was presented verbally or non-verbally (action presentation type discrimination), and whether the action was performed or imagined (self-monitoring). Confidence self-ratings related to self-monitoring responses were obtained. The analysis of self-monitoring revealed that both UHR and FEP groups misattributed imagined actions as being performed (i.e., self-monitoring errors) significantly more often than the CON group. There were no differences regarding performed actions as being imagined. UHR and FEP groups made their false responses with higher confidence in their judgments than the CON group. There were no group differences regarding discrimination between the types of actions presented (verbal vs non-verbal). A specific type of self-monitoring bias (i.e., misattributing imagined actions with performed actions), accompanied by high confidence in this judgment, may be a risk factor for the subsequent development of a psychotic disorder. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.
Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
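For the balanced one-factor random-effects model referred to above, the ANOVA-based estimators have a simple closed form: the random (within-patient) variance is the within-group mean square, and the systematic (between-patient) variance is (MSB - MSW)/m, where m is the number of fractions per patient. The sketch below applies these formulas to simulated setup errors; the data and the specific interpretation of "patient" as the grouping factor are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated setup errors (mm): n patients x m fractions, balanced design.
n, m = 20, 10
sigma_sys, sigma_rand = 1.5, 2.0
patient_means = rng.normal(0.0, sigma_sys, size=n)
errors = patient_means[:, None] + rng.normal(0.0, sigma_rand, size=(n, m))

grand_mean = errors.mean()
msb = m * np.sum((errors.mean(axis=1) - grand_mean) ** 2) / (n - 1)          # between
msw = np.sum((errors - errors.mean(axis=1, keepdims=True)) ** 2) / (n * (m - 1))

sigma_rand_hat = np.sqrt(msw)
sigma_sys_hat = np.sqrt(max(msb - msw, 0.0) / m)
print(f"population mean = {grand_mean:.2f} mm")
print(f"systematic SD = {sigma_sys_hat:.2f} mm, random SD = {sigma_rand_hat:.2f} mm")
```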
Trial Sequential Analysis in systematic reviews with meta-analysis.
Wetterslev, Jørn; Jakobsen, Janus Christian; Gluud, Christian
2017-03-06
Most meta-analyses in systematic reviews, including Cochrane ones, do not have sufficient statistical power to detect or refute even large intervention effects. This is why a meta-analysis ought to be regarded as an interim analysis on its way towards a required information size. The results of the meta-analyses should relate the total number of randomised participants to the estimated required meta-analytic information size accounting for statistical diversity. When the number of participants and the corresponding number of trials in a meta-analysis are insufficient, the use of the traditional 95% confidence interval or the 5% statistical significance threshold will lead to too many false positive conclusions (type I errors) and too many false negative conclusions (type II errors). We developed a methodology for interpreting meta-analysis results, using generally accepted, valid evidence on how to adjust thresholds for significance in randomised clinical trials when the required sample size has not been reached. The Lan-DeMets trial sequential monitoring boundaries in Trial Sequential Analysis offer adjusted confidence intervals and restricted thresholds for statistical significance when the diversity-adjusted required information size and the corresponding number of required trials for the meta-analysis have not been reached. Trial Sequential Analysis provides a frequentistic approach to control both type I and type II errors. We define the required information size and the corresponding number of required trials in a meta-analysis and the diversity (D²) measure of heterogeneity. We explain the reasons for using Trial Sequential Analysis of meta-analysis when the actual information size fails to reach the required information size. We present examples drawn from traditional meta-analyses using unadjusted naïve 95% confidence intervals and 5% thresholds for statistical significance. Spurious conclusions in systematic reviews with traditional meta-analyses can be reduced using Trial Sequential Analysis. Several empirical studies have demonstrated that the Trial Sequential Analysis provides better control of type I errors and of type II errors than the traditional naïve meta-analysis. Trial Sequential Analysis represents analysis of meta-analytic data, with transparent assumptions, and better control of type I and type II errors than the traditional meta-analysis using naïve unadjusted confidence intervals.
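The diversity-adjusted required information size for a dichotomous outcome can be sketched from the standard two-group sample-size formula, inflated by 1/(1 - D²). The sketch below uses assumed values for the control-event proportion, relative risk reduction, alpha, beta, and diversity; it illustrates the arithmetic only and is not the TSA software.

```python
from scipy.stats import norm

# Assumed design parameters for the illustration.
p_control = 0.20          # control-group event proportion
rrr = 0.25                # anticipated relative risk reduction
alpha, beta = 0.05, 0.10  # type I and type II error risks
diversity = 0.40          # D^2, diversity across the included trials

p_exp = p_control * (1 - rrr)
z_a = norm.ppf(1 - alpha / 2)
z_b = norm.ppf(1 - beta)

n_per_group = (z_a + z_b) ** 2 * (p_control * (1 - p_control)
                                  + p_exp * (1 - p_exp)) / (p_control - p_exp) ** 2
ris = 2 * n_per_group                      # required information size
dris = ris / (1 - diversity)               # diversity-adjusted RIS
print(f"RIS = {ris:.0f} participants, diversity-adjusted RIS = {dris:.0f}")
```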
Confidence estimation for quantitative photoacoustic imaging
NASA Astrophysics Data System (ADS)
Gröhl, Janek; Kirchner, Thomas; Maier-Hein, Lena
2018-02-01
Quantification of photoacoustic (PA) images is one of the major challenges currently being addressed in PA research. Tissue properties can be quantified by correcting the recorded PA signal with an estimation of the corresponding fluence. Fluence estimation itself, however, is an ill-posed inverse problem which usually needs simplifying assumptions to be solved with state-of-the-art methods. These simplifications, as well as noise and artifacts in PA images reduce the accuracy of quantitative PA imaging (PAI). This reduction in accuracy is often localized to image regions where the assumptions do not hold true. This impedes the reconstruction of functional parameters when averaging over entire regions of interest (ROI). Averaging over a subset of voxels with a high accuracy would lead to an improved estimation of such parameters. To achieve this, we propose a novel approach to the local estimation of confidence in quantitative reconstructions of PA images. It makes use of conditional probability densities to estimate confidence intervals alongside the actual quantification. It encapsulates an estimation of the errors introduced by fluence estimation as well as signal noise. We validate the approach using Monte Carlo generated data in combination with a recently introduced machine learning-based approach to quantitative PAI. Our experiments show at least a two-fold improvement in quantification accuracy when evaluating on voxels with high confidence instead of thresholding signal intensity.
NASA Astrophysics Data System (ADS)
Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.
2016-12-01
Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error is commonly reported, based on replicate plots, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate the declining importance in the prediction of individual concentrations as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (or the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty in scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean will lead to exaggerated estimates of confidence in estimates of forest biomass and carbon and nutrient contents.
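The distinction drawn here between uncertainty in the mean (confidence interval) and variation among individuals (prediction interval) is easy to reproduce for a generic regression. The sketch below uses simulated log-log diameter-mass data rather than the authors' allometric data, and shows both intervals from an ordinary least squares fit.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Simulated log-log allometric data: ln(mass) vs ln(diameter).
log_d = rng.uniform(2.0, 4.0, size=80)
log_m = -2.0 + 2.4 * log_d + rng.normal(scale=0.3, size=80)

X = sm.add_constant(log_d)
fit = sm.OLS(log_m, X).fit()

new_d = sm.add_constant(np.array([2.5, 3.5]))
pred = fit.get_prediction(new_d).summary_frame(alpha=0.05)
# mean_ci_* = confidence interval for the mean; obs_ci_* = prediction interval
print(pred[["mean", "mean_ci_lower", "mean_ci_upper",
            "obs_ci_lower", "obs_ci_upper"]])
```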
Crack Growth Properties of Sealing Glasses
NASA Technical Reports Server (NTRS)
Salem, Jonathan A.; Tandon, R.
2008-01-01
The crack growth properties of several sealing glasses were measured using constant stress rate testing in 2% and 95% RH (relative humidity). The crack growth parameters (n and B) measured in high humidity are systematically smaller than those measured in low humidity, and velocities for dry environments are approx. 100x lower than for wet environments. The crack velocity is very sensitive to small changes in RH at low RH. Confidence intervals on parameters that were estimated from propagation of errors were comparable to those from Monte Carlo simulation.
Adoption of electronic health records and barriers
Palabindala, Venkataraman; Pamarthy, Amaleswari; Jonnalagadda, Nageshwar Reddy
2016-01-01
Electronic health records (EHR) are not a new idea in the U.S. medical system, but surprisingly there has been very slow adoption of fully integrated EHR systems in practice in both primary care settings and within hospitals. For those who have invested in EHR, physicians report high levels of satisfaction and confidence in the reliability of their system. There is also consensus that EHR can improve patient care, promote safe practice, and enhance communication between patients and multiple providers, reducing the risk of error. As EHR implementation continues in hospitals, administrative and physician leadership must actively investigate all of the potential risks for medical error, system failure, and legal responsibility before moving forward. Ensuring that physicians are aware of their responsibilities in relation to their charting practices and the depth of information available within an EHR system is crucial for minimizing the risk of malpractice and lawsuit. Hospitals must commit to regular system upgrading and corresponding training for all users to reduce the risk of error and adverse events. PMID:27802857
The Influence of Training Phase on Error of Measurement in Jump Performance.
Taylor, Kristie-Lee; Hopkins, Will G; Chapman, Dale W; Cronin, John B
2016-03-01
The purpose of this study was to calculate the coefficients of variation in jump performance for individual participants in multiple trials over time to determine the extent to which there are real differences in the error of measurement between participants. The effect of training phase on measurement error was also investigated. Six subjects participated in a resistance-training intervention for 12 wk with mean power from a countermovement jump measured 6 d/wk. Using a mixed-model meta-analysis, differences between subjects, within-subject changes between training phases, and the mean error values during different phases of training were examined. Small, substantial factor differences of 1.11 were observed between subjects; however, the finding was unclear based on the width of the confidence limits. The mean error was clearly higher during overload training than baseline training, by a factor of ×/÷ 1.3 (confidence limits 1.0-1.6). The random factor representing the interaction between subjects and training phases revealed further substantial differences of ×/÷ 1.2 (1.1-1.3), indicating that on average, the error of measurement in some subjects changes more than in others when overload training is introduced. The results from this study provide the first indication that within-subject variability in performance is substantially different between training phases and, possibly, different between individuals. The implications of these findings for monitoring individuals and estimating sample size are discussed.
Laurent, Alexandra; Aubert, Laurence; Chahraoui, Khadija; Bioy, Antoine; Mariage, André; Quenot, Jean-Pierre; Capellier, Gilles
2014-11-01
To identify the psychological repercussions of an error on professionals in intensive care and to understand their evolution. To identify the psychological defense mechanisms used by professionals to cope with error. Qualitative study with clinical interviews. We transcribed recordings and analysed the data using an interpretative phenomenological analysis. Two ICUs in the teaching hospitals of Besançon and Dijon (France). Forty professionals in intensive care (20 physicians and 20 nurses). None. We conducted 40 individual semistructured interviews. The participants were invited to speak about the experience of error in ICU. The interviews were transcribed and analyzed thematically by three experts. In the month following the error, the professionals described feelings of guilt (53.8%) and shame (42.5%). These feelings were associated with anxiety states with rumination (37.5%) and fear for the patient (23%); a loss of confidence (32.5%); an inability to verbalize one's error (22.5%); questioning oneself at a professional level (20%); and anger toward the team (15%). In the long term, the error remains fixed in memory for many of the subjects (80%); on one hand, for 72.5%, it was associated with an increase in vigilance and verifications in their professional practice, and on the other hand, for three professionals, it was associated with a loss of confidence. Finally, three professionals felt guilt which still persisted at the time of the interview. We also observed different defense mechanisms implemented by the professional to fight against the emotional load inherent in the error: verbalization (70%), developing skills and knowledge (43%), rejecting responsibility (32.5%), and avoidance (23%). We also observed a minimization (60%) of the error during the interviews. It is important to take into account the psychological experience of error and the defense mechanisms developed following an error because they appear to determine the professional's capacity to acknowledge and disclose his/her error and to learn from it.
Predictability of Bristol Bay, Alaska, sockeye salmon returns one to four years in the future
Adkison, Milo D.; Peterson, R.M.
2000-01-01
Historically, forecast error for returns of sockeye salmon Oncorhynchus nerka to Bristol Bay, Alaska, has been large. Using cross-validation forecast error as our criterion, we selected forecast models for each of the nine principal Bristol Bay drainages. Competing forecast models included stock-recruitment relationships, environmental variables, prior returns of siblings, or combinations of these predictors. For most stocks, we found prior returns of siblings to be the best single predictor of returns; however, forecast accuracy was low even when multiple predictors were considered. For a typical drainage, an 80% confidence interval ranged from one half to double the point forecast. These confidence intervals appeared to be appropriately wide.
Consistent Tolerance Bounds for Statistical Distributions
NASA Technical Reports Server (NTRS)
Mezzacappa, M. A.
1983-01-01
Assumption that sample comes from population with particular distribution is made with confidence C if data lie between certain bounds. These "confidence bounds" depend on C and assumption about distribution of sampling errors around regression line. Graphical test criteria using tolerance bounds are applied in industry where statistical analysis influences product development and use. Applied to evaluate equipment life.
An Empirical State Error Covariance Matrix for Batch State Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques inspire only limited confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next, it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it follows directly how to determine the empirical state error covariance matrix. This matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty. Also, in its most straightforward form, the technique requires only supplemental calculations added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple empirical sample-variance problem, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal (variance) terms of the error covariance matrix have a particularly simple form that can be associated with either a multiple-degree-of-freedom chi-square distribution (more approximate) or a gamma distribution (less approximate). The off-diagonal (covariance) terms of the matrix are less clear in their statistical behavior, but they still lend themselves to standard confidence interval error analysis; the distributional forms associated with the off-diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained with the proposed technique are presented. The example consists of a two-observer triangulation problem with range-only measurements.
Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
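To make the residual-based idea above concrete, the following sketch shows one common way to form an empirically scaled state error covariance from batch weighted least squares residuals. It is an illustration in the spirit of the abstract rather than the author's exact construction; the function name and the reduced-chi-square style scaling are assumptions.

```python
import numpy as np

def wls_with_empirical_covariance(H, y, W):
    """Batch weighted least squares with an empirically scaled state covariance.

    H : (m, n) measurement/design matrix
    y : (m,)   observations
    W : (m, m) measurement weight matrix (typically the inverse measurement covariance)
    """
    # Normal equations: (H^T W H) x = H^T W y
    N = H.T @ W @ H
    x_hat = np.linalg.solve(N, H.T @ W @ y)

    # Formal covariance: only the assumed observation errors mapped into state space
    P_formal = np.linalg.inv(N)

    # Measurement residuals carry *all* error sources affecting each measurement
    r = y - H @ x_hat
    m, n = H.shape
    # Average weighted residual variance (reduced chi-square style scale factor)
    s2 = (r @ W @ r) / (m - n)

    # Empirical covariance: residual-informed rescaling of the formal covariance
    P_empirical = s2 * P_formal
    return x_hat, P_formal, P_empirical
```

For a well-modeled problem the scale factor is close to 1 and the two matrices agree; unmodeled error sources inflate the residuals and hence the empirical covariance.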
Correlation Attenuation Due to Measurement Error: A New Approach Using the Bootstrap Procedure
ERIC Educational Resources Information Center
Padilla, Miguel A.; Veprinsky, Anna
2012-01-01
Issues with correlation attenuation due to measurement error are well documented. More than a century ago, Spearman proposed a correction for attenuation. However, this correction has seen very little use since it can potentially inflate the true correlation beyond one. In addition, very little confidence interval (CI) research has been done for…
Random Error in Judgment: The Contribution of Encoding and Retrieval Processes
ERIC Educational Resources Information Center
Pleskac, Timothy J.; Dougherty, Michael R.; Rivadeneira, A. Walkyria; Wallsten, Thomas S.
2009-01-01
Theories of confidence judgments have embraced the role random error plays in influencing responses. An important next step is to identify the source(s) of these random effects. To do so, we used the stochastic judgment model (SJM) to distinguish the contribution of encoding and retrieval processes. In particular, we investigated whether dividing…
Impacts of uncertainties in European gridded precipitation observations on regional climate analysis
Gobiet, Andreas
2016-01-01
ABSTRACT Gridded precipitation data sets are frequently used to evaluate climate models or to remove model output biases. Although precipitation data are error prone due to the high spatio‐temporal variability of precipitation and due to considerable measurement errors, relatively few attempts have been made to account for observational uncertainty in model evaluation or in bias correction studies. In this study, we compare three types of European daily data sets featuring two Pan‐European data sets and a set that combines eight very high‐resolution station‐based regional data sets. Furthermore, we investigate seven widely used, larger scale global data sets. Our results demonstrate that the differences between these data sets have the same magnitude as precipitation errors found in regional climate models. Therefore, including observational uncertainties is essential for climate studies, climate model evaluation, and statistical post‐processing. Following our results, we suggest the following guidelines for regional precipitation assessments. (1) Include multiple observational data sets from different sources (e.g. station, satellite, reanalysis based) to estimate observational uncertainties. (2) Use data sets with high station densities to minimize the effect of precipitation undersampling (may induce about 60% error in data sparse regions). The information content of a gridded data set is mainly related to its underlying station density and not to its grid spacing. (3) Consider undercatch errors of up to 80% in high latitudes and mountainous regions. (4) Analyses of small‐scale features and extremes are especially uncertain in gridded data sets. For higher confidence, use climate‐mean and larger scale statistics. In conclusion, neglecting observational uncertainties potentially misguides climate model development and can severely affect the results of climate change impact assessments. PMID:28111497
Patient safety awareness among Undergraduate Medical Students in Pakistani Medical School
Kamran, Rizwana; Bari, Attia; Khan, Rehan Ahmed; Al-Eraky, Mohamed
2018-01-01
Objective: To measure the level of awareness of patient safety among undergraduate medical students in Pakistani Medical School and to find the difference with respect to gender and prior experience with medical error. Methods: This cross-sectional study was conducted at the University of Lahore (UOL), Pakistan from January to March 2017, and comprised final year medical students. Data was collected using a questionnaire ‘APSQ- III’ on 7 point Likert scale. Eight questions were reverse coded. Survey was anonymous. SPSS package 20 was used for statistical analysis. Results: Questionnaire was filled by 122 students, with 81% response rate. The best score 6.17 was given for the ‘team functioning’, followed by 6.04 for ‘long working hours as a cause of medical error’. The domains regarding involvement of patient, confidence to report medical errors and role of training and learning on patient safety scored high in the agreed range of >5. Reverse coded questions about ‘professional incompetence as an error cause’ and ‘disclosure of errors’ showed negative perception. No significant differences of perceptions were found with respect to gender and prior experience with medical error (p= >0.05). Conclusion: Undergraduate medical students at UOL had a positive attitude towards patient safety. However, there were misconceptions about causes of medical errors and error disclosure among students and patient safety education needs to be incorporated in medical curriculum of Pakistan. PMID:29805398
Posterior error probability in the Mu-11 Sequential Ranging System
NASA Technical Reports Server (NTRS)
Coyle, C. W.
1981-01-01
An expression is derived for the posterior error probability in the Mu-2 Sequential Ranging System. An algorithm is developed which closely bounds the exact answer and can be implemented in the machine software. A computer simulation is provided to illustrate the improved level of confidence in a ranging acquisition using this figure of merit as compared to that using only the prior probabilities. In a simulation of 20,000 acquisitions with an experimentally determined threshold setting, the algorithm detected 90% of the actual errors and made false indication of errors on 0.2% of the acquisitions.
Too good to be true: when overwhelming evidence fails to convince.
Gunn, Lachlan J; Chapeau-Blondeau, François; McDonnell, Mark D; Davis, Bruce R; Allison, Andrew; Abbott, Derek
2016-03-01
Is it possible for a large sequence of measurements or observations, which support a hypothesis, to counterintuitively decrease our confidence? Can unanimous support be too good to be true? The assumption of independence is often made in good faith; however, rarely is consideration given to whether a systemic failure has occurred. Taking this into account can cause certainty in a hypothesis to decrease as the evidence for it becomes apparently stronger. We perform a probabilistic Bayesian analysis of this effect with examples based on (i) archaeological evidence, (ii) weighing of legal evidence and (iii) cryptographic primality testing. In this paper, we investigate the effects of small error rates in a set of measurements or observations. We find that even with very low systemic failure rates, high confidence is surprisingly difficult to achieve; in particular, we find that certain analyses of cryptographically important numerical tests are highly optimistic, underestimating their false-negative rate by as much as a factor of 2^80.
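A toy Bayesian calculation along these lines (illustrative numbers only, not the paper's examples) shows how allowing even a small probability of systemic failure caps the confidence that unanimous evidence can deliver:

```python
def posterior_given_unanimous(k, p_correct=0.99, p_systemic=1e-3, prior=0.5):
    """Posterior P(hypothesis | k unanimous positive results) when the test
    battery may have failed systemically and would then always report 'positive'."""
    like_if_true_ok = p_correct ** k          # all k tests correct, hypothesis true
    like_if_false_ok = (1 - p_correct) ** k   # all k tests wrong, hypothesis false
    like_if_broken = 1.0                      # a systemic failure yields k positives regardless

    num = prior * ((1 - p_systemic) * like_if_true_ok + p_systemic * like_if_broken)
    den = num + (1 - prior) * ((1 - p_systemic) * like_if_false_ok + p_systemic * like_if_broken)
    return num / den

for k in (1, 3, 10, 30, 100):
    print(k, round(posterior_given_unanimous(k), 5))
```

With these assumed rates the posterior rises over the first few unanimous results and then slowly declines back toward the prior, as the systemic-failure explanation overtakes the increasingly improbable run of independent successes.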
Detection and correction of prescription errors by an emergency department pharmacy service.
Stasiak, Philip; Afilalo, Marc; Castelino, Tanya; Xue, Xiaoqing; Colacone, Antoinette; Soucy, Nathalie; Dankoff, Jerrald
2014-05-01
Emergency departments (EDs) are recognized as a high-risk setting for prescription errors. Pharmacist involvement may be important in reviewing prescriptions to identify and correct errors. The objectives of this study were to describe the frequency and type of prescription errors detected by pharmacists in EDs, determine the proportion of errors that could be corrected, and identify factors associated with prescription errors. This prospective observational study was conducted in a tertiary care teaching ED on 25 consecutive weekdays. Pharmacists reviewed all documented prescriptions and flagged and corrected errors for patients in the ED. We collected information on patient demographics, details on prescription errors, and the pharmacists' recommendations. A total of 3,136 ED prescriptions were reviewed. The proportion of prescriptions in which a pharmacist identified an error was 3.2% (99 of 3,136; 95% confidence interval [CI] 2.5-3.8). The types of identified errors were wrong dose (28 of 99, 28.3%), incomplete prescription (27 of 99, 27.3%), wrong frequency (15 of 99, 15.2%), wrong drug (11 of 99, 11.1%), wrong route (1 of 99, 1.0%), and other (17 of 99, 17.2%). The pharmacy service intervened and corrected 78 (78 of 99, 78.8%) errors. Factors associated with prescription errors were patient age over 65 (odds ratio [OR] 2.34; 95% CI 1.32-4.13), prescriptions with more than one medication (OR 5.03; 95% CI 2.54-9.96), and those written by emergency medicine residents compared to attending emergency physicians (OR 2.21, 95% CI 1.18-4.14). Pharmacists in a tertiary ED are able to correct the majority of prescriptions in which they find errors. Errors are more likely to be identified in prescriptions written for older patients, those containing multiple medication orders, and those prescribed by emergency residents.
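As a quick check of the reported proportions, the normal-approximation (Wald) interval reproduces the 3.2% (95% CI 2.5-3.8) error rate from the counts given in the abstract; the snippet below is just that arithmetic.

```python
import math

errors, total = 99, 3136
p = errors / total
se = math.sqrt(p * (1 - p) / total)          # standard error of a proportion
lo, hi = p - 1.96 * se, p + 1.96 * se        # Wald 95% confidence interval
print(f"{100*p:.1f}% (95% CI {100*lo:.1f}-{100*hi:.1f}%)")  # 3.2% (95% CI 2.5-3.8%)
```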
ALLTEM UXO detection and discrimination
Asch, T.H.; Wright, D.L.; Moulton, C.W.; Irons, T.P.; Nabighian, M.N.
2008-01-01
ALLTEM is a multi-axis electromagnetic induction system designed for unexploded ordnance (UXO) applications. It uses a continuous triangle-wave excitation and provides good late-time signal-to-noise ratio (SNR) especially for ferrous targets. Multi-axis transmitter (Tx) and receiver (Rx) systems such as ALLTEM provide a richer data set from which to invert for the target parameters required to distinguish between clutter and UXO. Inversions of field data over the Army's UXO Calibration Grid and Blind Test Grid at the Yuma Proving Ground (YPG), Arizona in 2006 produced polarizability moment values for many buried UXO items that were reasonable and generally repeatable for targets of the same type buried at different orientations and depths. In 2007 a test stand was constructed that allows for collection of data with varying spatial data density and accurate automated position control. The behavior of inverted ALLTEM test stand data as a function of spatial data density, sensor SNR, and position error has been investigated. The results indicate that the ALLTEM inversion algorithm is more tolerant of sensor noise and position error than has been reported for single-axis systems. A high confidence level in inversion-derived target parameters is required when a target is declared to be harmless scrap metal that may safely be left in the ground. Unless high confidence can be demonstrated, state regulators will likely require that targets be dug regardless of any "no-dig" classifications produced from inversions, in which case remediation costs would not be decreased.
ERIC Educational Resources Information Center
Grandjean, Burke D.; Taylor, Patricia A.; Weiner, Jay
2002-01-01
During the women's all-around gymnastics final at the 2000 Olympics, the vault was inadvertently set 5 cm too low for a random half of the gymnasts. The error was widely viewed as undermining their confidence and subsequent performance. However, data from pretest and posttest scores on the vault, bars, beam, and floor indicated that the vault…
Confidence Limits for the Indirect Effect: Distribution of the Product and Resampling Methods
ERIC Educational Resources Information Center
MacKinnon, David P.; Lockwood, Chondra M.; Williams, Jason
2004-01-01
The most commonly used method to test an indirect effect is to divide the estimate of the indirect effect by its standard error and compare the resulting z statistic with a critical value from the standard normal distribution. Confidence limits for the indirect effect are also typically based on critical values from the standard normal…
The Dark and Bloody Mystery: Building Basic Writers' Confidence.
ERIC Educational Resources Information Center
Sledd, Robert
While the roots of students' fear of writing go deep, students fear most the surface of writing. They fear that a person's language indicates the state not only of the mind but of the soul--thus their writing can make them look stupid and morally depraved. This fear of error and lack of confidence prevent students from developing a command of the…
Simultaneous Inference For The Mean Function Based on Dense Functional Data
Cao, Guanqun; Yang, Lijian; Todem, David
2012-01-01
A polynomial spline estimator is proposed for the mean function of dense functional data together with a simultaneous confidence band which is asymptotically correct. In addition, the spline estimator and its accompanying confidence band enjoy oracle efficiency in the sense that they are asymptotically the same as if all random trajectories are observed entirely and without errors. The confidence band is also extended to the difference of mean functions of two populations of functional data. Simulation experiments provide strong evidence that corroborates the asymptotic theory while computing is efficient. The confidence band procedure is illustrated by analyzing the near infrared spectroscopy data. PMID:22665964
A Poisson process approximation for generalized K-S confidence regions
NASA Technical Reports Server (NTRS)
Arsham, H.; Miller, D. R.
1982-01-01
One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault tolerant systems.
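For comparison with the variable-width bands described above, a fixed-width one-sided band can be built from the classical Dvoretzky-Kiefer-Wolfowitz (DKW) inequality; the sketch below is that simpler construction, not the paper's Poisson-process approximation.

```python
import numpy as np

def one_sided_lower_band(sample, alpha=0.05):
    """One-sided DKW confidence band: F(x) >= ecdf(x) - eps with probability >= 1 - alpha."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    ecdf = np.arange(1, n + 1) / n
    eps = np.sqrt(np.log(1.0 / alpha) / (2.0 * n))  # one-sided DKW constant
    return x, np.clip(ecdf - eps, 0.0, 1.0)

# Example with simulated data
x, lower = one_sided_lower_band(np.random.default_rng(0).exponential(size=200))
```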
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mauk, F.J.; Christensen, D.H.
1980-09-01
Probabilistic estimations of earthquake detection and location capabilities for the states of Illinois, Indiana, Kentucky, Ohio and West Virginia are presented in this document. The algorithm used in these epicentrality and minimum-magnitude estimations is a version of the program NETWORTH by Wirth, Blandford, and Husted (DARPA Order No. 2551, 1978) which was modified for local array evaluation at the University of Michigan Seismological Observatory. Estimations of earthquake detection capability for the years 1970 and 1980 are presented in four regional minimum m_b magnitude contour maps. Regional 90% confidence error ellipsoids are included for m_b magnitude events from 2.0 through 5.0 at 0.5 m_b unit increments. The close agreement between these predicted epicentral 90% confidence estimates and the calculated error ellipses associated with actual earthquakes within the studied region suggests that these error determinations can be used to estimate the reliability of epicenter location. 8 refs., 14 figs., 2 tabs.
POWERLIB: SAS/IML Software for Computing Power in Multivariate Linear Models
Johnson, Jacqueline L.; Muller, Keith E.; Slaughter, James C.; Gurka, Matthew J.; Gribbin, Matthew J.; Simpson, Sean L.
2014-01-01
The POWERLIB SAS/IML software provides convenient power calculations for a wide range of multivariate linear models with Gaussian errors. The software includes the Box, Geisser-Greenhouse, Huynh-Feldt, and uncorrected tests in the “univariate” approach to repeated measures (UNIREP), the Hotelling Lawley Trace, Pillai-Bartlett Trace, and Wilks Lambda tests in “multivariate” approach (MULTIREP), as well as a limited but useful range of mixed models. The familiar univariate linear model with Gaussian errors is an important special case. For estimated covariance, the software provides confidence limits for the resulting estimated power. All power and confidence limits values can be output to a SAS dataset, which can be used to easily produce plots and tables for manuscripts. PMID:25400516
Sethuraman, Usha; Kannikeswaran, Nirupama; Murray, Kyle P; Zidan, Marwan A; Chamberlain, James M
2015-06-01
Prescription errors occur frequently in pediatric emergency departments (PEDs).The effect of computerized physician order entry (CPOE) with electronic medication alert system (EMAS) on these is unknown. The objective was to compare prescription errors rates before and after introduction of CPOE with EMAS in a PED. The hypothesis was that CPOE with EMAS would significantly reduce the rate and severity of prescription errors in the PED. A prospective comparison of a sample of outpatient, medication prescriptions 5 months before and after CPOE with EMAS implementation (7,268 before and 7,292 after) was performed. Error types and rates, alert types and significance, and physician response were noted. Medication errors were deemed significant if there was a potential to cause life-threatening injury, failure of therapy, or an adverse drug effect. There was a significant reduction in the errors per 100 prescriptions (10.4 before vs. 7.3 after; absolute risk reduction = 3.1, 95% confidence interval [CI] = 2.2 to 4.0). Drug dosing error rates decreased from 8 to 5.4 per 100 (absolute risk reduction = 2.6, 95% CI = 1.8 to 3.4). Alerts were generated for 29.6% of prescriptions, with 45% involving drug dose range checking. The sensitivity of CPOE with EMAS in identifying errors in prescriptions was 45.1% (95% CI = 40.8% to 49.6%), and the specificity was 57% (95% CI = 55.6% to 58.5%). Prescribers modified 20% of the dosing alerts, resulting in the error not reaching the patient. Conversely, 11% of true dosing alerts for medication errors were overridden by the prescribers: 88 (11.3%) resulted in medication errors, and 684 (88.6%) were false-positive alerts. A CPOE with EMAS was associated with a decrease in overall prescription errors in our PED. Further system refinements are required to reduce the high false-positive alert rates. © 2015 by the Society for Academic Emergency Medicine.
NASA Astrophysics Data System (ADS)
Basith, Abdul; Prakoso, Yudhono; Kongko, Widjo
2017-07-01
A tsunami model using high resolution geometric data is indispensable in efforts to tsunami mitigation, especially in tsunami prone areas. It is one of the factors that affect the accuracy results of numerical modeling of tsunami. Sadeng Port is a new infrastructure in the Southern Coast of Java which could potentially hit by massive tsunami from seismic gap. This paper discusses validation and error estimation of tsunami model created using high resolution geometric data in Sadeng Port. Tsunami model validation uses the height wave of Tsunami Pangandaran 2006 recorded by Tide Gauge of Sadeng. Tsunami model will be used to accommodate the tsunami numerical modeling involves the parameters of earthquake-tsunami which is derived from the seismic gap. The validation results using t-test (student) shows that the height of the tsunami modeling results and observation in Tide Gauge of Sadeng are considered statistically equal at 95% confidence level and the value of the RMSE and NRMSE are 0.428 m and 22.12%, while the differences of tsunami wave travel time is 12 minutes.
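The validation statistics quoted above (RMSE, NRMSE, and a t-test on modeled versus observed heights) can be computed with a few lines of standard scientific Python. This is a generic sketch; the function name and the range-based NRMSE normalization are assumptions rather than the authors' stated choices.

```python
import numpy as np
from scipy import stats

def validate_heights(modeled, observed):
    """Compare modeled and observed tsunami wave heights at matched times."""
    modeled = np.asarray(modeled, dtype=float)
    observed = np.asarray(observed, dtype=float)
    rmse = np.sqrt(np.mean((modeled - observed) ** 2))
    nrmse = rmse / (observed.max() - observed.min())      # normalization choice is an assumption
    t_stat, p_value = stats.ttest_rel(modeled, observed)  # paired (dependent) t-test
    return rmse, nrmse, p_value
```

A p-value above 0.05 would be consistent with the paper's conclusion that the modeled and observed heights are statistically indistinguishable at the 95% confidence level.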
Classroom discipline skills and disruption rate: A correlational study
NASA Astrophysics Data System (ADS)
Dropik, Melonie Jane
Very little has been done to quantify the relationship between the frequency with which teachers use discipline skills and disruption rate in high school settings. Most of the available research that examined this relationship empirically was done in elementary schools, while a few studies examined the junior high school setting. The present research examined whether the use of ten specific discipline skills were related to the rate of disruption in suburban high school science classrooms. The ten skills were selected based on their prevalence in the theoretical literature and the strength of the relationships reported in empirical studies of elementary and junior high classrooms. Each relationship was tested directionally at alpha = .01. The maximum experimentwise Type I error rate was .10. Disruption rate was measured by trained observers over five class periods in the Fall of the school year. The frequency of performing the ten skills was assessed using a student survey developed for this study. The ten skills were: (1) beginning class on time, (2) using routines, (3) waiting for student attention before speaking, (4) giving clear directions, (5) presenting material fast enough to hold students' attention, (6) requiring students to remain seated, (7) appearing confident, (8) stopping misbehavior quickly, (9) checking for student attentiveness, and (10) teaching to the bell. Appearing confident (r = --.697, p = .004) and quickly stopping misbehavior (r = --.709, p = .003) were significantly negatively related to disruption rate. The effect sizes for the confidence and stopping misbehavior variables were .49 and .50, respectively. At least half of the variation in disruption rate was attributable to the difference in the frequency of appearing confident and stopping misbehavior quickly. The eight other relationships produced nonsignificant results. The results raise questions about whether theories developed from observational and anecdotal evidence gathered in elementary or junior high school classrooms can be applied to high school classrooms and indicate that further investigation into the high school setting is necessary.
Accommodative Performance of Children With Unilateral Amblyopia
Manh, Vivian; Chen, Angela M.; Tarczy-Hornoch, Kristina; Cotter, Susan A.; Candy, T. Rowan
2015-01-01
Purpose. The purpose of this study was to compare the accommodative performance of the amblyopic eye of children with unilateral amblyopia to that of their nonamblyopic eye, and also to that of children without amblyopia, during both monocular and binocular viewing. Methods. Modified Nott retinoscopy was used to measure accommodative performance of 38 subjects with unilateral amblyopia and 25 subjects with typical vision from 3 to 13 years of age during monocular and binocular viewing at target distances of 50, 33, and 25 cm. The relationship between accommodative demand and interocular difference (IOD) in accommodative error was assessed in each group. Results. The mean IOD in monocular accommodative error for amblyopic subjects across all three viewing distances was 0.49 diopters (D) (95% confidence interval [CI], ±1.12 D) in the 180° meridian and 0.54 D (95% CI, ±1.27 D) in the 90° meridian, with the amblyopic eye exhibiting greater accommodative errors on average. Interocular difference in monocular accommodative error increased significantly with increasing accommodative demand; 5%, 47%, and 58% of amblyopic subjects had monocular errors in the amblyopic eye that fell outside the upper 95% confidence limit for the better eye of control subjects at viewing distances of 50, 33, and 25 cm, respectively. Conclusions. When viewing monocularly, children with unilateral amblyopia had greater mean accommodative errors in their amblyopic eyes than in their nonamblyopic eyes, and when compared with control subjects. This could lead to unintended retinal image defocus during patching therapy for amblyopia. PMID:25626970
Refractive errors in a rural Korean adult population: the Namil Study.
Yoo, Y C; Kim, J M; Park, K H; Kim, C Y; Kim, T-W
2013-12-01
To assess the prevalence of refractive errors, including myopia, high myopia, hyperopia, astigmatism, and anisometropia, in rural adult Koreans. We identified 2027 residents aged 40 years or older in Namil-myeon, a rural town in central South Korea. Of 1928 eligible residents, 1532 subjects (79.5%) participated. Each subject underwent screening examinations including autorefractometry, corneal curvature measurement, and best-corrected visual acuity. Data from 1215 phakic right eyes were analyzed. The prevalence of myopia (spherical equivalent (SE) <-0.5 diopters (D)) was 20.5% (95% confidence interval (CI): 18.2-22.8%), of high myopia (SE <-6.0 D) was 1.0% (95% CI: 0.4-1.5%), of hyperopia (SE>+0.5 D) was 41.8% (95% CI: 38.9-44.4%), of astigmatism (cylinder <-0.5 D) was 63.7% (95% CI: 61.0-66.4%), and of anisometropia (difference in SE between eyes >1.0 D) was 13.8% (95% CI: 11.9-15.8%). Myopia prevalence decreased with age and tended to transition into hyperopia with age up to 60-69 years. In subjects older than this, the trend in SE refractive errors reversed with age. The prevalence of astigmatism and anisometropia increased consistently with age. The refractive status was not significantly different between males and females. The prevalence of myopia and hyperopia in rural adult Koreans was similar to that of rural Chinese. The prevalence of high myopia was lower in this Korean sample than in other East Asian populations, and astigmatism was the most frequently occurring refractive error.
McQueen, Daniel S; Begg, Michael J; Maxwell, Simon R J
2010-10-01
Dose calculation errors can cause serious life-threatening clinical incidents. We designed eDrugCalc as an online self-assessment tool to develop and evaluate calculation skills among medical students. We undertook a prospective uncontrolled study involving 1727 medical students in years 1-5 at the University of Edinburgh. Students had continuous access to eDrugCalc and were encouraged to practise. Voluntary self-assessment was undertaken by answering the 20 questions on six occasions over 30 months. Questions remained fixed but numerical variables changed so each visit required a fresh calculation. Feedback was provided following each answer. Final-year students had a significantly higher mean score in test 6 compared with test 1 [16.6, 95% confidence interval (CI) 16.2, 17.0 vs. 12.6, 95% CI 11.9, 13.4; n= 173, P < 0.0001 Wilcoxon matched pairs test] and made a median of three vs. seven errors. Performance was highly variable in all tests with 2.7% of final-year students scoring < 10/20 in test 6. Graduating students in 2009 (30 months' exposure) achieved significantly better scores than those in 2007 (only 6 months): mean 16.5, 95% CI 16.0, 17.0, n= 184 vs. 15.1, 95% CI 14.5, 15.6, n= 187; P < 0.0001, Mann-Whitney test. Calculations based on percentage concentrations and infusion rates were poorly performed. Feedback showed that eDrugCalc increased confidence in calculating doses and was highly rated as a learning tool. Medical student performance of dose calculations improved significantly after repeated exposure to an online formative dose-calculation package and encouragement to develop their numeracy. Further research is required to establish whether eDrugCalc reduces calculation errors made in clinical practice. © 2010 The Authors. British Journal of Clinical Pharmacology © 2010 The British Pharmacological Society.
Galaxy And Mass Assembly (GAMA): AUTOZ spectral redshift measurements, confidence and errors
NASA Astrophysics Data System (ADS)
Baldry, I. K.; Alpaslan, M.; Bauer, A. E.; Bland-Hawthorn, J.; Brough, S.; Cluver, M. E.; Croom, S. M.; Davies, L. J. M.; Driver, S. P.; Gunawardhana, M. L. P.; Holwerda, B. W.; Hopkins, A. M.; Kelvin, L. S.; Liske, J.; López-Sánchez, Á. R.; Loveday, J.; Norberg, P.; Peacock, J.; Robotham, A. S. G.; Taylor, E. N.
2014-07-01
The Galaxy And Mass Assembly (GAMA) survey has obtained spectra of over 230 000 targets using the Anglo-Australian Telescope. To homogenize the redshift measurements and improve the reliability, a fully automatic redshift code was developed (AUTOZ). The measurements were made using a cross-correlation method for both the absorption- and the emission-line spectra. Large deviations in the high-pass-filtered spectra are partially clipped in order to be robust against uncorrected artefacts and to reduce the weight given to single-line matches. A single figure of merit (FOM) was developed that puts all template matches on to a similar confidence scale. The redshift confidence as a function of the FOM was fitted with a tanh function using a maximum likelihood method applied to repeat observations of targets. The method could be adapted to provide robust automatic redshifts for other large galaxy redshift surveys. For the GAMA survey, there was a substantial improvement in the reliability of assigned redshifts and in the lowering of redshift uncertainties with a median velocity uncertainty of 33 km s-1.
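The tanh mapping from figure of merit (FOM) to redshift confidence can be fitted by maximum likelihood on repeat observations scored as agreeing or not. The sketch below assumes a two-parameter form p = 0.5*(1 + tanh((FOM - a)/b)); the exact parameterization and fitting details used by AUTOZ may differ.

```python
import numpy as np
from scipy.optimize import minimize

def fit_confidence_curve(fom, correct):
    """Fit P(redshift correct | FOM) = 0.5*(1 + tanh((fom - a)/b)) by maximum
    likelihood, using repeat observations scored as correct (1) or not (0)."""
    fom = np.asarray(fom, dtype=float)
    correct = np.asarray(correct, dtype=float)

    def neg_log_like(params):
        a, b = params
        p = 0.5 * (1.0 + np.tanh((fom - a) / b))
        p = np.clip(p, 1e-9, 1 - 1e-9)  # guard against log(0)
        return -np.sum(correct * np.log(p) + (1 - correct) * np.log(1 - p))

    res = minimize(neg_log_like, x0=[np.median(fom), 1.0], method="Nelder-Mead")
    return res.x  # fitted (a, b)
```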
Medication administration errors in nursing homes using an automated medication dispensing system.
van den Bemt, Patricia M L A; Idzinga, Jetske C; Robertz, Hans; Kormelink, Dennis Groot; Pels, Neske
2009-01-01
OBJECTIVE To identify the frequency of medication administration errors as well as their potential risk factors in nursing homes using a distribution robot. DESIGN The study was a prospective, observational study conducted within three nursing homes in the Netherlands caring for 180 individuals. MEASUREMENTS Medication errors were measured using the disguised observation technique. Types of medication errors were described. The correlation between several potential risk factors and the occurrence of medication errors was studied to identify potential causes for the errors. RESULTS In total 2,025 medication administrations to 127 clients were observed. In these administrations 428 errors were observed (21.2%). The most frequently occurring types of errors were use of wrong administration techniques (especially incorrect crushing of medication and not supervising the intake of medication) and wrong time errors (administering the medication at least 1 h early or late).The potential risk factors female gender (odds ratio (OR) 1.39; 95% confidence interval (CI) 1.05-1.83), ATC medication class antibiotics (OR 11.11; 95% CI 2.66-46.50), medication crushed (OR 7.83; 95% CI 5.40-11.36), number of dosages/day/client (OR 1.03; 95% CI 1.01-1.05), nursing home 2 (OR 3.97; 95% CI 2.86-5.50), medication not supplied by distribution robot (OR 2.92; 95% CI 2.04-4.18), time classes "7-10 am" (OR 2.28; 95% CI 1.50-3.47) and "10 am-2 pm" (OR 1.96; 1.18-3.27) and day of the week "Wednesday" (OR 1.46; 95% CI 1.03-2.07) are associated with a higher risk of administration errors. CONCLUSIONS Medication administration in nursing homes is prone to many errors. This study indicates that the handling of the medication after removing it from the robot packaging may contribute to this high error frequency, which may be reduced by training of nurse attendants, by automated clinical decision support and by measures to reduce workload.
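The odds ratios and confidence intervals reported above are standard epidemiological quantities. As an illustration (with hypothetical counts, not the study's data), an odds ratio with a Woolf-type 95% CI can be computed from a 2x2 table as follows.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with Woolf (log-normal) 95% CI for a 2x2 table:
    a = exposed with error, b = exposed without, c = unexposed with error, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

print(odds_ratio_ci(120, 880, 60, 940))  # hypothetical counts
```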
An Analysis of Operational Suitability for Test and Evaluation of Highly Reliable Systems
1994-03-04
Exposition," Journal of the American Statistical A iation-59: 353-375 (June 1964). 17. SYS 229, Test and Evaluation Management Coursebook , School of Systems...in hours, 0 is 2-5 the desired MTBCF in hours, R is the number of critical failures, and a is the P[type-I error] of the X2 statistic with 2*R+2...design of experiments (DOE) tables and the use of Bayesian statistics to increase the confidence level of the test results that will be obtained from
Determination of the mean solid-liquid interface energy of pivalic acid
NASA Technical Reports Server (NTRS)
Singh, N. B.; Gliksman, M. E.
1989-01-01
A high-confidence solid-liquid interfacial energy is determined for an anisotropic material. A coaxial composite having a cylindrical specimen chamber geometry provides a thermal gradient with an axial heating wire. The surface energy is derived from measurements of grain boundary groove shapes. Applying this method to pivalic acid, a surface energy of 2.84 erg/sq cm was determined with a total systematic and random error less than 10 percent. The value of interfacial energy corresponds to 24 percent of the latent heat of fusion per molecule.
Gaskin, Cadeyrn J; Happell, Brenda
2014-05-01
To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. The use, reporting, and interpretation of inferential statistics in nursing research need substantial improvement. Most importantly, researchers should abandon the misleading practice of interpreting the results from inferential tests based solely on whether they are statistically significant (or not) and, instead, focus on reporting and interpreting effect sizes, confidence intervals, and significance levels. Nursing researchers also need to conduct and report a priori power analyses, and to address the issue of Type I experiment-wise error inflation in their studies. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
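The experiment-wise Type I error rate discussed above follows directly from the number of uncorrected tests per paper: under independence it is 1 - (1 - alpha)^m. The snippet below illustrates the arithmetic; the specific test counts are examples, not data from the review.

```python
def experimentwise_type1_error(alpha=0.05, n_tests=1):
    """Probability of at least one false positive across n independent tests at level alpha."""
    return 1 - (1 - alpha) ** n_tests

for m in (1, 5, 15, 31):
    print(m, round(experimentwise_type1_error(0.05, m), 2))
# About 15 uncorrected tests at alpha = .05 already give ~0.54, the median rate reported in the review.
```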
Farwell, Lawrence A; Richardson, Drew C; Richardson, Graham M
2013-08-01
Brain fingerprinting detects concealed information stored in the brain by measuring brainwave responses. We compared P300 and P300-MERMER event-related brain potentials for error rate/accuracy and statistical confidence in four field/real-life studies. 76 tests detected presence or absence of information regarding (1) real-life events including felony crimes; (2) real crimes with substantial consequences (either a judicial outcome, i.e., evidence admitted in court, or a $100,000 reward for beating the test); (3) knowledge unique to FBI agents; and (4) knowledge unique to explosives (EOD/IED) experts. With both P300 and P300-MERMER, error rate was 0 %: determinations were 100 % accurate, no false negatives or false positives; also no indeterminates. Countermeasures had no effect. Median statistical confidence for determinations was 99.9 % with P300-MERMER and 99.6 % with P300. Brain fingerprinting methods and scientific standards for laboratory and field applications are discussed. Major differences in methods that produce different results are identified. Markedly different methods in other studies have produced over 10 times higher error rates and markedly lower statistical confidences than those of these, our previous studies, and independent replications. Data support the hypothesis that accuracy, reliability, and validity depend on following the brain fingerprinting scientific standards outlined herein.
ERIC Educational Resources Information Center
Raykov, Tenko; Penev, Spiridon
2006-01-01
Unlike a substantial part of reliability literature in the past, this article is concerned with weighted combinations of a given set of congeneric measures with uncorrelated errors. The relationship between maximal coefficient alpha and maximal reliability for such composites is initially dealt with, and it is shown that the former is a lower…
Mansourian, Robert; Mutch, David M; Antille, Nicolas; Aubert, Jerome; Fogel, Paul; Le Goff, Jean-Marc; Moulin, Julie; Petrov, Anton; Rytz, Andreas; Voegel, Johannes J; Roberts, Matthew-Alan
2004-11-01
Microarray technology has become a powerful research tool in many fields of study; however, the cost of microarrays often results in the use of a low number of replicates (k). Under circumstances where k is low, it becomes difficult to perform standard statistical tests to extract the most biologically significant experimental results. Other more advanced statistical tests have been developed; however, their use and interpretation often remain difficult to implement in routine biological research. The present work outlines a method that achieves sufficient statistical power for selecting differentially expressed genes under conditions of low k, while remaining as an intuitive and computationally efficient procedure. The present study describes a Global Error Assessment (GEA) methodology to select differentially expressed genes in microarray datasets, and was developed using an in vitro experiment that compared control and interferon-gamma treated skin cells. In this experiment, up to nine replicates were used to confidently estimate error, thereby enabling methods of different statistical power to be compared. Gene expression results of a similar absolute expression are binned, so as to enable a highly accurate local estimate of the mean squared error within conditions. The model then relates variability of gene expression in each bin to absolute expression levels and uses this in a test derived from the classical ANOVA. The GEA selection method is compared with both the classical and permutational ANOVA tests, and demonstrates an increased stability, robustness and confidence in gene selection. A subset of the selected genes were validated by real-time reverse transcription-polymerase chain reaction (RT-PCR). All these results suggest that GEA methodology is (i) suitable for selection of differentially expressed genes in microarray data, (ii) intuitive and computationally efficient and (iii) especially advantageous under conditions of low k. The GEA code for R software is freely available upon request to authors.
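The core of the GEA idea, as described above, is to pool the within-condition error of genes with similar absolute expression and use that locally pooled variance in an ANOVA-type test. The sketch below is a loose, simplified reading of that procedure (the binning scheme, pooling rule, and degrees of freedom are all assumptions), not the published algorithm.

```python
import numpy as np
from scipy import stats

def gea_like_pvalues(ctrl, trt, n_bins=50):
    """Two-condition test with locally pooled error.
    ctrl, trt: (n_genes, k_replicates) arrays of log-scale expression values."""
    ctrl, trt = np.asarray(ctrl, float), np.asarray(trt, float)
    n_genes, k = ctrl.shape

    # Per-gene within-condition mean squared error (the classical ANOVA residual MSE)
    mse = (ctrl.var(axis=1, ddof=1) + trt.var(axis=1, ddof=1)) / 2.0

    # Bin genes by average absolute expression and pool the MSE within each bin
    abundance = (ctrl.mean(axis=1) + trt.mean(axis=1)) / 2.0
    edges = np.quantile(abundance, np.linspace(0, 1, n_bins + 1))
    bin_idx = np.clip(np.digitize(abundance, edges[1:-1]), 0, n_bins - 1)
    pooled = np.full(n_bins, mse.mean())
    for b in range(n_bins):
        members = mse[bin_idx == b]
        if members.size:
            pooled[b] = members.mean()

    # t-type statistic using the locally pooled error; pooling inflates the degrees of freedom
    diff = trt.mean(axis=1) - ctrl.mean(axis=1)
    se = np.sqrt(2.0 * pooled[bin_idx] / k)
    df = np.bincount(bin_idx, minlength=n_bins)[bin_idx] * 2 * (k - 1)
    return 2.0 * stats.t.sf(np.abs(diff) / se, df)
```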
Prenatal Depressive Symptoms and Postpartum Sexual Risk Among Young Urban Women of Color.
Cunningham, S D; Smith, A; Kershaw, T; Lewis, J B; Cassells, A; Tobin, J N; Ickovics, J R
2016-02-01
To determine whether prenatal depressive symptoms are associated with postpartum sexual risk among young, urban women of color. Participants completed surveys during their second trimester of pregnancy and at 1 year postpartum. Depressive symptoms were measured using the Center for Epidemiologic Studies-Depression Scale, excluding somatic items because women were pregnant. Logistic and linear regression models adjusted for known predictors of sexual risk and baseline outcome variables were used to assess whether prenatal depressive symptoms make an independent contribution to sexual risk over time. Fourteen community health centers and hospitals in New York City. The participants included 757 predominantly black and Latina (91%, n = 692) pregnant teens and young women aged 14-21 years. The main outcome measures were number of sex partners, condom use, exposure to high-risk sex partners, diagnosis of a sexually transmitted disease, and repeat pregnancy. High levels of prenatal depressive symptoms were significantly associated with increased number of sex partners (β = 0.17; standard error, 0.08), decreased condom use (β = -7.16; standard error, 3.08), and greater likelihood of having had sex with a high-risk partner (odds ratio = 1.84; 95% confidence interval, 1.26-2.70), and repeat pregnancy (odds ratio = 1.72; 95% confidence interval, 1.09-2.72), among participants who were sexually active (all P < .05). Prenatal depressive symptoms were not associated with whether participants engaged in postpartum sexual activity or sexually transmitted disease incidence. Screening and treatment for depression should be available routinely to women at risk for antenatal depression. Copyright © 2016 North American Society for Pediatric and Adolescent Gynecology. Published by Elsevier Inc. All rights reserved.
Kuang, T-M; Tsai, S-Y; Liu, C J-L; Ko, Y-C; Lee, S-M; Chou, P
2016-01-01
Purpose To report the 7-year incidence of uncorrected refractive error in a metropolitan Chinese elderly population. Methods The Shihpai Eye Study 2006 included 460/824 (55.8%) subjects (age range 72–94 years old) of 1361 participants in the 1999 baseline survey for a follow-up eye examination. Visual acuity was assessed using a Snellen chart, uncorrected refractive error was defined as presenting visual acuity (naked eye if without spectacles and with distance spectacles if worn) in the better eye of <6/12 that improved to no impairment (≥6/12) after refractive correction. Results The 7-year incidence of uncorrected refractive error was 10.5% (95% confidence interval (CI): 7.6–13.4%). 92.7% of participants with uncorrection and 77.8% with undercorrection were able to improve at least two lines of visual acuity by refractive correction. In multivariate analysis controlling for covariates, uncorrected refractive error was significantly related to myopia (relative risk (RR): 3.15; 95% CI: 1.31–7.58) and living alone (RR: 2.94; 95% CI 1.14–7.53), whereas distance spectacles worn during examination was protective (RR: 0.35; 95% CI: 0.14–0.88). Conclusion Our study indicated that the incidence of uncorrected refractive error was high (10.5%) in this elderly Chinese population. Living alone and myopia are predisposing factors, whereas wearing distance spectacles at examination is protective. PMID:26795416
A precision analogue integrator system for heavy current measurement in MFDC resistance spot welding
NASA Astrophysics Data System (ADS)
Xia, Yu-Jun; Zhang, Zhong-Dian; Xia, Zhen-Xin; Zhu, Shi-Liang; Zhang, Rui
2016-02-01
In order to control and monitor the quality of middle frequency direct current (MFDC) resistance spot welding (RSW), precision measurement of the welding current up to 100 kA is required, for which Rogowski coils are the only viable current transducers at present. Thus, a highly accurate analogue integrator is the key to restoring the converted signals collected from the Rogowski coils. Previous studies emphasised that integration drift is a major factor influencing the performance of analogue integrators, but capacitive leakage error also has a significant impact on the result, especially in long-time pulse integration. In this article, new methods of measuring and compensating capacitive leakage error are proposed to fabricate a precision analogue integrator system for MFDC RSW. A voltage holding test is carried out to measure the integration error caused by capacitive leakage, and an original integrator with a feedback adder is designed to compensate capacitive leakage error in real time. The experimental results and statistical analysis show that the new analogue integrator system can constrain both drift and capacitive leakage error, and that this effect is robust across different output voltage levels. The total integration error is limited to within ±0.09 mV s-1 (0.005% s-1 of full scale) at a 95% confidence level, which makes it possible to achieve precision measurement of the welding current of MFDC RSW with Rogowski coils of 0.1% accuracy class.
Ultrasound transducer function: annual testing is not sufficient.
Mårtensson, Mattias; Olsson, Mats; Brodin, Lars-Åke
2010-10-01
The objective was to follow-up the study 'High incidence of defective ultrasound transducers in use in routine clinical practice' and evaluate if annual testing is good enough to reduce the incidence of defective ultrasound transducers in routine clinical practice to an acceptable level. A total of 299 transducers were tested in 13 clinics at five hospitals in the Stockholm area. Approximately 7000-15,000 ultrasound examinations are carried out at these clinics every year. The transducers tested in the study had been tested and classified as fully operational 1 year before and since then been in normal use in the routine clinical practice. The transducers were tested with the Sonora FirstCall Test System. There were 81 (27.1%) defective transducers found; giving a 95% confidence interval ranging from 22.1 to 32.1%. The most common transducer errors were 'delamination' of the ultrasound lens and 'break in the cable' which together constituted 82.7% of all transducer errors found. The highest error rate was found at the radiological clinics with a mean error rate of 36.0%. There was a significant difference in error rate between two observed ways the clinics handled the transducers. There was no significant difference in the error rates of the transducer brands or the transducers models. Annual testing is not sufficient to reduce the incidence of defective ultrasound transducers in routine clinical practice to an acceptable level and it is strongly advisable to create a user routine that minimizes the handling of the transducers.
Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik; ...
2017-02-15
Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Finally, we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤ 6.7 × 10^-4).
Software for Quantifying and Simulating Microsatellite Genotyping Error
Johnson, Paul C.D.; Haydon, Daniel T.
2007-01-01
Microsatellite genetic marker data are exploited in a variety of fields, including forensics, gene mapping, kinship inference and population genetics. In all of these fields, inference can be thwarted by failure to quantify and account for data errors, and kinship inference in particular can benefit from separating errors into two distinct classes: allelic dropout and false alleles. Pedant is MS Windows software for estimating locus-specific maximum likelihood rates of these two classes of error. Estimation is based on comparison of duplicate error-prone genotypes: neither reference genotypes nor pedigree data are required. Other functions include: plotting of error rate estimates and confidence intervals; simulations for performing power analysis and for testing the robustness of error rate estimates to violation of the underlying assumptions; and estimation of expected heterozygosity, which is a required input. The program, documentation and source code are available from http://www.stats.gla.ac.uk/~paulj/pedant.html. PMID:20066126
Creating illusions of knowledge: learning errors that contradict prior knowledge.
Fazio, Lisa K; Barber, Sarah J; Rajaram, Suparna; Ornstein, Peter A; Marsh, Elizabeth J
2013-02-01
Most people know that the Pacific is the largest ocean on Earth and that Edison invented the light bulb. Our question is whether this knowledge is stable, or if people will incorporate errors into their knowledge bases, even if they have the correct knowledge stored in memory. To test this, we asked participants general-knowledge questions 2 weeks before they read stories that contained errors (e.g., "Franklin invented the light bulb"). On a later general-knowledge test, participants reproduced story errors despite previously answering the questions correctly. This misinformation effect was found even for questions that were answered correctly on the initial test with the highest level of confidence. Furthermore, prior knowledge offered no protection against errors entering the knowledge base; the misinformation effect was equivalent for previously known and unknown facts. Errors can enter the knowledge base even when learners have the knowledge necessary to catch the errors. 2013 APA, all rights reserved
Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik; Rudinger, Kenneth; Mizrahi, Jonathan; Fortier, Kevin; Maunz, Peter
2017-01-01
Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Here we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤ 6.7 × 10^-4). PMID:28198466
Confidence Preserving Machine for Facial Action Unit Detection
Zeng, Jiabei; Chu, Wen-Sheng; De la Torre, Fernando; Cohn, Jeffrey F.; Xiong, Zhang
2016-01-01
Facial action unit (AU) detection from video has been a long-standing problem in automated facial expression analysis. While progress has been made, accurate detection of facial AUs remains challenging due to ubiquitous sources of errors, such as inter-personal variability, pose, and low-intensity AUs. In this paper, we refer to samples causing such errors as hard samples, and the remaining as easy samples. To address learning with the hard samples, we propose the Confidence Preserving Machine (CPM), a novel two-stage learning framework that combines multiple classifiers following an “easy-to-hard” strategy. During the training stage, CPM learns two confident classifiers. Each classifier focuses on separating easy samples of one class from all else, and thus preserves confidence on predicting each class. During the testing stage, the confident classifiers provide “virtual labels” for easy test samples. Given the virtual labels, we propose a quasi-semi-supervised (QSS) learning strategy to learn a person-specific (PS) classifier. The QSS strategy employs a spatio-temporal smoothness that encourages similar predictions for samples within a spatio-temporal neighborhood. In addition, to further improve detection performance, we introduce two CPM extensions: iCPM that iteratively augments training samples to train the confident classifiers, and kCPM that kernelizes the original CPM model to promote nonlinearity. Experiments on four spontaneous datasets GFT [15], BP4D [56], DISFA [42], and RU-FACS [3] illustrate the benefits of the proposed CPM models over baseline methods and state-of-the-art semisupervised learning and transfer learning methods. PMID:27479964
Discriminative confidence estimation for probabilistic multi-atlas label fusion.
Benkarim, Oualid M; Piella, Gemma; González Ballester, Miguel Angel; Sanroma, Gerard
2017-12-01
Quantitative neuroimaging analyses often rely on the accurate segmentation of anatomical brain structures. In contrast to manual segmentation, automatic methods offer reproducible outputs and provide scalability to study large databases. Among existing approaches, multi-atlas segmentation has recently been shown to yield state-of-the-art performance in automatic segmentation of brain images. It consists in propagating the labelmaps from a set of atlases to the anatomy of a target image using image registration, and then fusing these multiple warped labelmaps into a consensus segmentation on the target image. Accurately estimating the contribution of each atlas labelmap to the final segmentation is a critical step for the success of multi-atlas segmentation. Common approaches to label fusion either rely on local patch similarity, probabilistic statistical frameworks or a combination of both. In this work, we propose a probabilistic label fusion framework based on atlas label confidences computed at each voxel of the structure of interest. Maximum likelihood atlas confidences are estimated using a supervised approach, explicitly modeling the relationship between local image appearances and segmentation errors produced by each of the atlases. We evaluate different spatial pooling strategies for modeling local segmentation errors. We also present a novel type of label-dependent appearance features based on atlas labelmaps that are used during confidence estimation to increase the accuracy of our label fusion. Our approach is evaluated on the segmentation of seven subcortical brain structures from the MICCAI 2013 SATA Challenge dataset and the hippocampi from the ADNI dataset. Overall, our results indicate that the proposed label fusion framework achieves superior performance to state-of-the-art approaches in the majority of the evaluated brain structures and shows more robustness to registration errors. Copyright © 2017 Elsevier B.V. All rights reserved.
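A minimal sketch of the basic mechanism behind confidence-weighted label fusion: each warped atlas labelmap votes for a label at every voxel, with votes weighted by a per-atlas, per-voxel confidence, and the consensus is the highest-weighted label. The arrays here are synthetic and the confidences random; the paper's supervised confidence estimation and label-dependent appearance features are not reproduced.

```python
# Confidence-weighted voting for multi-atlas label fusion (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n_atlases, n_voxels, n_labels = 5, 1000, 3

# Warped atlas labelmaps (one label per voxel per atlas) and their confidences.
warped_labels = rng.integers(0, n_labels, size=(n_atlases, n_voxels))
confidence = rng.random(size=(n_atlases, n_voxels))   # e.g. learned from appearance

# Accumulate weighted votes per label and take the argmax as the consensus.
votes = np.zeros((n_labels, n_voxels))
for a in range(n_atlases):
    for lab in range(n_labels):
        votes[lab] += confidence[a] * (warped_labels[a] == lab)

consensus = votes.argmax(axis=0)
print("consensus segmentation shape:", consensus.shape)
```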
New methodology to reconstruct in 2-D the cuspal enamel of modern human lower molars.
Modesto-Mata, Mario; García-Campos, Cecilia; Martín-Francés, Laura; Martínez de Pinillos, Marina; García-González, Rebeca; Quintino, Yuliet; Canals, Antoni; Lozano, Marina; Dean, M Christopher; Martinón-Torres, María; Bermúdez de Castro, José María
2017-08-01
In recent years, different methodologies have been developed to reconstruct worn teeth. In this article, we propose a new 2-D methodology to reconstruct the worn enamel of lower molars. Our main goals are to reconstruct molars with a high level of accuracy when measuring relevant histological variables and to validate the methodology calculating the errors associated with the measurements. This methodology is based on polynomial regression equations, and has been validated using two different dental variables: cuspal enamel thickness and crown height of the protoconid. In order to perform the validation process, simulated worn modern human molars were employed. The associated errors of the measurements were also estimated applying methodologies previously proposed by other authors. The mean percentage error estimated in reconstructed molars for these two variables in comparison with their own real values is -2.17% for the cuspal enamel thickness of the protoconid and -3.18% for the crown height of the protoconid. These errors represent a significant improvement over other methodologies, both in interobserver error and in the accuracy of the measurements. The new methodology based on polynomial regressions can be confidently applied to the reconstruction of cuspal enamel of lower molars, as it improves the accuracy of the measurements and reduces the interobserver error. The present study shows that it is important to validate all methodologies in order to know the associated errors. This new methodology can be easily exported to other modern human populations, the human fossil record and forensic sciences. © 2017 Wiley Periodicals, Inc.
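A toy sketch of the underlying idea: fit a polynomial regression to the preserved portion of a crown profile, extrapolate the worn region, and quantify the mean percentage error against a known reference. The profile, wear fraction, and polynomial degree are synthetic choices for illustration, not the paper's calibrated equations.

```python
# Polynomial reconstruction of a simulated worn profile and its percentage error.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "true" enamel profile (height vs. position) with measurement noise.
x = np.linspace(0.0, 1.0, 50)
true_profile = 1.0 - 0.8 * (x - 0.5) ** 2
observed = true_profile + rng.normal(scale=0.01, size=x.size)

# Simulate wear: the top 30% of the crown is missing.
preserved = x < 0.7

# Fit a polynomial to the preserved portion and extrapolate the worn region.
coeffs = np.polyfit(x[preserved], observed[preserved], deg=2)
reconstructed = np.polyval(coeffs, x)

# Mean percentage error of the reconstructed (worn) region vs. the true values.
worn = ~preserved
mpe = 100.0 * np.mean((reconstructed[worn] - true_profile[worn]) / true_profile[worn])
print(f"mean percentage error over the reconstructed region: {mpe:.2f}%")
```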
Refractive Errors in Koreans: The Korea National Health and Nutrition Examination Survey 2008-2012.
Rim, Tyler Hyungtaek; Kim, Seung-Hyun; Lim, Key Hwan; Choi, Moonjung; Kim, Hye Young; Baek, Seung-Hee
2016-06-01
Our study provides epidemiologic data on the prevalence of refractive errors in all age groups ≥5 years in Korea. From 2008 to 2012, a total of 33,355 participants aged ≥5 years underwent ophthalmologic examinations. Using the right eye, myopia was defined as a spherical equivalent (SE) less than -0.5 or -1.0 diopters (D) in subjects aged 19 years and older or as an SE less than -0.75 or -1.25 D in subjects aged 5 to 18 years according to non-cycloplegic refraction. Other refractive errors were defined as follows: high myopia as an SE less than -6.0 D; hyperopia as an SE larger than +0.5 D; and astigmatism as a cylindrical error less than -1.0 D. The prevalence and risk factors of myopia were evaluated. Prevalence rates with a 95% confidence interval were determined for myopia (SE <-0.5 D, 51.9% [51.2 to 52.7]; SE <-1.0 D, 39.6% [38.8 to 40.3]), high myopia (5.0% [4.7 to 5.3]), hyperopia (13.4% [12.9 to 13.9]), and astigmatism (31.2% [30.5 to 32.0]). The prevalence of myopia demonstrated a nonlinear distribution with the highest peak between the ages of 19 and 29 years. The prevalence of hyperopia decreased with age in subjects aged 39 years or younger and then increased with age in subjects aged 40 years or older. The prevalence of astigmatism gradually increased with age. Education was associated with all refractive errors; myopia was more prevalent and hyperopia and astigmatism were less prevalent in the highly educated groups. In young generations, the prevalence of myopia in Korea was much higher compared to the white or black populations in Western countries and is consistent with the high prevalence found in most other Asian countries. The overall prevalence of hyperopia was much lower compared to that of the white Western population. Age and education level were significant predictive factors associated with all kinds of refractive errors.
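A small sketch applying the survey's refractive-error definitions (adult criteria, right eye) to a single refraction. The thresholds follow the abstract; the function name, the choice of the -0.5 D myopia cutoff, and the record format are illustrative assumptions.

```python
# Classify one refraction using the definitions stated in the abstract.
def classify_refraction(sphere_d, cylinder_d, age_years):
    """Return the refractive-error categories for one (right) eye."""
    se = sphere_d + cylinder_d / 2.0          # spherical equivalent
    myopia_cut = -0.5 if age_years >= 19 else -0.75
    categories = []
    if se < myopia_cut:
        categories.append("myopia")
    if se < -6.0:
        categories.append("high myopia")
    if se > 0.5:
        categories.append("hyperopia")
    if cylinder_d < -1.0:
        categories.append("astigmatism")
    return categories or ["emmetropia"]

# Example: a 25-year-old with sphere -3.25 D and cylinder -1.50 D.
print(classify_refraction(-3.25, -1.50, 25))   # ['myopia', 'astigmatism']
```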
Rank score and permutation testing alternatives for regression quantile estimates
Cade, B.S.; Richards, J.D.; Mielke, P.W.
2006-01-01
Performance of quantile rank score tests used for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1) was evaluated by simulation for models with p = 2 and 6 predictors, moderate collinearity among predictors, homogeneous and heterogeneous errors, small to moderate samples (n = 20–300), and central to upper quantiles (0.50–0.99). Test statistics evaluated were the conventional quantile rank score T statistic distributed as a χ2 random variable with q degrees of freedom (where q parameters are constrained by H0) and an F statistic with its sampling distribution approximated by permutation. The permutation F-test maintained better Type I errors than the T-test for homogeneous error models with smaller n and more extreme quantiles τ. An F distributional approximation of the F statistic provided some improvements in Type I errors over the T-test for models with > 2 parameters, smaller n, and more extreme quantiles but not as much improvement as the permutation approximation. Both rank score tests required weighting to maintain correct Type I errors when heterogeneity under the alternative model increased to 5 standard deviations across the domain of X. A double permutation procedure was developed to provide valid Type I errors for the permutation F-test when null models were forced through the origin. Power was similar for conditions where both T- and F-tests maintained correct Type I errors but the F-test provided some power at smaller n and extreme quantiles when the T-test had no power because of excessively conservative Type I errors. When the double permutation scheme was required for the permutation F-test to maintain valid Type I errors, power was less than for the T-test with decreasing sample size and increasing quantiles. Confidence intervals on parameters and tolerance intervals for future predictions were constructed based on test inversion for an example application relating trout densities to stream channel width:depth.
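A simplified stand-in showing the mechanics of a permutation test for a regression association. It uses ordinary least squares with the response permuted under the null of no association, not the quantile rank score or double permutation procedures studied above; the sample size, true slope, and number of permutations are arbitrary.

```python
# Permutation p-value for an OLS slope (simplified illustration only).
import numpy as np

rng = np.random.default_rng(3)
n = 60
x = rng.normal(size=n)
y = 0.4 * x + rng.normal(size=n)        # true slope 0.4

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

observed = slope(x, y)

n_perm = 5000
perm_slopes = np.array([slope(x, rng.permutation(y)) for _ in range(n_perm)])

# Two-sided permutation p-value (with the +1 correction).
p_value = (np.sum(np.abs(perm_slopes) >= abs(observed)) + 1) / (n_perm + 1)
print(f"observed slope = {observed:.3f}, permutation p = {p_value:.4f}")
```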
Investigation of several aspects of LANDSAT-4 data quality
NASA Technical Reports Server (NTRS)
Wrigley, R. C. (Principal Investigator)
1983-01-01
No insurmountable problems in change detection analysis were found when portions of scenes collected simultaneously by LANDSAT 4 MSS and either LANDSAT 2 or 3 were compared. The cause of the periodic noise in LANDSAT 4 MSS images, which had an RMS value of approximately 2 DN, should be corrected in the LANDSAT D instrument before its launch. Analysis of the P-tape of the Arkansas scene shows bands within the same focal plane very well registered except for the thermal band, which was misregistered by approximately three 28.5 meter pixels in both directions. It is possible to derive tight confidence bounds for the registration errors. Preliminary analyses of the Sacramento and Arkansas scenes reveal a very high degree of consistency with earlier results for bands 3 vs 1, 3 vs 4, and 3 vs 5. Results are presented in table form. It is suggested that attention be given to the standard deviations of registration errors to judge whether or not they will be within specification once any known mean registration errors are corrected. Techniques used for MTF analysis of a Washington scene produced noisy results.
NASA Astrophysics Data System (ADS)
Smith, Gennifer T.; Dwork, Nicholas; Khan, Saara A.; Millet, Matthew; Magar, Kiran; Javanmard, Mehdi; Bowden, Audrey K.
2017-03-01
Urinalysis dipsticks were designed to revolutionize urine-based medical diagnosis. They are cheap, extremely portable, and have multiple assays patterned on a single platform. They were also meant to be incredibly easy to use. Unfortunately, there are many aspects in both the preparation and the analysis of the dipsticks that are plagued by user error. This high error is one reason that dipsticks have failed to flourish in both the at-home market and in low-resource settings. Sources of error include: inaccurate volume deposition, varying lighting conditions, inconsistent timing measurements, and misinterpreted color comparisons. We introduce a novel manifold and companion software for dipstick urinalysis that eliminates the aforementioned error sources. A micro-volume slipping manifold ensures precise sample delivery, an opaque acrylic box guarantees consistent lighting conditions, a simple sticker-based timing mechanism maintains accurate timing, and custom software that processes video data captured by a mobile phone ensures proper color comparisons. We show that the results obtained with the proposed device are as accurate and consistent as a properly executed dip-and-wipe method, the industry gold-standard, suggesting the potential for this strategy to enable confident urinalysis testing. Furthermore, the proposed all-acrylic slipping manifold is reusable and low in cost, making it a potential solution for at-home users and low-resource settings.
Assessing Mediational Models: Testing and Interval Estimation for Indirect Effects.
Biesanz, Jeremy C; Falk, Carl F; Savalei, Victoria
2010-08-06
Theoretical models specifying indirect or mediated effects are common in the social sciences. An indirect effect exists when an independent variable's influence on the dependent variable is mediated through an intervening variable. Classic approaches to assessing such mediational hypotheses (Baron & Kenny, 1986; Sobel, 1982) have in recent years been supplemented by computationally intensive methods such as bootstrapping, the distribution of the product methods, and hierarchical Bayesian Markov chain Monte Carlo (MCMC) methods. These different approaches for assessing mediation are illustrated using data from Dunn, Biesanz, Human, and Finn (2007). However, little is known about how these methods perform relative to each other, particularly in more challenging situations, such as with data that are incomplete and/or nonnormal. This article presents an extensive Monte Carlo simulation evaluating a host of approaches for assessing mediation. We examine Type I error rates, power, and coverage. We study normal and nonnormal data as well as complete and incomplete data. In addition, we adapt a method, recently proposed in the statistical literature, that does not rely on confidence intervals (CIs) to test the null hypothesis of no indirect effect. The results suggest that the new inferential method, the partial posterior p value, slightly outperforms existing ones in terms of maintaining Type I error rates while maximizing power, especially with incomplete data. Among confidence interval approaches, the bias-corrected accelerated (BCa) bootstrapping approach often has inflated Type I error rates and inconsistent coverage and is not recommended. In contrast, the bootstrapped percentile confidence interval and the hierarchical Bayesian MCMC method perform best overall, maintaining Type I error rates, exhibiting reasonable power, and producing stable and accurate coverage rates.
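A minimal sketch of one of the interval approaches compared above, the percentile bootstrap confidence interval for an indirect effect a*b. The data are simulated, the paths are fit by simple least squares, and this is not the partial posterior p-value method the article adapts.

```python
# Percentile bootstrap CI for an indirect (mediated) effect a*b.
import numpy as np

rng = np.random.default_rng(4)
n = 200
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)          # mediator
y = 0.4 * m + 0.1 * x + rng.normal(size=n)

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                       # x -> m path
    # m -> y path controlling for x (multiple regression via least squares)
    X = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]
    return a * b

boot = np.empty(2000)
for i in range(boot.size):
    idx = rng.integers(0, n, size=n)                 # resample cases with replacement
    boot[i] = indirect(x[idx], m[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect(x, m, y):.3f}, "
      f"95% percentile CI = [{lo:.3f}, {hi:.3f}]")
```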
Correcting AUC for Measurement Error.
Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang
2015-12-01
Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
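A simulation sketch of the problem the paper addresses: classical measurement error attenuates the empirical AUC toward 0.5. For comparison, a textbook normal-theory deattenuation is shown (it assumes normal biomarkers with known error variance and equal variances); the paper's proposed correction does not require normality and is not reproduced here.

```python
# AUC attenuation under measurement error, plus a normal-theory correction.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 2000
controls_true = rng.normal(0.0, 1.0, n)
cases_true = rng.normal(1.0, 1.0, n)
sigma_e = 0.8                                   # measurement-error SD (assumed known)

def empirical_auc(cases, controls):
    # Mann-Whitney estimator of P(case biomarker > control biomarker).
    diff = cases[:, None] - controls[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

auc_true = empirical_auc(cases_true, controls_true)
auc_obs = empirical_auc(cases_true + rng.normal(0, sigma_e, n),
                        controls_true + rng.normal(0, sigma_e, n))

# Normal-theory correction: AUC = Phi(delta / sqrt(2*var)), so scale the probit of
# the observed AUC by sqrt(1 + error-to-true variance ratio).
lam = sigma_e**2 / 1.0
auc_corrected = norm.cdf(norm.ppf(auc_obs) * np.sqrt(1.0 + lam))

print(f"true AUC ~ {auc_true:.3f}, error-prone AUC ~ {auc_obs:.3f}, "
      f"corrected ~ {auc_corrected:.3f}")
```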
Bao, Yi; Chen, Yizheng; Hoehler, Matthew S; Smith, Christopher M; Bundy, Matthew; Chen, Genda
2017-01-01
This paper presents high temperature measurements using a Brillouin scattering-based fiber optic sensor and the application of the measured temperatures and building-code-recommended material parameters to enhanced thermomechanical analysis of simply supported steel beams subjected to combined thermal and mechanical loading. The distributed temperature sensor captures detailed, nonuniform temperature distributions that are compared locally with thermocouple measurements with less than 4.7% average difference at the 95% confidence level. The simulated strains and deflections are validated using measurements from a second distributed fiber optic (strain) sensor and two linear potentiometers, respectively. The results demonstrate that the temperature-dependent material properties specified in the four investigated building codes lead to strain predictions with less than 13% average error at the 95% confidence level and that the European building code provided the best predictions. However, the European code's implicit consideration of creep is insufficient when the beam temperature exceeds 800°C.
NASA Technical Reports Server (NTRS)
Gupta, Pramod; Schumann, Johann
2004-01-01
High reliability of mission- and safety-critical software systems has been identified by NASA as a high-priority technology challenge. We present an approach for the performance analysis of a neural network (NN) in an advanced adaptive control system. This problem is important in the context of safety-critical applications that require certification, such as flight software in aircraft. We have developed a tool to measure the performance of the NN during operation by calculating a confidence interval (error bar) around the NN's output. Our tool can be used during pre-deployment verification as well as monitoring the network performance during operation. The tool has been implemented in Simulink and simulation results on a F-15 aircraft are presented.
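As a rough illustration of putting an error bar around a neural network's output, the sketch below trains a small bootstrap ensemble and reports the spread of its predictions. This is a generic technique shown under stated assumptions, not the validated Simulink tool described above (which computes confidence intervals for the adaptive NN analytically); the network size, data, and 1.96 multiplier are arbitrary choices.

```python
# Empirical error bar for an NN prediction via a bootstrap ensemble (illustrative).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
X = rng.uniform(-1, 1, size=(300, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=300)

ensemble = []
for seed in range(10):
    idx = rng.integers(0, len(X), size=len(X))        # bootstrap resample
    net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=seed)
    ensemble.append(net.fit(X[idx], y[idx]))

X_new = np.array([[0.25]])
preds = np.array([net.predict(X_new)[0] for net in ensemble])
mean, half_width = preds.mean(), 1.96 * preds.std(ddof=1)
print(f"prediction = {mean:.3f} +/- {half_width:.3f} (approx. 95% error bar)")
```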
Structure and kinematics of the broad-line regions in active galaxies from IUE variability data
NASA Technical Reports Server (NTRS)
Koratkar, Anuradha P.; Gaskell, C. Martin
1991-01-01
IUE archival data are used here to investigate the structure and kinematics of the broad-line regions (BLRs) in nine AGN. It is found that the centroid of the line-continuum cross-correlation functions (CCFs) can be determined with reasonable reliability. The errors in BLR size estimates from CCFs for irregularly sampled light curves are fairly well understood. BLRs are found to have small luminosity-weighted radii, and lines of high ionization tend to be emitted closer to the central source than lines of low ionization, especially for low-luminosity objects. The motion of the gas is gravity-dominated with both pure inflow and pure outflow of high-velocity gas being excluded at a high confidence level for certain geometries.
Pediatric Anesthesiology Fellows' Perception of Quality of Attending Supervision and Medical Errors.
Benzon, Hubert A; Hajduk, John; De Oliveira, Gildasio; Suresh, Santhanam; Nizamuddin, Sarah L; McCarthy, Robert; Jagannathan, Narasimhan
2018-02-01
Appropriate supervision has been shown to reduce medical errors in anesthesiology residents and other trainees across various specialties. Nonetheless, supervision of pediatric anesthesiology fellows has yet to be evaluated. The main objective of this survey investigation was to evaluate supervision of pediatric anesthesiology fellows in the United States. We hypothesized that there was an indirect association between perceived quality of faculty supervision of pediatric anesthesiology fellow trainees and the frequency of medical errors reported. A survey of pediatric fellows from 53 pediatric anesthesiology fellowship programs in the United States was performed. The primary outcome was the frequency of self-reported errors by fellows, and the primary independent variable was supervision scores. Questions also assessed barriers for effective faculty supervision. One hundred seventy-six pediatric anesthesiology fellows were invited to participate, and 104 (59%) responded to the survey. Nine of 103 (9%, 95% confidence interval [CI], 4%-16%) respondents reported performing procedures, on >1 occasion, for which they were not properly trained. Thirteen of 101 (13%, 95% CI, 7%-21%) reported making >1 mistake with negative consequence to patients, and 23 of 104 (22%, 95% CI, 15%-31%) reported >1 medication error in the last year. There were no differences in median (interquartile range) supervision scores between fellows who reported >1 medication error compared to those reporting ≤1 error (3.4 [3.0-3.7] vs 3.4 [3.1-3.7]; median difference, 0; 99% CI, -0.3 to 0.3; P = .96). Similarly, there were no differences in those who reported >1 mistake with negative patient consequences, 3.3 (3.0-3.7), compared with those who did not report mistakes with negative patient consequences (3.4 [3.3-3.7]; median difference, 0.1; 99% CI, -0.2 to 0.6; P = .35). We detected a high rate of self-reported medication errors in pediatric anesthesiology fellows in the United States. Interestingly, fellows' perception of quality of faculty supervision was not associated with the frequency of reported errors. The current results with a narrow CI suggest the need to evaluate other potential factors that can be associated with the high frequency of reported errors by pediatric fellows (eg, fatigue, burnout). The identification of factors that lead to medical errors by pediatric anesthesiology fellows should be a main research priority to improve both trainee education and best practices of pediatric anesthesia.
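A small sketch reproducing the style of interval reported above, a 95% confidence interval for a proportion of fellows reporting errors (23 of 104). The abstract does not state which interval method was used; a Wilson score interval is one standard choice and happens to reproduce the reported 15%-31% range.

```python
# Wilson score confidence interval for a binomial proportion.
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(23, 104)
print(f"23/104 = {23/104:.1%}, 95% CI {lo:.1%} to {hi:.1%}")  # ~15% to 31%
```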
NASA Technical Reports Server (NTRS)
Fields, J. M.
1980-01-01
The data from seven surveys of community response to environmental noise are reanalyzed to assess the relative influence of peak noise levels and the numbers of noise events on human response. The surveys do not agree on the value of the tradeoff between the effects of noise level and numbers of events. The value of the tradeoff cannot be confidently specified in any survey because the tradeoff estimate may have a large standard error of estimate and because the tradeoff estimate may be seriously biased by unknown noise measurement errors. Some evidence suggests a decrease in annoyance with very high numbers of noise events but this evidence is not strong enough to lead to the rejection of the conventionally accepted assumption that annoyance is related to a log transformation of the number of noise events.
Adaptive control in the presence of unmodeled dynamics. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Rohrs, C. E.
1982-01-01
Stability and robustness properties of a wide class of adaptive control algorithms in the presence of unmodeled dynamics and output disturbances were investigated. The class of adaptive algorithms considered are those commonly referred to as model reference adaptive control algorithms, self-tuning controllers, and dead beat adaptive controllers, developed for both continuous-time systems and discrete-time systems. A unified analytical approach was developed to examine the class of existing adaptive algorithms. It was discovered that all existing algorithms contain an infinite gain operator in the dynamic system that defines command reference errors and parameter errors; it is argued that such an infinite gain operator appears to be generic to all adaptive algorithms, whether they exhibit explicit or implicit parameter identification. It is concluded that none of the adaptive algorithms considered can be used with confidence in a practical control system design, because instability will set in with a high probability.
Approach to the Pediatric Prescription in a Community Pharmacy
Benavides, Sandra; Huynh, Donna; Morgan, Jill; Briars, Leslie
2011-01-01
Pediatric patients are more susceptible to medication errors for a variety of reasons including physical and social differences and the necessity for patient-specific dosing. As such, community pharmacists may feel uncomfortable in verifying or dispensing a prescription for a pediatric patient. However, the use of a systematic approach to the pediatric prescription can provide confidence to pharmacists and minimize the possibility of a medication error. The objective of this article is to provide the community pharmacist with an overview of the potential areas of medication errors in a prescription for a pediatric patient. Additionally, the article guides the community pharmacist through a pediatric prescription, highlighting common areas of medication errors. PMID:22768015
A computer program for uncertainty analysis integrating regression and Bayesian methods
Lu, Dan; Ye, Ming; Hill, Mary C.; Poeter, Eileen P.; Curtis, Gary
2014-01-01
This work develops a new functionality in UCODE_2014 to evaluate Bayesian credible intervals using the Markov Chain Monte Carlo (MCMC) method. The MCMC capability in UCODE_2014 is based on the FORTRAN version of the differential evolution adaptive Metropolis (DREAM) algorithm of Vrugt et al. (2009), which estimates the posterior probability density function of model parameters in high-dimensional and multimodal sampling problems. The UCODE MCMC capability provides eleven prior probability distributions and three ways to initialize the sampling process. It evaluates parametric and predictive uncertainties and it has parallel computing capability based on multiple chains to accelerate the sampling process. This paper tests and demonstrates the MCMC capability using a 10-dimensional multimodal mathematical function, a 100-dimensional Gaussian function, and a groundwater reactive transport model. The use of the MCMC capability is made straightforward and flexible by adopting the JUPITER API protocol. With the new MCMC capability, UCODE_2014 can be used to calculate three types of uncertainty intervals, which all can account for prior information: (1) linear confidence intervals which require linearity and Gaussian error assumptions and typically 10s–100s of highly parallelizable model runs after optimization, (2) nonlinear confidence intervals which require a smooth objective function surface and Gaussian observation error assumptions and typically 100s–1,000s of partially parallelizable model runs after optimization, and (3) MCMC Bayesian credible intervals which require few assumptions and commonly 10,000s–100,000s or more partially parallelizable model runs. Ready access allows users to select methods best suited to their work, and to compare methods in many circumstances.
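A toy illustration of the third type of interval listed above, an MCMC-derived Bayesian credible interval. A one-parameter random-walk Metropolis sampler is used purely to show the mechanics; it is not DREAM or UCODE_2014, and the model (Gaussian data with known variance, flat prior) is assumed for the example.

```python
# Random-walk Metropolis sampler and a 95% credible interval (illustrative).
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(loc=2.0, scale=1.0, size=30)     # observations, known sigma = 1

def log_posterior(mu):
    # Flat prior on mu; Gaussian likelihood with unit variance.
    return -0.5 * np.sum((data - mu) ** 2)

samples = np.empty(20000)
mu = 0.0
for i in range(samples.size):
    proposal = mu + rng.normal(scale=0.5)
    if np.log(rng.random()) < log_posterior(proposal) - log_posterior(mu):
        mu = proposal
    samples[i] = mu

posterior = samples[5000:]                          # discard burn-in
lo, hi = np.percentile(posterior, [2.5, 97.5])
print(f"posterior mean {posterior.mean():.2f}, "
      f"95% credible interval [{lo:.2f}, {hi:.2f}]")
```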
GUM Analysis for TIMS and SIMS Isotopic Ratios in Graphite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heasler, Patrick G.; Gerlach, David C.; Cliff, John B.
2007-04-01
This report describes GUM calculations for TIMS and SIMS isotopic ratio measurements of reactor graphite samples. These isotopic ratios are used to estimate reactor burn-up, and currently consist of various ratios of U, Pu, and Boron impurities in the graphite samples. The GUM calculation is a propagation of error methodology that assigns uncertainties (in the form of standard error and confidence bound) to the final estimates.
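A minimal sketch of a GUM-style propagation of uncertainty for an isotopic ratio R = A/B with independent standard uncertainties. The numbers are made up for illustration; the actual report combines many more uncertainty components and any correlation terms.

```python
# First-order (GUM) uncertainty propagation for a quotient R = A / B.
from math import sqrt

A, u_A = 1.250e-3, 0.8e-5        # isotope A signal and its standard uncertainty
B, u_B = 4.730e-2, 2.1e-4        # isotope B signal and its standard uncertainty

R = A / B
# For independent inputs: (u_R/R)^2 = (u_A/A)^2 + (u_B/B)^2.
u_R = R * sqrt((u_A / A) ** 2 + (u_B / B) ** 2)

k = 2                             # coverage factor for ~95% confidence
print(f"R = {R:.5e} +/- {k * u_R:.1e} (k={k})")
```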
ERIC Educational Resources Information Center
Christ, Theodore J.
2006-01-01
Curriculum-based measurement of oral reading fluency (CBM-R) is an established procedure used to index the level and trend of student growth. A substantial literature base exists regarding best practices in the administration and interpretation of CBM-R; however, research has yet to adequately address the potential influence of measurement error.…
Kellman, Philip J; Mnookin, Jennifer L; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E
2014-01-01
Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and subjective assessment of difficulty in fingerprint comparisons.
Self-evaluation of decision-making: A general Bayesian framework for metacognitive computation.
Fleming, Stephen M; Daw, Nathaniel D
2017-01-01
People are often aware of their mistakes, and report levels of confidence in their choices that correlate with objective performance. These metacognitive assessments of decision quality are important for the guidance of behavior, particularly when external feedback is absent or sporadic. However, a computational framework that accounts for both confidence and error detection is lacking. In addition, accounts of dissociations between performance and metacognition have often relied on ad hoc assumptions, precluding a unified account of intact and impaired self-evaluation. Here we present a general Bayesian framework in which self-evaluation is cast as a "second-order" inference on a coupled but distinct decision system, computationally equivalent to inferring the performance of another actor. Second-order computation may ensue whenever there is a separation between internal states supporting decisions and confidence estimates over space and/or time. We contrast second-order computation against simpler first-order models in which the same internal state supports both decisions and confidence estimates. Through simulations we show that second-order computation provides a unified account of different types of self-evaluation often considered in separate literatures, such as confidence and error detection, and generates novel predictions about the contribution of one's own actions to metacognitive judgments. In addition, the model provides insight into why subjects' metacognition may sometimes be better or worse than task performance. We suggest that second-order computation may underpin self-evaluative judgments across a range of domains. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Self-Evaluation of Decision-Making: A General Bayesian Framework for Metacognitive Computation
2017-01-01
People are often aware of their mistakes, and report levels of confidence in their choices that correlate with objective performance. These metacognitive assessments of decision quality are important for the guidance of behavior, particularly when external feedback is absent or sporadic. However, a computational framework that accounts for both confidence and error detection is lacking. In addition, accounts of dissociations between performance and metacognition have often relied on ad hoc assumptions, precluding a unified account of intact and impaired self-evaluation. Here we present a general Bayesian framework in which self-evaluation is cast as a “second-order” inference on a coupled but distinct decision system, computationally equivalent to inferring the performance of another actor. Second-order computation may ensue whenever there is a separation between internal states supporting decisions and confidence estimates over space and/or time. We contrast second-order computation against simpler first-order models in which the same internal state supports both decisions and confidence estimates. Through simulations we show that second-order computation provides a unified account of different types of self-evaluation often considered in separate literatures, such as confidence and error detection, and generates novel predictions about the contribution of one’s own actions to metacognitive judgments. In addition, the model provides insight into why subjects’ metacognition may sometimes be better or worse than task performance. We suggest that second-order computation may underpin self-evaluative judgments across a range of domains. PMID:28004960
Error Analysis of Wind Measurements for the University of Illinois Sodium Doppler Temperature System
NASA Technical Reports Server (NTRS)
Pfenninger, W. Matthew; Papen, George C.
1992-01-01
Four-frequency lidar measurements of temperature and wind velocity require accurate frequency tuning to an absolute reference and long-term frequency stability. We quantify frequency tuning errors for the Illinois sodium system, which uses a sodium vapor cell to measure absolute frequencies and a reference interferometer to measure relative frequencies. To determine laser tuning errors, we monitor the vapor cell and interferometer during lidar data acquisition and analyze the two signals for variations as functions of time. Both sodium cell and interferometer are the same as those used to frequency tune the laser. By quantifying the frequency variations of the laser during data acquisition, an error analysis of the temperature and wind measurements can be performed. These error bounds determine the confidence in the calculated temperatures and wind velocities.
Pageler, Natalie M; Grazier G'Sell, Max Jacob; Chandler, Warren; Mailes, Emily; Yang, Christine; Longhurst, Christopher A
2016-09-01
The objective of this project was to use statistical techniques to determine the completeness and accuracy of data migrated during electronic health record conversion. Data validation during migration consists of mapped record testing and validation of a sample of the data for completeness and accuracy. We statistically determined a randomized sample size for each data type based on the desired confidence level and error limits. The only error identified in the post go-live period was a failure to migrate some clinical notes, which was unrelated to the validation process. No errors in the migrated data were found during the 12-month post-implementation period. Compared to the typical industry approach, we have demonstrated that a statistical approach to sampling size for data validation can ensure consistent confidence levels while maximizing efficiency of the validation process during a major electronic health record conversion. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
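A sketch of the kind of sample-size calculation described: how many migrated records of a data type to audit for a target confidence level and error limit, with a finite-population correction. The formula choice and the example numbers are ours, not the paper's.

```python
# Audit sample size for a target confidence level and margin of error.
from math import ceil

def validation_sample_size(population, confidence_z=1.96, error_limit=0.05, p=0.5):
    """Records to sample so the observed error rate is within error_limit."""
    n0 = confidence_z**2 * p * (1 - p) / error_limit**2        # infinite population
    return ceil(n0 / (1 + (n0 - 1) / population))               # finite correction

# Example: 50,000 migrated medication orders, 95% confidence, +/-3% error limit.
print(validation_sample_size(50_000, error_limit=0.03))   # about 1,045 records
```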
Annoyance to Noise Produced by a Distributed Electric Propulsion High-Lift System
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Palumbo, Daniel L.; Rathsam, Jonathan; Christian, Andrew; Rafaelof, Menachem
2017-01-01
A psychoacoustic test was performed using simulated sounds from a distributed electric propulsion aircraft concept to help understand factors associated with human annoyance. A design space spanning the number of high-lift leading edge propellers and their relative operating speeds, inclusive of time varying effects associated with motor controller error and atmospheric turbulence, was considered. It was found that the mean annoyance response varies in a statistically significant manner with the number of propellers and with the inclusion of time varying effects, but does not differ significantly with the relative RPM between propellers. An annoyance model was developed, inclusive of confidence intervals, using the noise metrics of loudness, roughness, and tonality as predictors.
Refractive errors in a rural Korean adult population: the Namil Study
Yoo, Y C; Kim, J M; Park, K H; Kim, C Y; Kim, T-W
2013-01-01
Purpose To assess the prevalence of refractive errors, including myopia, high myopia, hyperopia, astigmatism, and anisometropia, in rural adult Koreans. Methods We identified 2027 residents aged 40 years or older in Namil-myeon, a rural town in central South Korea. Of 1928 eligible residents, 1532 subjects (79.5%) participated. Each subject underwent screening examinations including autorefractometry, corneal curvature measurement, and best-corrected visual acuity. Results Data from 1215 phakic right eyes were analyzed. The prevalence of myopia (spherical equivalent (SE) <−0.5 diopters (D)) was 20.5% (95% confidence interval (CI): 18.2−22.8%), of high myopia (SE <−6.0 D) was 1.0% (95% CI: 0.4−1.5%), of hyperopia (SE>+0.5 D) was 41.8% (95% CI: 38.9−44.4%), of astigmatism (cylinder <−0.5 D) was 63.7% (95% CI: 61.0−66.4%), and of anisometropia (difference in SE between eyes >1.0 D) was 13.8% (95% CI: 11.9−15.8%). Myopia prevalence decreased with age and tended to transition into hyperopia with age up to 60−69 years. In subjects older than this, the trend in SE refractive errors reversed with age. The prevalence of astigmatism and anisometropia increased consistently with age. The refractive status was not significantly different between males and females. Conclusions The prevalence of myopia and hyperopia in rural adult Koreans was similar to that of rural Chinese. The prevalence of high myopia was lower in this Korean sample than in other East Asian populations, and astigmatism was the most frequently occurring refractive error. PMID:24037232
The Third EGRET Catalog of High-Energy Gamma-Ray Sources
NASA Technical Reports Server (NTRS)
Hartman, R. C.; Bertsch, D. L.; Bloom, S. D.; Chen, A. W.; Deines-Jones, P.; Esposito, J. A.; Fichtel, C. E.; Friedlander, D. P.; Hunter, S. D.; McDonald, L. M.;
1998-01-01
The third catalog of high-energy gamma-ray sources detected by the EGRET telescope on the Compton Gamma Ray Observatory includes data from 1991 April 22 to 1995 October 3 (Cycles 1, 2, 3, and 4 of the mission). In addition to including more data than the second EGRET catalog and its supplement, this catalog uses completely reprocessed data (to correct a number of mostly minimal errors and problems). The 271 sources (E greater than 100 MeV) in the catalog include the single 1991 solar flare bright enough to be detected as a source, the Large Magellanic Cloud, five pulsars, one probable radio galaxy detection (Cen A), and 66 high-confidence identifications of blazars (BL Lac objects, flat-spectrum radio quasars, or unidentified flat-spectrum radio sources). In addition, 27 lower-confidence potential blazar identifications are noted. Finally, the catalog contains 170 sources not yet identified firmly with known objects, although potential identifications have been suggested for a number of those. A figure is presented that gives approximate upper limits for gamma-ray sources at any point in the sky, as well as information about sources listed in the second catalog and its supplement which do not appear in this catalog.
Scherer, Laura D; Yates, J Frank; Baker, S Glenn; Valentine, Kathrene D
2017-06-01
Human judgment often violates normative standards, and virtually no judgment error has received as much attention as the conjunction fallacy. Judgment errors have historically served as evidence for dual-process theories of reasoning, insofar as these errors are assumed to arise from reliance on a fast and intuitive mental process, and are corrected via effortful deliberative reasoning. In the present research, three experiments tested the notion that conjunction errors are reduced by effortful thought. Predictions based on three different dual-process theory perspectives were tested: lax monitoring, override failure, and the Tripartite Model. Results indicated that participants higher in numeracy were less likely to make conjunction errors, but this association only emerged when participants engaged in two-sided reasoning, as opposed to one-sided or no reasoning. Confidence was higher for incorrect as opposed to correct judgments, suggesting that participants were unaware of their errors.
NASA Technical Reports Server (NTRS)
Levy, G.; Brown, R. A.
1986-01-01
A simple economical objective analysis scheme is devised and tested on real scatterometer data. It is designed to treat dense data such as those of the Seasat A Satellite Scatterometer (SASS) for individual or multiple passes, and preserves subsynoptic scale features. Errors are evaluated with the aid of sampling ('bootstrap') statistical methods. In addition, sensitivity tests have been performed which establish qualitative confidence in calculated fields of divergence and vorticity. The SASS wind algorithm could be improved; however, the data at this point are limited by instrument errors rather than analysis errors. The analysis error is typically negligible in comparison with the instrument error, but amounts to 30 percent of the instrument error in areas of strong wind shear. The scheme is very economical, and thus suitable for large volumes of dense data such as SASS data.
Single Platform Geolocation of Radio Frequency Emitters
2015-03-26
[Report front matter and excerpts; text garbled in extraction] Acronyms: SNR (Signal to Noise Ratio), SOI (Signal of Interest), STK (Systems Tool Kit), UCA (Uniform Circular Array), WGS (World Geodetic System). Section 2.1 covers coordinate systems and reference frames, and Section 2.6 describes a method to visualize the confidence of estimated parameters; the confidence surface is visualized using the method of Section 2.6, and the NLO method is shown to be a minimization [text truncated].
Absolute vs. relative error characterization of electromagnetic tracking accuracy
NASA Astrophysics Data System (ADS)
Matinfar, Mohammad; Narayanasamy, Ganesh; Gutierrez, Luis; Chan, Raymond; Jain, Ameet
2010-02-01
Electromagnetic (EM) tracking systems are often used for real time navigation of medical tools in an Image Guided Therapy (IGT) system. They are specifically advantageous when the medical device requires tracking within the body of a patient where line of sight constraints prevent the use of conventional optical tracking. EM tracking systems are however very sensitive to electromagnetic field distortions. These distortions, arising from changes in the electromagnetic environment due to the presence of conductive ferromagnetic surgical tools or other medical equipment, limit the accuracy of EM tracking, in some cases potentially rendering tracking data unusable. We present a mapping method for the operating region over which EM tracking sensors are used, allowing for characterization of measurement errors, in turn providing physicians with visual feedback about measurement confidence or reliability of localization estimates. In this instance, we employ a calibration phantom to assess distortion within the operating field of the EM tracker and to display in real time the distribution of measurement errors, as well as the location and extent of the field associated with minimal spatial distortion. The accuracy is assessed relative to successive measurements. Error is computed for a reference point and consecutive measurement errors are displayed relative to the reference in order to characterize the accuracy in near-real-time. In an initial set-up phase, the phantom geometry is calibrated by registering the data from a multitude of EM sensors in a non-ferromagnetic ("clean") EM environment. The registration results in the locations of sensors with respect to each other and defines the geometry of the sensors in the phantom. In a measurement phase, the position and orientation data from all sensors are compared with the known geometry of the sensor spacing, and localization errors (displacement and orientation) are computed. Based on error thresholds provided by the operator, the spatial distribution of localization errors are clustered and dynamically displayed as separate confidence zones within the operating region of the EM tracker space.
Confidence Sharing: An Economic Strategy for Efficient Information Flows in Animal Groups
Korman, Amos; Greenwald, Efrat; Feinerman, Ofer
2014-01-01
Social animals may share information to obtain a more complete and accurate picture of their surroundings. However, physical constraints on communication limit the flow of information between interacting individuals in a way that can cause an accumulation of errors and deteriorated collective behaviors. Here, we theoretically study a general model of information sharing within animal groups. We take an algorithmic perspective to identify efficient communication schemes that are, nevertheless, economic in terms of communication, memory and individual internal computation. We present a simple and natural algorithm in which each agent compresses all information it has gathered into a single parameter that represents its confidence in its behavior. Confidence is communicated between agents by means of active signaling. We motivate this model by novel and existing empirical evidences for confidence sharing in animal groups. We rigorously show that this algorithm competes extremely well with the best possible algorithm that operates without any computational constraints. We also show that this algorithm is minimal, in the sense that further reduction in communication may significantly reduce performances. Our proofs rely on the Cramér-Rao bound and on our definition of a Fisher Channel Capacity. We use these concepts to quantify information flows within the group which are then used to obtain lower bounds on collective performance. The abstract nature of our model makes it rigorously solvable and its conclusions highly general. Indeed, our results suggest confidence sharing as a central notion in the context of animal communication. PMID:25275649
Doubková, Marcela; Van Dijk, Albert I.J.M.; Sabel, Daniel; Wagner, Wolfgang; Blöschl, Günter
2012-01-01
The Sentinel-1 will carry onboard a C-band radar instrument that will map the European continent once every four days and the global land surface at least once every twelve days with finest 5 × 20 m spatial resolution. The high temporal sampling rate and operational configuration make Sentinel-1 of interest for operational soil moisture monitoring. Currently, updated soil moisture data are made available at 1 km spatial resolution as a demonstration service using Global Mode (GM) measurements from the Advanced Synthetic Aperture Radar (ASAR) onboard ENVISAT. The service demonstrates the potential of the C-band observations to monitor variations in soil moisture. Importantly, a retrieval error estimate is also available; these are needed to assimilate observations into models. The retrieval error is estimated by propagating sensor errors through the retrieval model. In this work, the existing ASAR GM retrieval error product is evaluated using independent top soil moisture estimates produced by the grid-based landscape hydrological model (AWRA-L) developed within the Australian Water Resources Assessment system (AWRA). The ASAR GM retrieval error estimate, an assumed prior AWRA-L error estimate and the variance in the respective datasets were used to spatially predict the root mean square error (RMSE) and the Pearson's correlation coefficient R between the two datasets. These were compared with the RMSE calculated directly from the two datasets. The predicted and computed RMSE showed a very high level of agreement in spatial patterns as well as good quantitative agreement; the RMSE was predicted within accuracy of 4% of saturated soil moisture over 89% of the Australian land mass. Predicted and calculated R maps corresponded within accuracy of 10% over 61% of the continent. The strong correspondence between the predicted and calculated RMSE and R builds confidence in the retrieval error model and derived ASAR GM error estimates. The ASAR GM and Sentinel-1 have the same basic physical measurement characteristics, and therefore very similar retrieval error estimation method can be applied. Because of the expected improvements in radiometric resolution of the Sentinel-1 backscatter measurements, soil moisture estimation errors can be expected to be an order of magnitude less than those for ASAR GM. This opens the possibility for operationally available medium resolution soil moisture estimates with very well-specified errors that can be assimilated into hydrological or crop yield models, with potentially large benefits for land-atmosphere fluxes, crop growth, and water balance monitoring and modelling. PMID:23483015
TH-AB-202-04: Auto-Adaptive Margin Generation for MLC-Tracked Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glitzner, M; Lagendijk, J; Raaymakers, B
Purpose: To develop an auto-adaptive margin generator for MLC tracking. The generator is able to estimate errors arising in image guided radiotherapy, particularly on an MR-Linac, which depend on the latencies of machine and image processing, as well as on patient motion characteristics. From the estimated error distribution, a segment margin is generated, able to compensate for errors up to a user-defined confidence. Method: In every tracking control cycle (TCC, 40ms), the desired aperture D(t) is compared to the actual aperture A(t), a delayed and imperfect representation of D(t). Thus an error e(t)=A(t)-D(t) is measured every TCC. Applying kernel-density-estimation (KDE), the cumulative distribution (CDF) of e(t) is estimated. With CDF-confidence limits, upper and lower error limits are extracted for motion axes along and perpendicular to the leaf-travel direction and applied as margins. To test the dosimetric impact, two representative motion traces were extracted from fast liver-MRI (10Hz). The traces were applied onto a 4D-motion platform and continuously tracked by an Elekta Agility 160 MLC using an artificially imposed tracking delay. Gafchromic film was used to detect dose exposition for static, tracked, and error-compensated tracking cases. The margin generator was parameterized to cover 90% of all tracking errors. Dosimetric impact was rated by calculating the ratio of underexposed points (>5% underdosage) to the total number of points inside the FWHM of the static exposure. Results: Without imposing adaptive margins, tracking experiments showed a ratio of underexposed points of 17.5% and 14.3% for two motion cases with imaging delays of 200ms and 300ms, respectively. Activating the margin generator yielded total suppression (<1%) of underdosed points. Conclusion: We showed that auto-adaptive error compensation using machine error statistics is possible for MLC tracking. The error compensation margins are calculated on-line, without the need to assume motion or machine models. Further strategies to reduce consequential overdosages are currently under investigation. This work was funded by the SoRTS consortium, which includes the industry partners Elekta, Philips and Technolution.
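A sketch of the margin-generation idea in the abstract: collect per-cycle tracking errors e(t) = A(t) - D(t), estimate their distribution with a kernel density estimate, and read off the limits that cover a user-defined confidence (90% here). Simulated errors stand in for real MLC tracking logs, and the Gaussian-KDE/grid details are our assumptions, not the authors' implementation.

```python
# KDE-based margins covering a chosen fraction of observed tracking errors.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(8)
errors_mm = rng.normal(loc=0.3, scale=1.2, size=2000)   # leaf-travel-direction errors

kde = gaussian_kde(errors_mm)
grid = np.linspace(errors_mm.min() - 3, errors_mm.max() + 3, 2000)
pdf = kde(grid)
cdf = np.cumsum(pdf)
cdf /= cdf[-1]

confidence = 0.90
lower = grid[np.searchsorted(cdf, (1 - confidence) / 2)]
upper = grid[np.searchsorted(cdf, 1 - (1 - confidence) / 2)]
print(f"margins covering {confidence:.0%} of tracking errors: "
      f"{lower:.2f} mm to {upper:.2f} mm")
```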
GUM Analysis for SIMS Isotopic Ratios in BEP0 Graphite Qualification Samples, Round 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerlach, David C.; Heasler, Patrick G.; Reid, Bruce D.
2009-01-01
This report describes GUM calculations for TIMS and SIMS isotopic ratio measurements of reactor graphite samples. These isotopic ratios are used to estimate reactor burn-up, and currently consist of various ratios of U, Pu, and Boron impurities in the graphite samples. The GUM calculation is a propagation of error methodology that assigns uncertainties (in the form of standard error and confidence bound) to the final estimates.
Degrees of Freedom for Allan Deviation Estimates of Multiple Clocks
2016-04-01
[Report excerpt; text garbled in extraction] Allan deviation will be represented by σ and standard deviation by δ. In practice, when the Allan deviation of a [...] the Allan deviation of standard noise types. Once the number of degrees of freedom is known, an approximate confidence interval can be assigned by [...] measurement errors from paired difference data. We extend this approach by using the Allan deviation to estimate the error in a frequency standard.
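A sketch of a non-overlapping Allan deviation computed from fractional-frequency samples, the statistic whose degrees of freedom and confidence intervals the report concerns. The data are simulated white frequency noise; this is not the report's multi-clock method.

```python
# Non-overlapping Allan deviation of fractional-frequency data (illustrative).
import numpy as np

def allan_deviation(y, m):
    """Allan deviation at averaging factor m for fractional-frequency samples y."""
    n_blocks = len(y) // m
    block_means = y[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
    diffs = np.diff(block_means)
    return np.sqrt(0.5 * np.mean(diffs ** 2))

rng = np.random.default_rng(9)
y = rng.normal(scale=1e-12, size=100_000)        # white frequency noise

for m in (1, 10, 100, 1000):
    print(f"tau = {m} samples: sigma_y = {allan_deviation(y, m):.2e}")
```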
Accuracy assessment of the global TanDEM-X Digital Elevation Model with GPS data
NASA Astrophysics Data System (ADS)
Wessel, Birgit; Huber, Martin; Wohlfart, Christian; Marschalk, Ursula; Kosmann, Detlev; Roth, Achim
2018-05-01
The primary goal of the German TanDEM-X mission is the generation of a highly accurate and global Digital Elevation Model (DEM) with global accuracies of at least 10 m absolute height error (linear 90% error). The global TanDEM-X DEM acquired with single-pass SAR interferometry was finished in September 2016. This paper provides a unique accuracy assessment of the final TanDEM-X global DEM using two different GPS point reference data sets, which are distributed across all continents, to fully characterize the absolute height error. Firstly, the absolute vertical accuracy is examined by about three million globally distributed kinematic GPS (KGPS) points derived from 19 KGPS tracks covering a total length of about 66,000 km. Secondly, a comparison is performed with more than 23,000 "GPS on Bench Marks" (GPS-on-BM) points provided by the US National Geodetic Survey (NGS) scattered across 14 different land cover types of the US National Land Cover Database (NLCD). Both GPS comparisons prove an absolute vertical mean error of the TanDEM-X DEM smaller than ±0.20 m, a Root Mean Square Error (RMSE) smaller than 1.4 m and an excellent absolute 90% linear height error below 2 m. The RMSE values are sensitive to land cover types. For low vegetation the RMSE is ±1.1 m, whereas it is slightly higher for developed areas (±1.4 m) and for forests (±1.8 m). This validation confirms an outstanding absolute height error of the global TanDEM-X DEM at the 90% confidence level, outperforming the requirement by a factor of five. Due to its extensive and globally distributed reference data sets, this study is of considerable interest for scientific and commercial applications.
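A small sketch of the error statistics used in that validation: given DEM-minus-GPS height differences, compute the mean error, RMSE, and the 90% linear error (LE90, the 90th percentile of absolute differences). The differences here are synthetic, chosen only to illustrate the calculation.

```python
# Mean error, RMSE, and LE90 from height differences (synthetic example).
import numpy as np

rng = np.random.default_rng(10)
dh = rng.normal(loc=-0.1, scale=1.2, size=10_000)    # simulated height differences [m]

mean_error = dh.mean()
rmse = np.sqrt(np.mean(dh ** 2))
le90 = np.percentile(np.abs(dh), 90)                  # linear 90% absolute height error

print(f"mean error {mean_error:+.2f} m, RMSE {rmse:.2f} m, LE90 {le90:.2f} m")
```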
The prevalence of uncorrected refractive errors in underserved rural areas.
Hashemi, Hassan; Abbastabar, Hedayat; Yekta, Abbasali; Heydarian, Samira; Khabazkhoob, Mehdi
2017-12-01
To determine the prevalence of uncorrected refractive errors, need for spectacles, and the determinants of unmet need in underserved rural areas of Iran. In a cross-sectional study, multistage cluster sampling was done in 2 underserved rural areas of Iran. Then, all subjects underwent vision testing and ophthalmic examinations including the measurement of uncorrected visual acuity (UCVA), best corrected visual acuity, visual acuity with current spectacles, auto-refraction, retinoscopy, and subjective refraction. Need for spectacles was defined as UCVA worse than 20/40 in the better eye that could be corrected to better than 20/40 with suitable spectacles. Of the 3851 selected individuals, 3314 participated in the study. Among participants, 18.94% [95% confidence interval (CI): 13.48-24.39] needed spectacles and 11.23% (95% CI: 7.57-14.89) had an unmet need. The prevalence of need for spectacles was 46.8% and 23.8% in myopic and hyperopic participants, respectively. The prevalence of unmet need was 27% in myopic, 15.8% in hyperopic, and 25.46% in astigmatic participants. Multiple logistic regression showed that education and type of refractive error were associated with uncorrected refractive errors; the odds of uncorrected refractive errors were highest in illiterate participants, and the odds of unmet need were 12.13, 5.1, and 4.92 times higher in myopic, hyperopic, and astigmatic participants as compared with emmetropic individuals. The prevalence of uncorrected refractive errors was rather high in our study. Since rural areas have less access to health care facilities, special attention to the correction of refractive errors in these areas, especially with inexpensive methods like spectacles, can prevent a major proportion of visual impairment.
Nakano, Tadashi; Hayashi, Takeshi; Nakagawa, Toru; Honda, Toru; Owada, Satoshi; Endo, Hitoshi; Tatemichi, Masayuki
2018-04-05
This retrospective cohort study primarily aimed to investigate the possible association of computer use with visual field abnormalities (VFA) among Japanese workers. The study included 2,377 workers (mean age 45.7 [standard deviation, 8.3] years; 2,229 men and 148 women) who initially exhibited no VFA during frequency doubling technology perimetry (FDT) testing. Subjects then underwent annual follow-up FDT testing for 7 years, and VFA were determined using an FDT-test protocol (FDT-VFA). Subjects with FDT-VFA were examined by ophthalmologists. Baseline data about the mean duration of computer use during a 5-year period and refractive errors were obtained via self-administered questionnaire and evaluations for refractive errors (use of eyeglasses or contact lenses), respectively. A Cox proportional hazard analysis demonstrated that heavy computer users (>8 hr/day) had a significantly increased risk of FDT-VFA (hazard ratio [HR] 2.85; 95% confidence interval [CI], 1.26-6.48) relative to light users (<4 hr/day), and this association was strengthened among subjects with refractive errors (HR 4.48; 95% CI, 1.87-10.74). The computer usage history also significantly correlated with FDT-VFA among subjects with refractive errors (P < 0.05), and 73.1% of subjects with FDT-VFA and refractive errors were diagnosed with glaucoma or ocular hypertension. The incidence of FDT-VFA appears to be increased among Japanese workers who are heavy computer users, particularly if they have refractive errors. Further investigations of epidemiology and causality are warranted.
2013-01-01
Background: The production of multiple transcript isoforms from one gene is a major source of transcriptome complexity. RNA-Seq experiments, in which transcripts are converted to cDNA and sequenced, allow the resolution and quantification of alternative transcript isoforms. However, methods to analyze splicing are underdeveloped and errors resulting in incorrect splicing calls occur in every experiment. Results: We used RNA-Seq data to develop sequencing and aligner error models. By applying these error models to known input from simulations, we found that errors result from false alignment to minor splice motifs and antisense strands, shifted junction positions, paralog joining, and repeat-induced gaps. By using a series of quantitative and qualitative filters, we eliminated diagnosed errors in the simulation, and applied this to RNA-Seq data from Drosophila melanogaster heads. We used high-confidence junction detections to specifically interrogate local splicing differences between transcripts. This method outperformed commonly used RNA-Seq methods to identify known alternative splicing events in the Drosophila sex determination pathway. We describe a flexible software package to perform these tasks called Splicing Analysis Kit (Spanki), available at http://www.cbcb.umd.edu/software/spanki. Conclusions: Splice-junction centric analysis of RNA-Seq data provides advantages in specificity for detection of alternative splicing. Our software provides tools to better understand error profiles in RNA-Seq data and improve inference from this new technology. The splice-junction centric approach that this software enables will provide more accurate estimates of differentially regulated splicing than current tools. PMID:24209455
Sturgill, David; Malone, John H; Sun, Xia; Smith, Harold E; Rabinow, Leonard; Samson, Marie-Laure; Oliver, Brian
2013-11-09
The production of multiple transcript isoforms from one gene is a major source of transcriptome complexity. RNA-Seq experiments, in which transcripts are converted to cDNA and sequenced, allow the resolution and quantification of alternative transcript isoforms. However, methods to analyze splicing are underdeveloped and errors resulting in incorrect splicing calls occur in every experiment. We used RNA-Seq data to develop sequencing and aligner error models. By applying these error models to known input from simulations, we found that errors result from false alignment to minor splice motifs and antisense strands, shifted junction positions, paralog joining, and repeat-induced gaps. By using a series of quantitative and qualitative filters, we eliminated diagnosed errors in the simulation, and applied this to RNA-Seq data from Drosophila melanogaster heads. We used high-confidence junction detections to specifically interrogate local splicing differences between transcripts. This method outperformed commonly used RNA-Seq methods to identify known alternative splicing events in the Drosophila sex determination pathway. We describe a flexible software package to perform these tasks called Splicing Analysis Kit (Spanki), available at http://www.cbcb.umd.edu/software/spanki. Splice-junction centric analysis of RNA-Seq data provides advantages in specificity for detection of alternative splicing. Our software provides tools to better understand error profiles in RNA-Seq data and improve inference from this new technology. The splice-junction centric approach that this software enables will provide more accurate estimates of differentially regulated splicing than current tools.
Refractive Error in a Sample of Black High School Children in South Africa.
Wajuihian, Samuel Otabor; Hansraj, Rekha
2017-12-01
This study focused on a cohort that has not previously been studied and that currently has limited access to eye care services. The findings, while improving the understanding of the distribution of refractive errors, also enabled identification of children requiring intervention and provided a guide for future resource allocation. The aim of conducting the study was to determine the prevalence and distribution of refractive error and its association with gender, age, and school grade level. Using multistage random cluster sampling, 1586 children, 632 males (40%) and 954 females (60%), were selected. Their ages ranged between 13 and 18 years with a mean of 15.81 ± 1.56 years. The visual functions evaluated included visual acuity using the logarithm of minimum angle of resolution chart and refractive error measured using the autorefractor and then refined subjectively. The astigmatism axis was expressed using the vector method, where positive values of J0 indicated with-the-rule astigmatism, negative values indicated against-the-rule astigmatism, whereas J45 represented oblique astigmatism. Overall, patients were myopic, with a mean spherical power for the right eye of -0.02 ± 0.47 D; the mean astigmatic cylinder power was -0.09 ± 0.27 D, with mainly with-the-rule astigmatism (J0 = 0.01 ± 0.11). The prevalence estimates were as follows: myopia (at least -0.50 D) 7% (95% confidence interval [CI], 6 to 9%), hyperopia (at least 0.50 D) 5% (95% CI, 4 to 6%), astigmatism (at least -0.75 D cylinder) 3% (95% CI, 2 to 4%), and anisometropia 3% (95% CI, 2 to 4%). There was no significant association between refractive error and any of the categories (gender, age, and grade level). The prevalence of refractive error in the sample of high school children was relatively low. Myopia was the most prevalent, and findings on its association with age suggest that the prevalence of myopia may be stabilizing in the late teenage years.
Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)
NASA Technical Reports Server (NTRS)
Adler, Robert; Gu, Guojun; Huffman, George
2012-01-01
A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a different number of input products. For the globe the calculated relative error estimate from this study is about 9%, which is also probably a slight overestimate. These tropical and global estimated bias errors provide one estimate of the current state of knowledge of the planet's mean precipitation.
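A minimal sketch of the screening-and-spread idea described above, reduced to a single estimate: products within ±50% of the base value are retained, their standard deviation s is taken as the bias error, and s/m as the relative bias error; applying the screen per value rather than on a zonal-mean basis, and using the base estimate as m, are simplifying assumptions, and the numbers are illustrative.

```python
# Minimal sketch of the described bias-error procedure for one estimate:
# keep products within +/-50% of the base (GPCP) value, use their standard
# deviation s as the bias error and s/m as the relative bias error.
import numpy as np

def bias_error(base, products):
    products = np.asarray(products, dtype=float)
    keep = products[np.abs(products - base) <= 0.5 * base]   # screening step
    s = keep.std(ddof=1) if keep.size > 1 else np.nan        # estimated bias error
    m = base                                                 # mean precipitation (assumed)
    return s, s / m

# Example: monthly precipitation estimates (mm/day) from several products;
# the 9.0 value falls outside +/-50% of the base and is excluded.
print(bias_error(base=5.0, products=[4.2, 5.6, 5.1, 9.0, 4.8]))
```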
Experiences from the testing of a theory for modelling groundwater flow in heterogeneous media
Christensen, S.; Cooley, R.L.
2002-01-01
Usually, small-scale model error is present in groundwater modelling because the model only represents average system characteristics having the same form as the drift, and small-scale variability is neglected. These errors cause the true errors of a regression model to be correlated. Theory and an example show that the errors also contribute to bias in the estimates of model parameters. This bias originates from model nonlinearity. In spite of this bias, predictions of hydraulic head are nearly unbiased if the model intrinsic nonlinearity is small. Individual confidence and prediction intervals are accurate if the t-statistic is multiplied by a correction factor. The correction factor can be computed from the true error second moment matrix, which can be determined when the stochastic properties of the system characteristics are known.
Experience gained in testing a theory for modelling groundwater flow in heterogeneous media
Christensen, S.; Cooley, R.L.
2002-01-01
Usually, small-scale model error is present in groundwater modelling because the model only represents average system characteristics having the same form as the drift, and small-scale variability is neglected. These errors cause the true errors of a regression model to be correlated. Theory and an example show that the errors also contribute to bias in the estimates of model parameters. This bias originates from model nonlinearity. In spite of this bias, predictions of hydraulic head are nearly unbiased if the model intrinsic nonlinearity is small. Individual confidence and prediction intervals are accurate if the t-statistic is multiplied by a correction factor. The correction factor can be computed from the true error second moment matrix, which can be determined when the stochastic properties of the system characteristics are known.
Calibration of Contactless Pulse Oximetry
Bartula, Marek; Bresch, Erik; Rocque, Mukul; Meftah, Mohammed; Kirenko, Ihor
2017-01-01
BACKGROUND: Contactless, camera-based photoplethysmography (PPG) interrogates shallower skin layers than conventional contact probes, either transmissive or reflective. This raises questions on the calibratability of camera-based pulse oximetry. METHODS: We made video recordings of the foreheads of 41 healthy adults at 660 and 840 nm, and remote PPG signals were extracted. Subjects were in normoxic, hypoxic, and low temperature conditions. Ratio-of-ratios were compared to reference Spo2 from 4 contact probes. RESULTS: A calibration curve based on artifact-free data was determined for a population of 26 individuals. For an Spo2 range of approximately 83% to 100% and discarding short-term errors, a root mean square error of 1.15% was found with an upper 99% one-sided confidence limit of 1.65%. Under normoxic conditions, a decrease in ambient temperature from 23 to 7°C resulted in a calibration error of 0.1% (±1.3%, 99% confidence interval) based on measurements for 3 subjects. PPG signal strengths varied strongly among individuals from about 0.9 × 10−3 to 4.6 × 10−3 for the infrared wavelength. CONCLUSIONS: For healthy adults, the results present strong evidence that camera-based contactless pulse oximetry is fundamentally feasible because long-term (eg, 10 minutes) error stemming from variation among individuals expressed as A*rms is significantly lower (<1.65%) than that required by the International Organization for Standardization standard (<4%) with the notion that short-term errors should be added. A first illustration of such errors has been provided with A**rms = 2.54% for 40 individuals, including 6 with dark skin. Low signal strength and subject motion present critical challenges that will have to be addressed to make camera-based pulse oximetry practically feasible. PMID:27258081
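A hedged sketch of a generic ratio-of-ratios calibration of the kind referred to above, assuming AC and DC components of the PPG signal at the red and infrared wavelengths plus a reference SpO2 are available; the linear form SpO2 ≈ a - b·R and every numeric value are illustrative, not the calibration reported in the study.

```python
# Generic ratio-of-ratios calibration sketch. All values are illustrative.
import numpy as np

def ratio_of_ratios(ac_red, dc_red, ac_ir, dc_ir):
    """Ratio-of-ratios R from AC/DC components at the two wavelengths."""
    return (ac_red / dc_red) / (ac_ir / dc_ir)

def fit_calibration(R, spo2_ref):
    """Least-squares fit of SpO2 = a - b*R against a reference oximeter."""
    A = np.column_stack([np.ones_like(R), -R])
    (a, b), *_ = np.linalg.lstsq(A, spo2_ref, rcond=None)
    return a, b

# Synthetic example: per-epoch AC/DC values and a reference SpO2.
rng = np.random.default_rng(3)
dc_red, dc_ir = 1.0, 1.0
ac_ir = rng.uniform(0.9e-3, 4.6e-3, 200)                  # signal strengths as in the abstract
R_true = rng.uniform(0.4, 1.1, 200)
ac_red = R_true * ac_ir                                   # consistent red AC component
R = ratio_of_ratios(ac_red, dc_red, ac_ir, dc_ir)
spo2_ref = 110.0 - 25.0 * R + rng.normal(0, 0.8, R.size)  # illustrative "truth"
a, b = fit_calibration(R, spo2_ref)
rmse = np.sqrt(np.mean((a - b * R - spo2_ref) ** 2))
print(f"a={a:.1f}, b={b:.1f}, calibration RMSE={rmse:.2f}%")
```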
Neurofeedback training for peak performance.
Graczyk, Marek; Pąchalska, Maria; Ziółkowski, Artur; Mańko, Grzegorz; Łukaszewska, Beata; Kochanowicz, Kazimierz; Mirski, Andrzej; Kropotov, Iurii D
2014-01-01
One of the applications of the neurofeedback methodology is peak performance in sport. The protocols of neurofeedback are usually based on an assessment of the spectral parameters of spontaneous EEG in resting-state conditions. The aim of the paper was to study whether intensive neurofeedback training of a well-functioning Olympic athlete who had lost his performance confidence after a sports injury could change brain functioning, as reflected in changes in spontaneous EEG and event-related potentials (ERPs). The case is presented of an Olympic athlete who had lost his performance confidence after a sports injury. He wanted to resume his activities by means of neurofeedback training. His QEEG/ERP parameters were assessed before and after 4 intensive sessions of neurotherapy. Dramatic and statistically significant changes that could not be explained by measurement error were observed in the patient. Neurofeedback training in the subject under study increased the amplitude of the monitoring component of ERPs generated in the anterior cingulate cortex, accompanied by an increase in beta activity over the medial prefrontal cortex. Taking these changes together, it can be concluded that even a few sessions of neurofeedback in a high-performance brain can significantly activate the prefrontal cortical areas associated with increasing confidence in sport performance.
A high-resolution oxygen A-band spectrometer (HABS) and its radiation closure
NASA Astrophysics Data System (ADS)
Min, Q.; Yin, B.; Li, S.; Berndt, J.; Harrison, L.; Joseph, E.; Duan, M.; Kiedron, P.
2014-06-01
Various studies indicate that high-resolution oxygen A-band spectra have the capability to retrieve the vertical profiles of aerosol and cloud properties. To improve the understanding of oxygen A-band inversions and their utility, we developed a high-resolution oxygen A-band spectrometer (HABS) and deployed it at the Howard University Beltsville site during the NASA Discover Air-Quality Field Campaign in July 2011. By using a single telescope, the HABS instrument sequentially measures the direct solar and the zenith diffuse radiation. HABS exhibits excellent performance: a stable spectral response ratio, high signal-to-noise ratio (SNR), high spectral resolution (0.016 nm), and high out-of-band rejection (10⁻⁵). For the spectral retrievals of HABS measurements, a simulator was developed by combining a discrete ordinates radiative transfer code (DISORT) with the High Resolution Transmission (HITRAN) database HITRAN2008. The simulator uses a double-k approach to reduce the computational cost. The HABS-measured spectra are consistent with the related simulated spectra. For direct-beam spectra, the discrepancies between measurements and simulations, indicated by confidence intervals (95%) of the relative difference, are (-0.06, 0.05) and (-0.08, 0.09) for solar zenith angles of 27 and 72°, respectively. For zenith diffuse spectra, the related discrepancies between measurements and simulations are (-0.06, 0.05) and (-0.08, 0.07) for solar zenith angles of 27 and 72°, respectively. The main discrepancies between measurements and simulations occur at or near the strong oxygen absorption line centers. They are mainly due to two causes: (1) measurement errors associated with the noise/spikes of HABS-measured spectra, resulting from the combined effects of weak signal, low SNR, and errors in wavelength registration; and (2) modeling errors in the simulation, including errors in the model parameter settings (e.g., oxygen absorption line parameters, vertical profiles of temperature and pressure) and the lack of treatment of rotational Raman scattering. The high-resolution oxygen A-band measurements from HABS can constrain active radar retrievals for more accurate cloud optical properties (e.g., cloud optical depth, effective radius), particularly for multi-layer clouds and for mixed-phase clouds.
Medication Administration Errors in Nursing Homes Using an Automated Medication Dispensing System
van den Bemt, Patricia M.L.A.; Idzinga, Jetske C.; Robertz, Hans; Kormelink, Dennis Groot; Pels, Neske
2009-01-01
Objective: To identify the frequency of medication administration errors as well as their potential risk factors in nursing homes using a distribution robot. Design: The study was a prospective, observational study conducted within three nursing homes in the Netherlands caring for 180 individuals. Measurements: Medication errors were measured using the disguised observation technique. Types of medication errors were described. The correlation between several potential risk factors and the occurrence of medication errors was studied to identify potential causes for the errors. Results: In total 2,025 medication administrations to 127 clients were observed. In these administrations 428 errors were observed (21.2%). The most frequently occurring types of errors were use of wrong administration techniques (especially incorrect crushing of medication and not supervising the intake of medication) and wrong-time errors (administering the medication at least 1 h early or late). The potential risk factors female gender (odds ratio (OR) 1.39; 95% confidence interval (CI) 1.05–1.83), ATC medication class antibiotics (OR 11.11; 95% CI 2.66–46.50), medication crushed (OR 7.83; 95% CI 5.40–11.36), number of dosages/day/client (OR 1.03; 95% CI 1.01–1.05), nursing home 2 (OR 3.97; 95% CI 2.86–5.50), medication not supplied by distribution robot (OR 2.92; 95% CI 2.04–4.18), time classes "7–10 am" (OR 2.28; 95% CI 1.50–3.47) and "10 am–2 pm" (OR 1.96; 95% CI 1.18–3.27), and day of the week "Wednesday" (OR 1.46; 95% CI 1.03–2.07) are associated with a higher risk of administration errors. Conclusions: Medication administration in nursing homes is prone to many errors. This study indicates that the handling of the medication after removing it from the robot packaging may contribute to this high error frequency, which may be reduced by training of nurse attendants, by automated clinical decision support, and by measures to reduce workload. PMID:19390109
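A generic sketch of how an odds ratio and its Wald 95% confidence interval are obtained from a 2×2 table of exposure by error occurrence; the counts are illustrative and are not the study's data.

```python
# Generic sketch: odds ratio and Wald 95% confidence interval from a 2x2 table
#                 errors   no errors
#   exposed          a         b
#   unexposed        c         d
# The counts below are illustrative, not the study's data.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

print(odds_ratio_ci(a=120, b=400, c=60, d=600))
```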
Minimax confidence intervals in geomagnetism
NASA Technical Reports Server (NTRS)
Stark, Philip B.
1992-01-01
The present paper uses theory of Donoho (1989) to find lower bounds on the lengths of optimally short fixed-length confidence intervals (minimax confidence intervals) for Gauss coefficients of the field of degree 1-12 using the heat flow constraint. The bounds on optimal minimax intervals are about 40 percent shorter than Backus' intervals: no procedure for producing fixed-length confidence intervals, linear or nonlinear, can give intervals shorter than about 60 percent the length of Backus' in this problem. While both methods rigorously account for the fact that core field models are infinite-dimensional, the application of the techniques to the geomagnetic problem involves approximations and counterfactual assumptions about the data errors, and so these results are likely to be extremely optimistic estimates of the actual uncertainty in Gauss coefficients.
NASA Astrophysics Data System (ADS)
Schuckers, Michael E.; Hawley, Anne; Livingstone, Katie; Mramba, Nona
2004-08-01
Confidence intervals are an important way to assess and estimate a parameter. In the case of biometric identification devices, several approaches to confidence intervals for an error rate have been proposed. Here we evaluate six of these methods. To complete this evaluation, we simulate data from a wide variety of parameter values. These data are simulated via a correlated binary distribution. We then determine how well these methods do at what they claim to do: capturing the parameter inside the confidence interval. In addition, the average widths of the various confidence intervals are recorded for each set of parameters. The complete results of this simulation are presented graphically for easy comparison. We conclude by making a recommendation regarding which method performs best.
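A hedged sketch of the kind of coverage study described: correlated binary attempts are generated here with a beta-binomial model (one of several ways to induce intra-subject correlation), a simple Wald interval is computed for the error rate, and coverage and average width are recorded; all parameter values and the choice of interval are illustrative.

```python
# Sketch of a coverage study for an error-rate confidence interval.
# Correlated binary attempts are generated with a beta-binomial model;
# the interval below is a simple Wald interval. Values are illustrative.
import numpy as np

def simulate_coverage(p=0.05, rho=0.2, n_subjects=100, attempts=10,
                      n_sims=2000, z=1.96, seed=4):
    rng = np.random.default_rng(seed)
    a = p * (1 - rho) / rho                # beta parameters giving mean p and
    b = (1 - p) * (1 - rho) / rho          # intra-subject correlation ~ rho
    covered, widths = 0, []
    n_total = n_subjects * attempts
    for _ in range(n_sims):
        p_i = rng.beta(a, b, n_subjects)                   # per-subject error rate
        errors = rng.binomial(attempts, p_i).sum()
        p_hat = errors / n_total
        half = z * np.sqrt(p_hat * (1 - p_hat) / n_total)  # Wald half-width
        covered += (p_hat - half <= p <= p_hat + half)
        widths.append(2 * half)
    return covered / n_sims, float(np.mean(widths))

print(simulate_coverage())   # expect under-coverage because correlation is ignored
```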
Pleil, Joachim D
2016-01-01
This commentary is the second of a series outlining one specific concept in interpreting biomarkers data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around the next step, the choice of using standard error of the mean or the calculated standard deviation to compare or predict measurement results.
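A brief numeric illustration of the distinction at issue: the standard deviation describes the spread of individual measurements, while the standard error of the mean (SEM = SD/√n) describes the uncertainty of the sample mean; the data are synthetic.

```python
# Brief illustration: the standard deviation describes spread of individual
# measurements, while the standard error of the mean (SEM = SD / sqrt(n))
# describes the uncertainty of the sample mean. Values are synthetic.
import numpy as np

rng = np.random.default_rng(5)
biomarker = rng.lognormal(mean=1.0, sigma=0.4, size=50)   # e.g. 50 subjects

sd = biomarker.std(ddof=1)
sem = sd / np.sqrt(biomarker.size)
print(f"mean={biomarker.mean():.2f}  SD={sd:.2f}  SEM={sem:.2f}")
```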
Malpractice suits in chest radiology: an evaluation of the histories of 8265 radiologists.
Baker, Stephen R; Patel, Ronak H; Yang, Lily; Lelkes, Valdis M; Castro, Alejandro
2013-11-01
The aim of this study was to present rates of claims, causes of error, percentage of cases resulting in a judgment, and average payments made by radiologists in chest-related malpractice cases in a survey of 8265 radiologists. The malpractice histories of 8265 radiologists were evaluated from the credentialing files of One-Call Medical Inc., a preferred provider organization for computed tomography/magnetic resonance imaging in workers' compensation cases. Of the 8265 radiologists, 2680 (32.4%) had at least 1 malpractice suit. Of those who were sued, the rate of claims was 55.1 per 1000 person years. The rate of thorax-related suits was 6.6 claims per 1000 radiology practice years (95% confidence interval, 6.0-7.2). There were 496 suits encompassing 48 different causes. Errors in diagnosis comprised 78.0% of the causes. Failure to diagnose lung cancer was by far the most frequent diagnostic error, representing 211 cases or 42.5%. Of the 496 cases, an outcome was known in 417. Sixty-one percent of these were settled in favor of the plaintiff, with a mean payment of $277,230 (95% confidence interval, 226,967-338,614). Errors in diagnosis, and among them failure to diagnose lung cancer, were by far the most common reasons for initiating a malpractice suit against radiologists related to the thorax and its contents.
Simplified Estimation and Testing in Unbalanced Repeated Measures Designs.
Spiess, Martin; Jordan, Pascal; Wendt, Mike
2018-05-07
In this paper we propose a simple estimator for unbalanced repeated measures design models where each unit is observed at least once in each cell of the experimental design. The estimator does not require a model of the error covariance structure. Thus, circularity of the error covariance matrix and estimation of correlation parameters and variances are not necessary. Together with a weak assumption about the reason for the varying number of observations, the proposed estimator and its variance estimator are unbiased. As an alternative to confidence intervals based on the normality assumption, a bias-corrected and accelerated bootstrap technique is considered. We also propose the naive percentile bootstrap for Wald-type tests, where the standard Wald test may break down when the number of observations is small relative to the number of parameters to be estimated. In a simulation study we illustrate the properties of the estimator and the bootstrap techniques to calculate confidence intervals and conduct hypothesis tests in small and large samples under normality and non-normality of the errors. The results imply that the simple estimator is only slightly less efficient than an estimator that correctly assumes a block structure of the error correlation matrix, a special case of which is an equi-correlation matrix. Application of the estimator and the bootstrap technique is illustrated using data from a task-switch experiment based on a within-subjects experimental design with 32 cells and 33 participants.
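A sketch of a naive percentile bootstrap for a simple condition contrast, resampling subjects; this does not reproduce the paper's estimator, and the reaction-time data are synthetic. BCa intervals, also discussed above, can be obtained with routines such as scipy.stats.bootstrap(..., method="BCa").

```python
# Sketch of a naive percentile bootstrap for a simple condition contrast.
# This does not reproduce the paper's estimator; it only illustrates the
# resampling idea with synthetic, non-normal reaction times.
import numpy as np

def percentile_bootstrap_ci(x, y, n_boot=5000, alpha=0.05, seed=6):
    """Percentile bootstrap CI for mean(x) - mean(y), resampling observations."""
    rng = np.random.default_rng(seed)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=len(x), replace=True)
        yb = rng.choice(y, size=len(y), replace=True)
        diffs[b] = xb.mean() - yb.mean()
    return np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])

rng = np.random.default_rng(7)
rt_switch = rng.gamma(shape=9, scale=80, size=33)    # ms, skewed (non-normal)
rt_repeat = rng.gamma(shape=9, scale=70, size=33)
print(percentile_bootstrap_ci(rt_switch, rt_repeat))
```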
Tabelow, Karsten; König, Reinhard; Polzehl, Jörg
2016-01-01
Estimation of learning curves is ubiquitously based on proportions of correct responses within moving trial windows. Thereby, it is tacitly assumed that learning performance is constant within the moving windows, which, however, is often not the case. In the present study we demonstrate that violations of this assumption lead to systematic errors in the analysis of learning curves, and we explored the dependency of these errors on window size, different statistical models, and learning phase. To reduce these errors in the analysis of single-subject data as well as on the population level, we propose adequate statistical methods for the estimation of learning curves and the construction of confidence intervals, trial by trial. Applied to data from an avoidance learning experiment with rodents, these methods revealed performance changes occurring at multiple time scales within and across training sessions which were otherwise obscured in the conventional analysis. Our work shows that the proper assessment of the behavioral dynamics of learning at high temporal resolution can shed new light on specific learning processes and thus allows existing learning concepts to be refined. It further disambiguates the interpretation of neurophysiological signal changes recorded during training in relation to learning. PMID:27303809
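A sketch of the conventional moving-window estimate that the study critiques, namely the proportion of correct responses in a sliding window of trials; the model-based trial-by-trial estimators proposed in the paper are not reproduced, and the session data are synthetic.

```python
# Sketch of the conventional moving-window estimate: the proportion of correct
# responses in a sliding window of trials. The model-based trial-by-trial
# estimators proposed in the paper are not shown.
import numpy as np

def moving_window_curve(correct, window=20):
    correct = np.asarray(correct, dtype=float)
    kernel = np.ones(window) / window
    return np.convolve(correct, kernel, mode="valid")   # one value per window position

# Synthetic session: performance improves from ~30% to ~90% over 300 trials.
rng = np.random.default_rng(8)
p_true = np.linspace(0.3, 0.9, 300)
correct = rng.binomial(1, p_true)
curve = moving_window_curve(correct, window=20)
print(curve[:5], curve[-5:])
```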
Accounting for dropout bias using mixed-effects models.
Mallinckrodt, C H; Clark, W S; David, S R
2001-01-01
Treatment effects are often evaluated by comparing change over time in outcome measures. However, valid analyses of longitudinal data can be problematic when subjects discontinue (dropout) prior to completing the study. This study assessed the merits of likelihood-based repeated measures analyses (MMRM) compared with fixed-effects analysis of variance where missing values were imputed using the last observation carried forward approach (LOCF) in accounting for dropout bias. Comparisons were made in simulated data and in data from a randomized clinical trial. Subject dropout was introduced in the simulated data to generate ignorable and nonignorable missingness. Estimates of treatment group differences in mean change from baseline to endpoint from MMRM were, on average, markedly closer to the true value than estimates from LOCF in every scenario simulated. Standard errors and confidence intervals from MMRM accurately reflected the uncertainty of the estimates, whereas standard errors and confidence intervals from LOCF underestimated uncertainty.
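A sketch of the LOCF comparator only: each subject's last observed value is carried forward to the endpoint. The MMRM analysis would instead model all observed visits with a likelihood-based mixed-effects routine and is not shown; the tiny data frame is illustrative.

```python
# Sketch of the LOCF comparator: carry each subject's last observed value
# forward to the endpoint. The MMRM analysis would instead model all observed
# visits with a likelihood-based mixed-effects routine (not shown here).
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2],
    "visit":   [0, 1, 2, 0, 1, 2],
    "score":   [10.0, 8.0, np.nan, 12.0, np.nan, np.nan],  # dropouts -> NaN
})

df = df.sort_values(["subject", "visit"])
df["score_locf"] = df.groupby("subject")["score"].ffill()  # last observation carried forward
endpoint = df[df["visit"] == 2]
print(endpoint[["subject", "score_locf"]])
```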
Kelbe, David; Oak Ridge National Lab.; van Aardt, Jan; ...
2016-10-18
Terrestrial laser scanning has demonstrated increasing potential for rapid comprehensive measurement of forest structure, especially when multiple scans are spatially registered in order to reduce the limitations of occlusion. Although marker-based registration techniques (based on retro-reflective spherical targets) are commonly used in practice, a blind marker-free approach is preferable, insofar as it supports rapid operational data acquisition. To support these efforts, we extend the pairwise registration approach of our earlier work, and develop a graph-theoretical framework to perform blind marker-free global registration of multiple point cloud data sets. Pairwise pose estimates are weighted based on their estimated error, in order to overcome pose conflict while exploiting redundant information and improving precision. The proposed approach was tested for eight diverse New England forest sites, with 25 scans collected at each site. Quantitative assessment was provided via a novel embedded confidence metric, with a mean estimated root-mean-square error of 7.2 cm and 89% of scans connected to the reference node. Lastly, this paper assesses the validity of the embedded multiview registration confidence metric and evaluates the performance of the proposed registration algorithm.
Psychometric assessment of a scale to measure bonding workplace social capital
Tsutsumi, Akizumi; Inoue, Akiomi; Odagiri, Yuko
2017-01-01
Objectives: Workplace social capital (WSC) has attracted increasing attention as an organizational and psychosocial factor related to worker health. This study aimed to assess the psychometric properties of a newly developed WSC scale for use in work environments where bonding social capital is important. Methods: We assessed the psychometric properties of a newly developed 6-item scale to measure bonding WSC using two data sources. Participants were 1,650 randomly selected workers who completed an online survey. Exploratory factor analyses were conducted. We examined the item–item and item–total correlations, internal consistency, and associations between scale scores and a previous 8-item measure of WSC. We evaluated test–retest reliability by repeating the survey with 900 of the respondents 2 weeks later. The overall scale reliability was quantified by an intraclass coefficient and the standard error of measurement. We evaluated convergent validity by examining the association with several relevant workplace psychosocial factors using a dataset from workers employed by an electrical components company (n = 2,975). Results: The scale was unidimensional. The item–item and item–total correlations ranged from 0.52 to 0.78 (p < 0.01) and from 0.79 to 0.89 (p < 0.01), respectively. Internal consistency was good (Cronbach’s α coefficient: 0.93). The correlation with the 8-item scale indicated high criterion validity (r = 0.81) and the scale showed high test–retest reliability (r = 0.74, p < 0.01). The intraclass coefficient and standard error of measurement were 0.74 (95% confidence interval: 0.71–0.77) and 4.04 (95% confidence interval: 1.86–6.20), respectively. Correlations with relevant workplace psychosocial factors showed convergent validity. Conclusions: The results confirmed that the newly developed WSC scale has adequate psychometric properties. PMID:28662058
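A minimal sketch of how internal consistency (Cronbach's α) is computed from a respondents-by-items matrix using the standard formula; the simulated six-item data are illustrative and unrelated to the study's scale.

```python
# Sketch: Cronbach's alpha for a k-item scale from a respondents x items matrix.
# Data below are synthetic; this is the standard formula, not the study's code.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(9)
latent = rng.normal(size=(200, 1))                       # shared latent factor
items = latent + rng.normal(scale=0.6, size=(200, 6))    # 6 correlated items
print(round(cronbach_alpha(items), 2))
```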
Prevalence of refractive errors in a Brazilian population: the Botucatu eye study.
Schellini, Silvana Artioli; Durkin, Shane R; Hoyama, Erika; Hirai, Flavio; Cordeiro, Ricardo; Casson, Robert J; Selva, Dinesh; Padovani, Carlos Roberto
2009-01-01
To determine the prevalence and demographic associations of refractive error in Botucatu, Brazil. A population-based, cross-sectional prevalence study was conducted, which involved random, household cluster sampling of an urban Brazilian population in Botucatu. There were 3000 individuals aged 1 to 91 years (mean 38.3) who were eligible to participate in the study. Refractive error measurements were obtained by objective refraction. Objective refractive error examinations were performed on 2454 residents within this sample (81.8% of eligible participants). The mean age was 38 years (standard deviation (SD) 20.8 years, range 1 to 91) and females comprised 57.5% of the study population. Myopia (spherical equivalent (SE) < -0.5 diopters (D)) was most prevalent among those aged 30-39 years (29.7%; 95% confidence interval (CI) 24.8-35.1) and least prevalent among children under 10 years (3.8%; 95% CI 1.6-7.3). Conversely, hypermetropia (SE > 0.5 D) was most prevalent among participants under 10 years (86.9%; 95% CI 81.6-91.1) and least prevalent in the fourth decade (32.5%; 95% CI 28.2-37.0). Participants aged 70 years or older bore the largest burden of astigmatism (cylinder at least -0.5 D) and anisometropia (difference in SE of > 0.5 D), with prevalences of 71.7% (95% CI 64.8-78.0) and 55.0% (95% CI 47.6-62.2), respectively. Myopia and hypermetropia were significantly associated with age in a bimodal manner (P < 0.001), whereas anisometropia and astigmatism increased in line with age (P < 0.001). Multivariate modeling confirmed age-related risk factors for refractive error and revealed several gender, occupation and ethnic-related risk factors. These results represent previously unreported data on refractive error within this Brazilian population. They signal a need to continue to screen for refractive error within this population and to ensure that people have adequate access to optical correction.
[Investigation of Color Vision Using Pigment Color Plates and a Tablet PC].
Tsimpri, P; Kuchenbecker, J
2016-07-01
Many applications (apps) for ophthalmic solutions, including colour vision tests, are currently available. However, no colour vision test app has been evaluated through clinical trials on a tablet PC. Using standard test conditions and a tablet PC (iPad2®), colour vision tests were performed with 19 Velhagen/Broschmann/Kuchenbecker colour plates and an HMC anomaloscope. The plates were alternately presented first in a book (pigment colour plates) and then on a tablet PC (iPad®). A total of 77 volunteer subjects were examined. 62 subjects were colour normal and 15 male subjects had a colour vision deficiency. The coincidence and the 95 % confidence intervals were determined. The average age of all subjects (n = 77) was 42.8 ± 16.9 years. The mean near visual acuity of all subjects was 0.99 ± 0.15. The coincidence of the results of all subjects between book and tablet PC was 88.0 %. The 95 % confidence interval ranged from 81.6 to 89.6 %. In the group of subjects with colour vision deficiency (n = 15), the coincidence was 83.3 %. The 95 % confidence interval ranged from 78.4 to 87.3 %. In the group of subjects without colour vision deficiency (n = 62), the coincidence was 89.1 %. The 95 % confidence interval ranged from 87.1 to 90.8 %. The overlap of error numbers between colour normal subjects and colour vision deficient subjects was 2 errors with the book and 5 errors with the tablet PC. Testing colour vision using the book and the tablet PC gives only roughly comparable results. However, separation with the book was better and the colour plates differed in validity. For this reason, only some of the colour plates could be used on a tablet PC. Georg Thieme Verlag KG Stuttgart · New York.
Net Weight Issue LLNL DOE-STD-3013 Containers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilk, P
2008-01-16
The following position paper will describe DOE-STD-3013 container sets No.L000072 and No.L000076, and how they are compliant with DOE-STD-3013-2004. All masses of accountable nuclear materials are measured on LLNL certified balances maintained under an MC&A Program approved by DOE/NNSA LSO. All accountability balances are recalibrated annually and checked to be within calibration on each day that the balance is used for accountability purposes. A statistical analysis of the historical calibration checks from the last seven years indicates that the full-range Limit of Error (LoE, 95% confidence level) for the balance used to measure the mass of the contents of the above-indicated 3013 containers is 0.185 g. If this error envelope, at the 95% confidence level, were used to generate an upper limit on the measured weight of containers No.L000072 and No.L000076, the error envelope would extend beyond the 5.0 kg 3013-standard limit on the package contents by less than 0.3 g. However, this is still well within the intended safety bounds of DOE-STD-3013-2004.
Sample size determination in combinatorial chemistry.
Zhao, P L; Zambias, R; Bolognese, J A; Boulton, D; Chapman, K
1995-01-01
Combinatorial chemistry is gaining wide appeal as a technique for generating molecular diversity. Among the many combinatorial protocols, the split/recombine method is quite popular and particularly efficient at generating large libraries of compounds. In this process, polymer beads are equally divided into a series of pools and each pool is treated with a unique fragment; then the beads are recombined, mixed to uniformity, and redivided equally into a new series of pools for the subsequent couplings. The deviation from the ideal equimolar distribution of the final products is assessed by a special overall relative error, which is shown to be related to the Pearson statistic. Although the split/recombine sampling scheme is quite different from those used in analysis of categorical data, the Pearson statistic is shown to still follow a chi-square distribution. This result allows us to derive the required number of beads such that, with 99% confidence, the overall relative error is controlled to be less than a pre-specified tolerable limit L1. In this paper, we also discuss another criterion, which determines the required number of beads so that, with 99% confidence, all individual relative errors are controlled to be less than a pre-specified tolerable limit L2 (0 < L2 < 1). PMID:11607586
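A hedged simulation sketch of the setting described: under an idealized split/recombine scheme each bead independently ends up in one of the k^s final products with equal probability, so the final counts can be drawn multinomially, and the Pearson statistic and the largest individual relative error then summarize the departure from equimolarity; the paper's sample-size formulas are not reproduced, and all numbers are illustrative.

```python
# Sketch: simulate the spread of bead counts across final products under an
# idealized split/recombine scheme (each bead lands in any of pools**steps
# final products with equal probability) and summarize the departure from the
# ideal equimolar distribution. The paper's sample-size formulas are not shown.
import numpy as np
from scipy.stats import chi2

def simulate_library(n_beads, pools=20, steps=3, seed=10):
    rng = np.random.default_rng(seed)
    n_products = pools ** steps
    counts = rng.multinomial(n_beads, [1.0 / n_products] * n_products)
    expected = n_beads / n_products
    pearson = np.sum((counts - expected) ** 2 / expected)     # Pearson statistic
    max_rel_err = np.max(np.abs(counts - expected) / expected)
    return pearson, max_rel_err, n_products

pearson, max_rel_err, k = simulate_library(n_beads=800_000)
print(f"Pearson X2 = {pearson:.0f}  (chi-square 99th percentile = {chi2.ppf(0.99, k - 1):.0f})")
print(f"largest individual relative error = {max_rel_err:.2f}")
```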
Burmeister Getz, E; Carroll, K J; Mielke, J; Benet, L Z; Jones, B
2017-03-01
We previously demonstrated pharmacokinetic differences among manufacturing batches of a US Food and Drug Administration (FDA)-approved dry powder inhalation product (Advair Diskus 100/50) large enough to establish between-batch bio-inequivalence. Here, we provide independent confirmation of pharmacokinetic bio-inequivalence among Advair Diskus 100/50 batches, and quantify residual and between-batch variance component magnitudes. These variance estimates are used to consider the type I error rate of the FDA's current two-way crossover design recommendation. When between-batch pharmacokinetic variability is substantial, the conventional two-way crossover design cannot accomplish the objectives of FDA's statistical bioequivalence test (i.e., cannot accurately estimate the test/reference ratio and associated confidence interval). The two-way crossover, which ignores between-batch pharmacokinetic variability, yields an artificially narrow confidence interval on the product comparison. The unavoidable consequence is type I error rate inflation, to ∼25%, when between-batch pharmacokinetic variability is nonzero. This risk of a false bioequivalence conclusion is substantially higher than asserted by regulators as acceptable consumer risk (5%). © 2016 The Authors Clinical Pharmacology & Therapeutics published by Wiley Periodicals, Inc. on behalf of The American Society for Clinical Pharmacology and Therapeutics.
Rothmann, Mark
2005-01-01
When testing the equality of means from two different populations, a t-test or a large-sample normal test is typically performed. For these tests, when the sample size or design for the second sample is dependent on the results of the first sample, the type I error probability is altered for each specific possibility in the null hypothesis. We will examine the impact on the type I error probabilities for two confidence interval procedures and for procedures using test statistics when the design for the second sample or experiment is dependent on the results from the first sample or experiment (or series of experiments). Ways of controlling a desired maximum type I error probability or a desired type I error rate will be discussed. Results are applied to the setting of noninferiority comparisons in active controlled trials where the use of a placebo is unethical.
Toward diagnostic and phenotype markers for genetically transmitted speech delay.
Shriberg, Lawrence D; Lewis, Barbara A; Tomblin, J Bruce; McSweeny, Jane L; Karlsson, Heather B; Scheer, Alison R
2005-08-01
Converging evidence supports the hypothesis that the most common subtype of childhood speech sound disorder (SSD) of currently unknown origin is genetically transmitted. We report the first findings toward a set of diagnostic markers to differentiate this proposed etiological subtype (provisionally termed speech delay-genetic) from other proposed subtypes of SSD of unknown origin. Conversational speech samples from 72 preschool children with speech delay of unknown origin from 3 research centers were selected from an audio archive. Participants differed on the number of biological, nuclear family members (0 or 2+) classified as positive for current and/or prior speech-language disorder. Although participants in the 2 groups were found to have similar speech competence, as indexed by their Percentage of Consonants Correct scores, their speech error patterns differed significantly in 3 ways. Compared with children who may have reduced genetic load for speech delay (no affected nuclear family members), children with possibly higher genetic load (2+ affected members) had (a) a significantly higher proportion of relative omission errors on the Late-8 consonants; (b) a significantly lower proportion of relative distortion errors on these consonants, particularly on the sibilant fricatives /s/, /z/, and //; and (c) a significantly lower proportion of backed /s/ distortions, as assessed by both perceptual and acoustic methods. Machine learning routines identified a 3-part classification rule that included differential weightings of these variables. The classification rule had diagnostic accuracy value of 0.83 (95% confidence limits = 0.74-0.92), with positive and negative likelihood ratios of 9.6 (95% confidence limits = 3.1-29.9) and 0.40 (95% confidence limits = 0.24-0.68), respectively. The diagnostic accuracy findings are viewed as promising. The error pattern for this proposed subtype of SSD is viewed as consistent with the cognitive-linguistic processing deficits that have been reported for genetically transmitted verbal disorders.
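A generic sketch of how positive and negative likelihood ratios follow from the sensitivity and specificity of a classification rule; the values used are illustrative, not the study's.

```python
# Generic sketch: positive and negative likelihood ratios from sensitivity and
# specificity of a classification rule. The values below are illustrative.
def likelihood_ratios(sensitivity, specificity):
    lr_pos = sensitivity / (1 - specificity)      # how much a positive result raises the odds
    lr_neg = (1 - sensitivity) / specificity      # how much a negative result lowers the odds
    return lr_pos, lr_neg

print(likelihood_ratios(sensitivity=0.80, specificity=0.92))
```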
Loughery, Brian; Knill, Cory; Silverstein, Evan; Zakjevskii, Viatcheslav; Masi, Kathryn; Covington, Elizabeth; Snyder, Karen; Song, Kwang; Snyder, Michael
2018-03-20
We conducted a multi-institutional assessment of a recently developed end-to-end monthly quality assurance (QA) protocol for external beam radiation therapy treatment chains. This protocol validates the entire treatment chain against a baseline to detect the presence of complex errors not easily found in standard component-based QA methods. Participating physicists from 3 institutions ran the end-to-end protocol on treatment chains that include Imaging and Radiation Oncology Core (IROC)-credentialed linacs. Results were analyzed in the manner of American Association of Physicists in Medicine (AAPM) Task Group (TG) 119 so that they may be referenced by future test participants. Optically stimulated luminescent dosimeter (OSLD), EBT3 radiochromic film, and A1SL ion chamber readings were accumulated across 10 test runs. Confidence limits were calculated to determine where 95% of measurements should fall. From the calculated confidence limits, 95% of measurements should fall within 5% error for OSLDs, 4% error for ionization chambers, and 4% error (96% relative gamma pass rate) for radiochromic film at 3% dose agreement/3 mm distance to agreement. Data were separated by institution, model of linac, and treatment protocol (intensity-modulated radiation therapy [IMRT] vs volumetric modulated arc therapy [VMAT]). A total of 97% of OSLDs, 98% of ion chambers, and 93% of films were within the confidence limits; measurements were found outside these limits by a maximum of 4%, < 1%, and < 1%, respectively. Data were consistent despite institutional differences in OSLD reading equipment and radiochromic film calibration techniques. Results from this test may be used by clinics for data comparison. Areas of improvement were identified in the end-to-end protocol that can be implemented in an updated version. The consistency of our data demonstrates the reproducibility and ease-of-use of such tests and suggests a potential role for their use in broad end-to-end QA initiatives. Copyright © 2018 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
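A sketch of a TG-119-style confidence limit, conventionally |mean error| + 1.96·SD, which is the bound within which roughly 95% of measurements are expected to fall; the per-run percentage errors below are synthetic.

```python
# Sketch of a TG-119-style confidence limit, |mean error| + 1.96 * SD, i.e. the
# bound within which ~95% of measurements are expected to fall. The per-run
# percentage errors below are synthetic, not the study's measurements.
import numpy as np

def tg119_confidence_limit(percent_errors):
    e = np.asarray(percent_errors, dtype=float)
    return abs(e.mean()) + 1.96 * e.std(ddof=1)

osld_errors = [1.2, -0.8, 2.1, 0.4, -1.5, 0.9, 1.8, -0.3, 0.6, 1.1]  # % error, 10 runs
print(f"confidence limit = {tg119_confidence_limit(osld_errors):.1f}%")
```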
Thermocouple Errors when Mounted on Cylindrical Surfaces in Abnormal Thermal Environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakos, James T.; Suo-Anttila, Jill M.; Zepper, Ethan T.
Mineral-insulated, metal-sheathed, Type-K thermocouples are used to measure the temperature of various items in high-temperature environments, often exceeding 1000 °C (1273 K). The thermocouple wires (chromel and alumel) are protected from the harsh environments by an Inconel sheath and magnesium oxide (MgO) insulation. The sheath and insulation are required for reliable measurements. Due to the sheath and MgO insulation, the temperature registered by the thermocouple is not the temperature of the surface of interest. In some cases, the error incurred is large enough to be of concern because these data are used for model validation, and thus the uncertainties of the data need to be well documented. This report documents the error using 0.062" and 0.040" diameter Inconel-sheathed, Type-K thermocouples mounted on cylindrical surfaces (inside of a shroud, outside and inside of a mock test unit). After an initial transient, the thermocouple bias errors typically range only about ±1-2% of the reading in K. After all of the uncertainty sources have been included, the total uncertainty to 95% confidence, for shroud or test unit TCs in abnormal thermal environments, is about ±2% of the reading in K, lower than the ±3% typically used for flat shrouds. Recommendations are provided in Section 6 to facilitate interpretation and use of the results.
Multielevation calibration of frequency-domain electromagnetic data
Minsley, Burke J.; Kass, M. Andy; Hodges, Greg; Smith, Bruce D.
2014-01-01
Systematic calibration errors must be taken into account because they can substantially impact the accuracy of inverted subsurface resistivity models derived from frequency-domain electromagnetic data, resulting in potentially misleading interpretations. We have developed an approach that uses data acquired at multiple elevations over the same location to assess calibration errors. A significant advantage is that this method does not require prior knowledge of subsurface properties from borehole or ground geophysical data (though these can be readily incorporated if available), and is, therefore, well suited to remote areas. The multielevation data were used to solve for calibration parameters and a single subsurface resistivity model that are self consistent over all elevations. The deterministic and Bayesian formulations of the multielevation approach illustrate parameter sensitivity and uncertainty using synthetic- and field-data examples. Multiplicative calibration errors (gain and phase) were found to be better resolved at high frequencies and when data were acquired over a relatively conductive area, whereas additive errors (bias) were reasonably resolved over conductive and resistive areas at all frequencies. The Bayesian approach outperformed the deterministic approach when estimating calibration parameters using multielevation data at a single location; however, joint analysis of multielevation data at multiple locations using the deterministic algorithm yielded the most accurate estimates of calibration parameters. Inversion results using calibration-corrected data revealed marked improvement in misfit, lending added confidence to the interpretation of these models.
Multi-reader ROC studies with split-plot designs: a comparison of statistical methods.
Obuchowski, Nancy A; Gallas, Brandon D; Hillis, Stephen L
2012-12-01
Multireader imaging trials often use a factorial design, in which study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of this design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper, the authors compare three methods of analysis for the split-plot design. Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean analysis-of-variance approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power, and confidence interval coverage of the three test statistics. The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% confidence intervals falls close to the nominal coverage for small and large sample sizes. The split-plot multireader, multicase study design can be statistically efficient compared to the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rates, similar power, and nominal confidence interval coverage, are available for this study design. Copyright © 2012 AUR. All rights reserved.
Increasing Product Confidence-Shifting Paradigms.
Phillips, Marla; Kashyap, Vishal; Cheung, Mee-Shew
2015-01-01
Leaders in the pharmaceutical, medical device, and food industries expressed a unilateral concern over product confidence throughout the total product lifecycle, an unsettling fact for these leaders to manage given that their products affect the lives of millions of people each year. Fueled by the heparin incident of intentional adulteration in 2008, initial efforts for increasing product confidence were focused on improving the confidence of incoming materials, with a belief that supplier performance must be the root cause. As in the heparin case, concern over supplier performance extended deep into the supply chain to include suppliers of the suppliers-which is often a blind spot for pharmaceutical, device, and food manufacturers. Resolved to address the perceived lack of supplier performance, these U.S. Food and Drug Administration (FDA)-regulated industries began to adopt the supplier relationship management strategy, developed by the automotive industry, that emphasizes "management" of suppliers for the betterment of the manufacturers. Current product and supplier management strategies, however, have not led to a significant improvement in product confidence. As a result of the enduring concern by industry leaders over the lack of product confidence, Xavier University launched the Integrity of Supply Initiative in 2012 with a team of industry leaders and FDA officials. Through a methodical research approach, data generated by the pharmaceutical, medical device, and food manufacturers surprisingly pointed to themselves as a source of the lack of product confidence, and revealed that manufacturers either unknowingly increase the potential for error or can control/prevent many aspects of product confidence failure. It is only through this paradigm shift that manufacturers can work collaboratively with their suppliers as equal partners, instead of viewing their suppliers as "lesser" entities needing to be controlled. The basis of this shift provides manufacturers with a newfound respect for their suppliers, and it will allow manufacturers to finally address true root causes that can lead to a marked increase in product confidence. In the past decade, pharmaceutical, medical device, and food manufacturers have increased their focus on controlling and managing the performance of their suppliers in an effort to improve the confidence of the materials going into the final marketed products and to improve patient and customer confidence in final product reliability and safety. Concerned that product confidence has not improved, Xavier University launched the Integrity of Supply Initiative in 2012 with a team of industry leaders and U.S. Food and Drug Administration officials. Through this initiative, data generated has revealed that manufacturers either unknowingly increase the potential for error or can control/prevent many aspects of product confidence failure. Product confidence can be improved by shifting the focus from controlling supplier practices to controlling the practices of the manufacturers themselves. © PDA, Inc. 2015.
Smith, Kenneth J; Handler, Steven M; Kapoor, Wishwa N; Martich, G Daniel; Reddy, Vivek K; Clark, Sunday
2016-07-01
This study sought to determine the effects of automated primary care physician (PCP) communication and patient safety tools, including computerized discharge medication reconciliation, on discharge medication errors and posthospitalization patient outcomes, using a pre-post quasi-experimental study design, in hospitalized medical patients with ≥2 comorbidities and ≥5 chronic medications, at a single center. The primary outcome was discharge medication errors, compared before and after rollout of these tools. Secondary outcomes were 30-day rehospitalization, emergency department visit, and PCP follow-up visit rates. This study found that discharge medication errors were lower post intervention (odds ratio = 0.57; 95% confidence interval = 0.44-0.74; P < .001). Clinically important errors, with the potential for serious or life-threatening harm, and 30-day patient outcomes were not significantly different between study periods. Thus, automated health system-based communication and patient safety tools, including computerized discharge medication reconciliation, decreased hospital discharge medication errors in medically complex patients. © The Author(s) 2015.
Nishiura, K
1998-08-01
With the use of rapid serial visual presentation (RSVP), the present study investigated the cause of target intrusion errors and the functioning of monitoring processes. Eighteen students participated in Experiment 1, and 24 in Experiment 2. In Experiment 1, different target intrusion errors were found depending on the kind of script: romaji, hiragana, and kanji. In Experiment 2, stimulus set size and context information were manipulated in an attempt to explore the cause of post-target intrusion errors. Results showed that as stimulus set size increased, the post-target intrusion errors also increased, but contextual information did not affect the errors. Results concerning mean report probability indicated that increased allocation of attentional resources to the response-defining dimension was the cause of the errors. In addition, results concerning confidence ratings showed that monitoring of temporal and contextual information was extremely accurate, but this was not so for stimulus information. These results suggest that attentional resources are distinct from monitoring resources.
Optimization of traffic data collection for specific pavement design applications.
DOT National Transportation Integrated Search
2006-05-01
The objective of this study is to establish the minimum traffic data collection effort required for pavement design applications satisfying a maximum acceptable error under a prescribed confidence level. The approach consists of simulating the traffi...
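The stated objective, the minimum collection effort that keeps the estimation error below a maximum acceptable value at a prescribed confidence level, is often cast as a normal-theory sample-size calculation. The sketch below shows that generic formulation; it is an illustration, not necessarily the procedure used in the report, and the coefficient of variation is an assumed value.

```python
import math
from scipy.stats import norm

def min_sample_size(cv, rel_error, confidence=0.95):
    """Minimum n so that a mean is estimated within +/- rel_error (as a
    fraction of the mean) at the given confidence, assuming approximately
    normal sampling and a known coefficient of variation."""
    z = norm.ppf(0.5 + confidence / 2.0)   # two-sided critical value
    return math.ceil((z * cv / rel_error) ** 2)

# e.g. 40% CV in axle-load data, +/-10% tolerable error, 95% confidence
print(min_sample_size(cv=0.40, rel_error=0.10))  # -> 62 observations
```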
Bao, Yi; Chen, Yizheng; Hoehler, Matthew S.; Smith, Christopher M.; Bundy, Matthew; Chen, Genda
2016-01-01
This paper presents high temperature measurements using a Brillouin scattering-based fiber optic sensor and the application of the measured temperatures and building code recommended material parameters to enhanced thermomechanical analysis of simply supported steel beams subjected to combined thermal and mechanical loading. The distributed temperature sensor captures detailed, nonuniform temperature distributions that agree locally with thermocouple measurements to within a 4.7% average difference at the 95% confidence level. The simulated strains and deflections are validated using measurements from a second distributed fiber optic (strain) sensor and two linear potentiometers, respectively. The results demonstrate that the temperature-dependent material properties specified in the four investigated building codes lead to strain predictions with less than 13% average error at the 95% confidence level and that the European building code provided the best predictions. However, the implicit consideration of creep in the European code is insufficient when the beam temperature exceeds 800°C. PMID:28239230
Learning from error: leading a culture of safety.
Gibson, Russell; Armstrong, Alexander; Till, Alex; McKimm, Judy
2017-07-02
A recent shift towards more collective leadership in the NHS can help to achieve a culture of safety, particularly through encouraging frontline staff to participate and take responsibility for improving safety through learning from error and near misses. Leaders must ensure that they provide psychological safety, organizational fairness and learning systems for staff to feel confident in raising concerns, that they have the autonomy and skills to lead continual improvement, and that they have responsibility for spreading this learning within and across organizations.
Test of the cosmic transparency with the standard candles and the standard ruler
NASA Astrophysics Data System (ADS)
Chen, Jun
In this paper, the cosmic transparency is constrained by using the latest baryon acoustic oscillation (BAO) data and the type Ia supernova data with a model-independent method. We find that a transparent universe is consistent with the observational data at the 1σ confidence level, except for the case of BAO + Union 2.1 without the systematic errors, where a transparent universe is favored only at the 2σ confidence level. To investigate the effect of the uncertainty of the Hubble constant on the test of the cosmic opacity, we assume h to be a free parameter and find that the observations favor a transparent universe at the 1σ confidence level.
Muhlfeld, Clint C.; Taper, Mark L.; Staples, David F.; Shepard, Bradley B.
2006-01-01
Despite the widespread use of redd counts to monitor trends in salmonid populations, few studies have evaluated the uncertainties in observed counts. We assessed the variability in redd counts for migratory bull trout Salvelinus confluentus among experienced observers in Lion and Goat creeks, which are tributaries to the Swan River, Montana. We documented substantially lower observer variability in bull trout redd counts than did previous studies. Observer counts ranged from 78% to 107% of our best estimates of true redd numbers in Lion Creek and from 90% to 130% of our best estimates in Goat Creek. Observers made both errors of omission and errors of false identification, and we modeled this combination by use of a binomial probability of detection and a Poisson count distribution of false identifications. Redd detection probabilities were high (mean = 83%) and exhibited no significant variation among observers (SD = 8%). We applied this error structure to annual redd counts in the Swan River basin (1982–2004) to correct for observer error and thus derived more accurate estimates of redd numbers and associated confidence intervals. Our results indicate that bias in redd counts can be reduced if experienced observers are used to conduct annual redd counts. Future studies should assess both sources of observer error to increase the validity of using redd counts for inferring true redd numbers in different basins. This information will help fisheries biologists to more precisely monitor population trends, identify recovery and extinction thresholds for conservation and recovery programs, ascertain and predict how management actions influence distribution and abundance, and examine effects of recovery and restoration activities.
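The error structure described, a binomial probability of detection combined with a Poisson count of false identifications, can be written down directly as a simulation. The sketch below is a minimal illustration; the detection probability of 0.83 comes from the abstract, while the false-identification rate and the true redd count are assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_observed_counts(true_redds, p_detect=0.83, false_id_rate=2.0, n_obs=1000):
    """Observed count = Binomial(true_redds, p_detect) + Poisson(false_id_rate)."""
    detected = rng.binomial(true_redds, p_detect, size=n_obs)
    false_ids = rng.poisson(false_id_rate, size=n_obs)
    return detected + false_ids

obs = simulate_observed_counts(true_redds=50)
# Empirical 95% interval of counts an observer might report for 50 true redds
print(np.percentile(obs, [2.5, 97.5]))
```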
Speech Correction for Children with Cleft Lip and Palate by Networking of Community-Based Care.
Hanchanlert, Yotsak; Pramakhatay, Worawat; Pradubwong, Suteera; Prathanee, Benjamas
2015-08-01
Prevalence of cleft lip and palate (CLP) is high in Northeast Thailand. Most children with CLP face many problems, particularly compensatory articulation disorders (CAD), beyond surgery, while speech services and the number of speech and language pathologists (SLPs) are limited. To determine the effectiveness of networking of the Khon Kaen University (KKU) Community-Based Speech Therapy Model: Kosumphisai Hospital, Kosumphisai District and Maha Sarakham Hospital, Mueang District, Maha Sarakham Province, for reducing the number of articulation errors in children with CLP. Eleven children with CLP were recruited into three 1-year projects of the KKU Community-Based Speech Therapy Model. Articulation tests were formally administered by qualified SLPs to establish baseline and post-treatment outcomes. Training for speech assistants (SAs) was conducted by SLPs. Assigned speech correction (SC) was performed by SAs at home and at local hospitals. Caregivers also gave SC at home 3-4 days a week. Networking of the Community-Based Speech Therapy Model significantly reduced the number of articulation errors in children with CLP at both word and sentence levels (mean difference = 6.91, 95% confidence interval = 4.15-9.67; mean difference = 5.36, 95% confidence interval = 2.99-7.73, respectively). Networking by Kosumphisai and Maha Sarakham within the KKU Community-Based Speech Therapy Model was a valid and efficient method of providing speech services for children with cleft palate and could be extended to other areas of Thailand and to other developing countries with similar contexts.
NASA Astrophysics Data System (ADS)
Loustau, D.; Berbigier, P.; Granier, A.; Moussa, F. El Hadj
1992-10-01
Patterns of spatial variability of throughfall and stemflow were determined in a maritime pine ( Pinus pinaster Ait.) stand for two consecutive years. Data were obtained from 52 fixed rain gauges and 12 stemflow measuring devices located in a 50m × 50m plot at the centre of an 18-year-old stand. The pine trees had been sown in rows 4m apart and had reached an average height of 12.6m. The spatial distribution of stems had a negligible effect on the throughfall partitioning beneath the canopy. Variograms of throughfall computed for a sample of storms did not reveal any spatial autocorrelation of throughfall for the sampling design used. Differences in throughfall, in relation to the distance from the rows, were not consistently significant. In addition, the distance from the tree stem did not influence the amount of throughfall. The confidence interval on the amount of throughfall per storm was between 3 and 8%. The stemflow was highly variable between trees. The effect of individual trees on stemflow was significant but the amount of stemflow per tree was not related to tree size (i.e. height, trunk diameter, etc.). The cumulative sampling errors on stemflow and throughfall for a single storm created a confidence interval of between ±7 and ±51% on interception. This resulted mainly from the low interception rate and sampling error on throughfall.
The Third EGRET Catalog of High-Energy Gamma-Ray Sources
NASA Technical Reports Server (NTRS)
Hartman, R. C.; Bertsch, D. L.; Bloom, S. D.; Chen, A. W.; Deines-Jones, P.; Esposito, J. A.; Fichtel, C. E.; Friedlander, D. P.; Hunter, S. D.; McDonald, L. M.;
1998-01-01
The third catalog of high-energy gamma-ray sources detected by the EGRET telescope on the Compton Gamma Ray Observatory includes data from 1991 April 22 to 1995 October 3 (Cycles 1, 2, 3, and 4 of the mission). In addition to including more data than the second EGRET catalog (Thompson et al. 1995) and its supplement (Thompson et al. 1996), this catalog uses completely reprocessed data (to correct a number of mostly minimal errors and problems). The 271 sources (E greater than 100 MeV) in the catalog include the single 1991 solar flare bright enough to be detected as a source, the Large Magellanic Cloud, five pulsars, one probable radio galaxy detection (Cen A), and 66 high-confidence identifications of blazars (BL Lac objects, flat-spectrum radio quasars, or unidentified flat-spectrum radio sources). In addition, 27 lower-confidence potential blazar identifications are noted. Finally, the catalog contains 170 sources not yet identified firmly with known objects, although potential identifications have been suggested for a number of those. A figure is presented that gives approximate upper limits for gamma-ray sources at any point in the sky, as well as information about sources listed in the second catalog and its supplement which do not appear in this catalog.
NASA Astrophysics Data System (ADS)
Aschwanden, Andy; Bueler, Ed; Khroulev, Constantine
2010-05-01
To predict Greenland's contribution to global sea level rise in the next few centuries with some confidence, an accurate representation of its current state is crucial. Simulations of the present state of Greenland using the "Parallel Ice Sheet Model" (PISM) capture the essential flow features but overestimate the current volume by about 30%. Possible sources of error include (1) limited understanding of the physical processes involved, (2) the choice of approximations made by the numerical model, (3) values of tunable parameters, and (4) uncertainties in boundary conditions. The response of an ice sheet model to a given forcing contains all of the above error sources, with unknown weights. In this work we focus on a small subset, namely errors arising from uncertainties in bed elevation and from whether or not membrane stresses are included in the stress balance. CReSIS provides recently updated bedrock maps for Greenland that include high-resolution data for Jacobshavn Isbræ and Petermann Glacier. We present a four-way comparison between the original BEDMAP, the new CReSIS bedrock data, a non-sliding shallow ice model, and a hybrid model that includes the shallow shelf approximation as a sliding law. Large gradients possibly present in high-resolution bedrock elevation are expected to make a hybrid model the more appropriate choice. To elucidate this question, runs are performed at an unprecedentedly high spatial resolution of 2 km for the whole ice sheet. Finally, model predictions are evaluated against observed quantities such as surface velocities, ice thickness, and temperature profiles in boreholes using different metrics.
Inverse relationship between sleep duration and myopia.
Jee, Donghyun; Morgan, Ian G; Kim, Eun Chul
2016-05-01
To investigate the association between sleep duration and myopia. This population-based, cross-sectional study, using a nationwide, systematic, stratified, multistage, clustered sampling method, included a total of 3625 subjects aged 12-19 years who participated in the Korean National Health and Nutrition Examination Survey 2008-2012. All participants underwent ophthalmic examination and a standardized interview including average sleep duration (hr/day), education, physical activity and economic status (annual household income). Refractive error was measured by autorefraction without cycloplegia. Myopia and high myopia were defined as ≤-0.50 dioptres (D) and ≤-6.0 D, respectively. Sleep durations were classified into 5 categories: <5, 6, 7, 8 and >9 hr. The overall prevalences of myopia and high myopia were 77.8% and 9.4%, respectively, and the average sleep duration was 7.1 hr/day. Refractive error increased by 0.10 D per 1 hr increase in sleep after adjusting for potential confounders including sex, age, height, education level, economic status and physical activity. The adjusted odds ratio (OR) for refractive error was 0.90 (95% confidence interval [CI], 0.83-0.97) per 1 hr increase in sleep. The adjusted OR for myopia was lower in those with >9 hr of sleep (OR, 0.59; 95% CI, 0.38-0.93; p for trend = 0.006) than in those with <5 hr of sleep. However, high myopia was not associated with sleep duration. This study provides population-based, epidemiologic evidence for an inverse relationship between sleep duration and myopia in a representative population of Korean adolescents. © 2015 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
Cook, David A; Dupras, Denise M; Beckman, Thomas J; Thomas, Kris G; Pankratz, V Shane
2009-01-01
Mini-CEX scores assess resident competence. Rater training might improve mini-CEX score interrater reliability, but evidence is lacking. Evaluate a rater training workshop using interrater reliability and accuracy. Randomized trial (immediate versus delayed workshop) and single-group pre/post study (randomized groups combined). Academic medical center. Fifty-two internal medicine clinic preceptors (31 randomized and 21 additional workshop attendees). The workshop included rater error training, performance dimension training, behavioral observation training, and frame of reference training using lecture, video, and facilitated discussion. Delayed group received no intervention until after posttest. Mini-CEX ratings at baseline (just before workshop for workshop group), and four weeks later using videotaped resident-patient encounters; mini-CEX ratings of live resident-patient encounters one year preceding and one year following the workshop; rater confidence using mini-CEX. Among 31 randomized participants, interrater reliabilities in the delayed group (baseline intraclass correlation coefficient [ICC] 0.43, follow-up 0.53) and workshop group (baseline 0.40, follow-up 0.43) were not significantly different (p = 0.19). Mean ratings were similar at baseline (delayed 4.9 [95% confidence interval 4.6-5.2], workshop 4.8 [4.5-5.1]) and follow-up (delayed 5.4 [5.0-5.7], workshop 5.3 [5.0-5.6]; p = 0.88 for interaction). For the entire cohort, rater confidence (1 = not confident, 6 = very confident) improved from mean (SD) 3.8 (1.4) to 4.4 (1.0), p = 0.018. Interrater reliability for ratings of live encounters (entire cohort) was higher after the workshop (ICC 0.34) than before (ICC 0.18) but the standard error of measurement was similar for both periods. Rater training did not improve interrater reliability or accuracy of mini-CEX scores. clinicaltrials.gov identifier NCT00667940
Alternate methods for FAAT S-curve generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaufman, A.M.
The FAAT (Foreign Asset Assessment Team) assessment methodology attempts to derive a probability of effect as a function of incident field strength. The probability of effect is the likelihood that the stress put on a system exceeds its strength. In the FAAT methodology, both the stress and strength are random variables whose statistical properties are estimated by experts. Each random variable has two components of uncertainty: systematic and random. The systematic uncertainty drives the confidence bounds in the FAAT assessment. Its variance can be reduced by improved information. The variance of the random uncertainty is not reducible. The FAAT methodology uses an assessment code called ARES to generate probability of effect curves (S-curves) at various confidence levels. ARES assumes log normal distributions for all random variables. The S-curves themselves are log normal cumulants associated with the random portion of the uncertainty. The placement of the S-curves depends on confidence bounds. The systematic uncertainty in both stress and strength is usually described by a mode and an upper and lower variance. Such a description is not consistent with the log normal assumption of ARES, and an unsatisfactory work-around solution is used to obtain the required placement of the S-curves at each confidence level. We have looked into this situation and have found that significant errors are introduced by this work-around. These errors are at least several dB-W/cm² at all confidence levels, but they are especially bad in the estimate of the median. In this paper, we suggest two alternate solutions for the placement of S-curves. To compare these calculational methods, we have tabulated the common combinations of upper and lower variances and generated the relevant S-curve offsets from the mode difference of stress and strength.
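Under the ARES assumption that stress and strength are lognormal, the probability of effect for the random (non-reducible) component reduces to a normal CDF of the difference of the log-means. The sketch below shows only that closed form; the parameter values and the field-strength sweep are hypothetical and do not reproduce the work-around being critiqued here.

```python
import numpy as np
from scipy.stats import norm

def prob_effect(mu_stress, sig_stress, mu_strength, sig_strength):
    """P(stress > strength) when both are lognormal; mu/sig are the
    mean and standard deviation of the *log* of each variable."""
    return norm.cdf((mu_stress - mu_strength) /
                    np.hypot(sig_stress, sig_strength))

# Sweep the log of incident field strength (hypothetical units) to trace an S-curve
fields = np.linspace(0.0, 6.0, 13)
s_curve = [prob_effect(mu, 0.8, 3.0, 0.6) for mu in fields]
print(np.round(s_curve, 3))
```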
Pan, Chen-Wei; Zheng, Ying-Feng; Anuar, Ainur Rahman; Chew, Merwyn; Gazzard, Gus; Aung, Tin; Cheng, Ching-Yu; Wong, Tien Y; Saw, Seang-Mei
2013-04-09
To determine the prevalence of refractive errors in a multiethnic Asian population aged over 40 years and to examine secular trends and racial differences. A total of 10,033 adults (3353 Chinese, 3400 Indians, and 3280 Malays) participated in this study. Refractive error was determined by subjective refraction. Ocular biometric parameters were determined by partial coherence interferometry. Myopia and high myopia were defined as a spherical equivalent (SE) of less than -0.5 diopters (D) and -5.0 D, respectively. Hyperopia was defined as an SE of more than 0.5 D. Astigmatism was defined as a cylinder of less than -0.5 D. The prevalence of myopia, high myopia, hyperopia and astigmatism in Singapore adults aged over 40 years was 38.9% (95% confidence interval [CI] 37.1, 40.6); 8.4% (95% CI 8.0, 8.9); 31.5% (95% CI 30.5, 32.5); and 58.8% (95% CI 57.8, 59.9), respectively. Compared with the Tanjong Pagar Survey 12 years earlier, there was a significant increase in the prevalence of astigmatism and in mean axial length (AL) in Chinese adults aged over 40 years in Singapore. Chinese were most likely to be affected by myopia, high myopia, and astigmatism, and had the longest AL among the three racial groups. The prevalence of myopia in Singapore adults is lower compared with the younger "myopia" generation in Singapore. The prevalence of astigmatism and mean AL have increased significantly within the past 12 years in the Chinese population. Chinese adults had a higher prevalence of myopia, high myopia, and astigmatism, as well as longer AL, compared with non-Chinese adults in Singapore.
Using expected sequence features to improve basecalling accuracy of amplicon pyrosequencing data.
Rask, Thomas S; Petersen, Bent; Chen, Donald S; Day, Karen P; Pedersen, Anders Gorm
2016-04-22
Amplicon pyrosequencing targets a known genetic region and thus inherently produces reads highly anticipated to have certain features, such as conserved nucleotide sequence, and in the case of protein coding DNA, an open reading frame. Pyrosequencing errors, consisting mainly of nucleotide insertions and deletions, are on the other hand likely to disrupt open reading frames. Such an inverse relationship between errors and expectation based on prior knowledge can be used advantageously to guide the process known as basecalling, i.e. the inference of nucleotide sequence from raw sequencing data. The new basecalling method described here, named Multipass, implements a probabilistic framework for working with the raw flowgrams obtained by pyrosequencing. For each sequence variant Multipass calculates the likelihood and nucleotide sequence of several most likely sequences given the flowgram data. This probabilistic approach enables integration of basecalling into a larger model where other parameters can be incorporated, such as the likelihood for observing a full-length open reading frame at the targeted region. We apply the method to 454 amplicon pyrosequencing data obtained from a malaria virulence gene family, where Multipass generates 20 % more error-free sequences than current state of the art methods, and provides sequence characteristics that allow generation of a set of high confidence error-free sequences. This novel method can be used to increase accuracy of existing and future amplicon sequencing data, particularly where extensive prior knowledge is available about the obtained sequences, for example in analysis of the immunoglobulin VDJ region where Multipass can be combined with a model for the known recombining germline genes. Multipass is available for Roche 454 data at http://www.cbs.dtu.dk/services/MultiPass-1.0 , and the concept can potentially be implemented for other sequencing technologies as well.
Extended use of Kinesiology Tape and Balance in Participants with Chronic Ankle Instability.
Jackson, Kristen; Simon, Janet E; Docherty, Carrie L
2016-01-01
Participants with chronic ankle instability (CAI) have been shown to have balance deficits related to decreased proprioception and neuromuscular control. Kinesiology tape (KT) has been proposed to have many benefits, including increased proprioception. To determine if KT can help with balance deficits associated with CAI. Cohort study. Research laboratory. Thirty participants with CAI were recruited for this study. Balance was assessed using the Balance Error Scoring System (BESS). Participants were pretested and then randomly assigned to either the control or KT group. The participants in the KT group had 4 strips applied to the foot and lower leg and were instructed to leave the tape on until they returned for testing. All participants returned 48 hours later for another BESS assessment. The tape was then removed, and all participants returned 72 hours later to complete the final BESS assessment. Total BESS errors. Differences between the groups occurred at 48 hours post-application of the tape (mean difference = 4.7 ± 1.4 errors, P < .01; 95% confidence interval = 2.0, 7.5) and at 72 hours post-removal of the tape (mean difference = 2.3 ± 1.1 errors, P = .04; 95% confidence interval = 0.1, 4.6). The KT improved balance after it had been applied for 48 hours when compared with the pretest and with the control group. One of the most clinically important findings is that balance improvements were retained even after the tape had been removed for 72 hours.
Refractive errors and schizophrenia.
Caspi, Asaf; Vishne, Tali; Reichenberg, Abraham; Weiser, Mark; Dishon, Ayelet; Lubin, Gadi; Shmushkevitz, Motti; Mandel, Yossi; Noy, Shlomo; Davidson, Michael
2009-02-01
Refractive errors (myopia, hyperopia and amblyopia), like schizophrenia, have a strong genetic cause, and dopamine has been proposed as a potential mediator in their pathophysiology. The present study explored the association between refractive errors in adolescence and schizophrenia, and the potential familiality of this association. The Israeli Draft Board carries out a mandatory standardized visual acuity assessment. 678,674 males consecutively assessed by the Draft Board and found to be psychiatrically healthy at age 17 were followed for psychiatric hospitalization with schizophrenia using the Israeli National Psychiatric Hospitalization Case Registry. Sib-ships were also identified within the cohort. There was a negative association between refractive errors and later hospitalization for schizophrenia. Future male schizophrenia patients were about half as likely to have refractive errors compared with never-hospitalized individuals, controlling for intelligence, years of education and socioeconomic status (adjusted hazard ratio = 0.55; 95% confidence interval 0.35-0.85). The non-schizophrenic male siblings of schizophrenia patients also had a lower prevalence of refractive errors compared with never-hospitalized individuals. The presence of refractive errors in adolescence is related to a lower risk for schizophrenia. The familiality of this association suggests that refractive errors may be associated with the genetic liability to schizophrenia.
Uy, Raymonde Charles; Sarmiento, Raymond Francis; Gavino, Alex; Fontelo, Paul
2014-01-01
Clinical decision-making involves the interplay between cognitive processes and physicians' perceptions of confidence in the context of their information-seeking behavior. The objectives of the study were to examine how these concepts interact, to determine whether physician confidence, defined in relation to information need, affects clinical decision-making, and to determine whether information access improves decision accuracy. We analyzed previously collected data on resident physicians' perceptions of information need from a study comparing abstracts and full-text articles in clinical decision accuracy. We found a significant relation between confidence and accuracy (φ=0.164, p<0.01). We also found various differences in the alignment of confidence and accuracy, demonstrating underconfidence and overconfidence across years of clinical experience. Access to the online literature also had a significant effect on accuracy (p<0.001). These results highlight possible clinical decision support system (CDSS) strategies to reduce medical errors.
NASA Astrophysics Data System (ADS)
Divine, D.; Godtliebsen, F.; Rue, H.
2012-04-01
Detailed knowledge of past climate variations is of high importance for gaining better insight into possible future climate scenarios. The relative shortness of available high-quality instrumental climate records necessitates the use of various climate proxy archives in making inferences about past climate evolution. This, however, requires an accurate assessment of timescale errors in proxy-based paleoclimatic reconstructions. Here we propose an approach to the assessment of timescale errors in proxy-based series with chronological uncertainties. The method relies on approximating the physical process(es) forming a proxy archive by a random Gamma process. Parameters of the process are partly data-driven and partly determined from prior assumptions. For the particular case of a linear accumulation model and absolutely dated tie points, an analytical solution is found, yielding a Beta-distributed probability density on age estimates along the length of a proxy archive. In the general situation of uncertainties in the ages of the tie points, the proposed method employs MCMC simulations of age-depth profiles, yielding empirical confidence intervals on the constructed piecewise-linear best-guess timescale. It is suggested that the approach can be further extended to the more general case of a time-varying expected accumulation between the tie points. The approach is illustrated using two ice cores and two lake/marine sediment cores, representing typical examples of paleoproxy archives with age models constructed using tie points of mixed origin.
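A minimal sketch of the Monte Carlo variant described above: Gamma-distributed accumulation increments between two tie points whose ages are uncertain, with empirical confidence intervals taken as percentiles over the simulated age-depth profiles. Tie-point ages, their uncertainties and the Gamma shape parameter are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def age_depth_envelope(age_top, age_bot, sd_top, sd_bot,
                       n_depths=100, shape=5.0, n_sim=5000):
    """Monte Carlo age-depth profiles: Gamma accumulation increments between
    two tie points whose ages are normally distributed around their estimates.
    Returns the 2.5/50/97.5 percentile ages at each depth step."""
    ages = np.empty((n_sim, n_depths + 1))
    for i in range(n_sim):
        top = rng.normal(age_top, sd_top)
        bot = rng.normal(age_bot, sd_bot)
        incr = rng.gamma(shape, 1.0, size=n_depths)
        frac = np.concatenate(([0.0], np.cumsum(incr) / incr.sum()))
        ages[i] = top + frac * (bot - top)
    return np.percentile(ages, [2.5, 50, 97.5], axis=0)

env = age_depth_envelope(age_top=0.0, age_bot=10_000.0, sd_top=5.0, sd_bot=150.0)
```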
Uncertainty in accounting for carbon accumulation following forest harvesting
NASA Astrophysics Data System (ADS)
Lilly, P.; Yanai, R. D.; Arthur, M. A.; Bae, K.; Hamburg, S.; Levine, C. R.; Vadeboncoeur, M. A.
2014-12-01
Tree biomass and forest soils are both difficult to quantify with confidence, for different reasons. Forest biomass is estimated non-destructively using allometric equations, often from other sites; these equations are difficult to validate. Forest soils are destructively sampled, resulting in little measurement error at a point, but with large sampling error in heterogeneous soil environments, such as in soils developed on glacial till. In this study, we report C contents of biomass and soil pools in northern hardwood stands in replicate plots within replicate stands in 3 age classes following clearcut harvesting (14-19 yr, 26-29 yr, and > 100 yr) at the Bartlett Experimental Forest, USA. The rate of C accumulation in aboveground biomass was ~3 Mg/ha/yr between the young and mid-aged stands and <1 Mg/ha/yr between the mid-aged and mature stands. We propagated model uncertainty through allometric equations, and found errors ranging from 3-7%, depending on the stand. The variation in biomass among plots within stands (6-19%) was always larger than the allometric uncertainties. Soils were described by quantitative soil pits in three plots per stand, excavated by depth increment to the C horizon. Variation in soil mass among pits within stands averaged 28% (coefficient of variation); variation among stands within an age class ranged from 9-25%. Variation in carbon concentrations averaged 27%, mainly because the depth increments contained varying proportions of genetic horizons, in the upper part of the soil profile. Differences across age classes in soil C were not significant, because of the high variability. Uncertainty analysis can help direct the design of monitoring schemes to achieve the greatest confidence in C stores per unit of sampling effort. In the system we studied, more extensive sampling would be the best approach to reducing uncertainty, as natural spatial variation was higher than model or measurement uncertainties.
Kellman, Philip J.; Mnookin, Jennifer L.; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E.
2014-01-01
Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and subjective assessment of difficulty in fingerprint comparisons. PMID:24788812
A comparative experimental evaluation of uncertainty estimation methods for two-component PIV
NASA Astrophysics Data System (ADS)
Boomsma, Aaron; Bhattacharya, Sayantan; Troolin, Dan; Pothos, Stamatios; Vlachos, Pavlos
2016-09-01
Uncertainty quantification in planar particle image velocimetry (PIV) measurement is critical for proper assessment of the quality and significance of reported results. New uncertainty estimation methods have been introduced recently, generating interest about their applicability and utility. The present study compares and contrasts current methods across two separate experiments and three software packages in order to provide a diversified assessment of the methods. We evaluated the performance of four uncertainty estimation methods: primary peak ratio (PPR), mutual information (MI), image matching (IM) and correlation statistics (CS). The PPR method was implemented and tested in two processing codes, using in-house open source PIV processing software (PRANA, Purdue University) and Insight4G (TSI, Inc.). The MI method was evaluated in PRANA, as was the IM method. The CS method was evaluated using DaVis (LaVision, GmbH). Utilizing two PIV systems for high- and low-resolution measurements and a laser Doppler velocimetry (LDV) system, data were acquired in a total of three cases: a jet flow and a cylinder in cross flow at two Reynolds numbers. LDV measurements were used to establish a point validation against which the high-resolution PIV measurements were validated. Subsequently, the high-resolution PIV measurements were used as a reference against which the low-resolution PIV data were assessed for error and uncertainty. We compared error and uncertainty distributions, spatially varying RMS error and RMS uncertainty, and standard uncertainty coverages. We observed that, qualitatively, each method responded to spatially varying error (i.e. higher-error regions resulted in higher uncertainty predictions in those regions). However, the PPR and MI methods demonstrated a reduced uncertainty dynamic range response. In contrast, the IM and CS methods showed better response, but under-predicted the uncertainty ranges. The standard coverages (68% confidence interval) ranged from approximately 65%-77% for the PPR and MI methods, 40%-50% for IM, and near 50% for CS. These observations illustrate some of the strengths and weaknesses of the methods considered herein and identify future directions for development and improvement.
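The "standard coverage" reported above is simply the fraction of vectors whose true error magnitude falls within the quoted 1-sigma (68%) uncertainty. A minimal sketch of that bookkeeping, using synthetic Gaussian errors and a perfectly calibrated uncertainty estimate, for which the coverage should approach 0.68:

```python
import numpy as np

def standard_coverage(error, uncertainty):
    """Fraction of samples whose |error| falls within the reported
    1-sigma (68% confidence) uncertainty estimate."""
    error = np.asarray(error)
    uncertainty = np.asarray(uncertainty)
    return np.mean(np.abs(error) <= uncertainty)

# Synthetic check: Gaussian errors with a perfectly calibrated estimator
rng = np.random.default_rng(2)
true_sigma = 0.1                      # pixel units, illustrative
err = rng.normal(0.0, true_sigma, 10_000)
print(standard_coverage(err, np.full(10_000, true_sigma)))  # ~0.68
```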
Cawyer, Chase R; Anderson, Sarah B; Szychowski, Jeff M; Neely, Cherry; Owen, John
2018-03-01
To compare the accuracy of a new regression-derived formula developed from the National Fetal Growth Studies data to the common alternative method that uses the average of the gestational ages (GAs) calculated for each fetal biometric measurement (biparietal diameter, head circumference, abdominal circumference, and femur length). This retrospective cross-sectional study identified nonanomalous singleton pregnancies that had a crown-rump length plus at least 1 additional sonographic examination with complete fetal biometric measurements. With the use of the crown-rump length to establish the referent estimated date of delivery, each method's error (National Institute of Child Health and Human Development regression versus Hadlock average [Radiology 1984; 152:497-501]) at every examination was computed. Error, defined as the difference between the crown-rump length-derived GA and each method's predicted GA (weeks), was compared in 3 GA intervals: 1 (14 weeks-20 weeks 6 days), 2 (21 weeks-28 weeks 6 days), and 3 (≥29 weeks). In addition, the proportion of each method's examinations with errors outside prespecified (±) day ranges was computed by using odds ratios. A total of 16,904 sonograms were identified. The overall and GA-interval mean errors were significantly smaller for the regression method than for the average method (P < .01), and the regression method had significantly lower odds of an examination falling outside the specified range of error in GA intervals 2 (odds ratio, 1.15; 95% confidence interval, 1.01-1.31) and 3 (odds ratio, 1.24; 95% confidence interval, 1.17-1.32) than the average method. In a contemporary unselected population of women dated by a crown-rump length-derived GA, the National Institute of Child Health and Human Development regression formula produced fewer estimates outside a prespecified margin of error than the commonly used Hadlock average; the differences were most pronounced for GA estimates at 29 weeks and later. © 2017 by the American Institute of Ultrasound in Medicine.
Updated Magmatic Flux Rate Estimates for the Hawaii Plume
NASA Astrophysics Data System (ADS)
Wessel, P.
2013-12-01
Several studies have estimated the magmatic flux rate along the Hawaiian-Emperor Chain using a variety of methods and arriving at different results. These flux rate estimates have weaknesses because of incomplete data sets and different modeling assumptions, especially for the youngest portion of the chain (<3 Ma). While they generally agree on the 1st order features, there is less agreement on the magnitude and relative size of secondary flux variations. Some of these differences arise from the use of different methodologies, but the significance of this variability is difficult to assess due to a lack of confidence bounds on the estimates obtained with these disparate methods. All methods introduce some error, but to date there has been little or no quantification of error estimates for the inferred melt flux, making an assessment problematic. Here we re-evaluate the melt flux for the Hawaii plume with the latest gridded data sets (SRTM30+ and FAA 21.1) using several methods, including the optimal robust separator (ORS) and directional median filtering techniques (DiM). We also compute realistic confidence limits on the results. In particular, the DiM technique was specifically developed to aid in the estimation of surface loads that are superimposed on wider bathymetric swells and it provides error estimates on the optimal residuals. Confidence bounds are assigned separately for the estimated surface load (obtained from the ORS regional/residual separation techniques) and the inferred subsurface volume (from gravity-constrained isostasy and plate flexure optimizations). These new and robust estimates will allow us to assess which secondary features in the resulting melt flux curve are significant and should be incorporated when correlating melt flux variations with other geophysical and geochemical observations.
Chua, S S; Tea, M H; Rahman, M H A
2009-04-01
Drug administration errors were the second most frequent type of medication error, after prescribing errors, but the latter were often intercepted; hence, administration errors were more likely to reach the patients. Therefore, this study was conducted to determine the frequency and types of drug administration errors in a Malaysian hospital ward. This is a prospective study that involved direct, undisguised observation of drug administrations in a hospital ward. A researcher was stationed in the ward under study for 15 days to observe all drug administrations, which were recorded in a data collection form and then compared with the drugs prescribed for the patient. A total of 1118 opportunities for error were observed, and 127 administrations had errors. This gave an error rate of 11.4% [95% confidence interval (CI) 9.5-13.3]. If incorrect time errors were excluded, the error rate fell to 8.7% (95% CI 7.1-10.4). The most common types of drug administration errors were incorrect time (25.2%), followed by incorrect technique of administration (16.3%) and unauthorized drug errors (14.1%). In terms of clinical significance, 10.4% of the administration errors were considered potentially life-threatening. Intravenous routes were more likely to be associated with an administration error than oral routes (21.3% vs. 7.9%, P < 0.001). The study indicates that the frequency of drug administration errors in developing countries such as Malaysia is similar to that in developed countries. Incorrect time errors were also the most common type of drug administration error. A non-punitive system of reporting medication errors should be established to encourage more information to be documented so that risk management protocols can be developed and implemented.
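The headline rate of 11.4% (95% CI 9.5-13.3) follows from 127 errors in 1118 opportunities with a normal-approximation (Wald) interval; the small sketch below approximately reproduces those figures (the upper limit differs slightly from the published 13.3, presumably due to rounding or a different interval formula).

```python
import math

def wald_ci(errors, opportunities, z=1.96):
    """Proportion and Wald 95% confidence interval for an error rate."""
    p = errors / opportunities
    se = math.sqrt(p * (1 - p) / opportunities)
    return p, (p - z * se, p + z * se)

p, (lo, hi) = wald_ci(127, 1118)
print(f"{100*p:.1f}% (95% CI {100*lo:.1f}-{100*hi:.1f})")  # ~11.4% (9.5-13.2)
```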
Rani, Padmaja Kumari; Raman, Rajiv; Rachapalli, Sudhir R; Kulothungan, Vaitheeswaran; Kumaramanickavel, Govindasamy; Sharma, Tarun
2010-06-01
To report the prevalence of refractive errors and the associated risk factors in subjects with type 2 diabetes mellitus from an urban Indian population. Population-based, cross-sectional study. One thousand eighty participants selected from a pool of 1414 subjects with diabetes. A population-based sample of 1414 persons (age >40 years) with diabetes (identified as per the World Health Organization criteria) underwent a comprehensive eye examination, including objective and subjective refractions. One thousand eighty subjects who were phakic in the right eye with best corrected visual acuity of > or =20/40 were included in the analysis for prevalence of refractive errors. Univariate and multivariate analyses were done to find out the independent risk factors associated with the refractive errors. The mean refraction was +0.20+/-1.72, and the Median, +0.25 diopters. The prevalence of emmetropia (spherical equivalent [SE], -0.50 to +0.50 diopter sphere [DS]) was 39.26%. The prevalence of myopia (SE <-0.50 DS), high myopia (SE <-5.00 DS), hyperopia (SE >+0.50 DS), and astigmatism (SE <-0.50 cyl) was 19.4%, 1.6%, 39.7%, and 47.4%, respectively. The advancing age was an important risk factor for the three refractive errors: for myopia, odds ratio (OR; 95% confidence interval [CI] 4.06 [1.74-9.50]; for hyperopia, OR [95% CI] 5.85 [2.56-13.39]; and for astigmatism, OR [95% CI] 2.51 [1.34-4.71]). Poor glycemic control was associated with myopia (OR [95% CI] 4.15 [1.44-11.92]) and astigmatism (OR [95% CI] 2.01 [1.04-3.88]). Female gender was associated with hyperopia alone) OR [95% CI] 2.00 [1.42-2.82]. The present population-based study from urban India noted a high prevalence of refractive errors (60%) among diabetic subjects >40 years old; the prevalence of astigmatism (47%) was higher than hyperopia (40%) or myopia (20%). Copyright 2010 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Testing for clustering at many ranges inflates family-wise error rate (FWE).
Loop, Matthew Shane; McClure, Leslie A
2015-01-15
Testing for clustering at multiple ranges within a single dataset is a common practice in spatial epidemiology. It is not documented whether this approach has an impact on the type I error rate. We estimated the family-wise error rate (FWE) for the difference in Ripley's K functions test when testing at an increasing number of ranges at an alpha level of 0.05. Case and control locations were generated from a Cox process on a square area the size of the continental US (≈3,000,000 mi²). Two thousand Monte Carlo replicates were used to estimate the FWE with 95% confidence intervals when testing for clustering at one range, as well as at 10, 50, and 100 equidistant ranges. The estimated FWE and 95% confidence intervals when testing 10, 50, and 100 ranges were 0.22 (0.20-0.24), 0.34 (0.31-0.36), and 0.36 (0.34-0.38), respectively. Testing for clustering at multiple ranges within a single dataset inflated the FWE above the nominal level of 0.05. Investigators should construct simultaneous critical envelopes (available in the spatstat package in R) or use a test statistic that integrates the test statistics from each range, as suggested by the creators of the difference in Ripley's K functions test.
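The inflation reported above is the usual multiple-testing effect: with m independent tests at alpha = 0.05 the family-wise error rate would be 1 - (1 - alpha)^m, and correlation between neighboring ranges pulls the realized FWE below that bound but still well above 0.05. The sketch below contrasts the independence bound with a Monte Carlo estimate under strongly correlated null test statistics; the AR(1)-style correlation is an illustrative stand-in, not the actual dependence induced by Ripley's K at nearby ranges.

```python
import numpy as np
from scipy.stats import norm

alpha, m = 0.05, 50
print("independence bound:", 1 - (1 - alpha) ** m)   # ~0.92 for m = 50

# Monte Carlo FWE for correlated test statistics under the null
rng = np.random.default_rng(3)
rho, n_sim = 0.95, 5000
z_crit = norm.ppf(1 - alpha / 2)
rejections = 0
for _ in range(n_sim):
    e = rng.normal(size=m)
    z = np.empty(m)
    z[0] = e[0]
    for k in range(1, m):                      # strong correlation across ranges
        z[k] = rho * z[k - 1] + (1 - rho ** 2) ** 0.5 * e[k]
    rejections += np.any(np.abs(z) > z_crit)
print("correlated-test FWE:", rejections / n_sim)     # inflated well above 0.05
```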
Kuenze, Christopher; Eltouhky, Moataz; Thomas, Abbey; Sutherlin, Mark; Hart, Joseph
2016-05-01
Collecting torque data using a multimode dynamometer is common in sports-medicine research. The error in torque measurements across multiple sites and dynamometers has not been established. To assess the validity of 2 calibration protocols across 3 dynamometers and the error associated with torque measurement for each system. Observational study. 3 university laboratories at separate institutions. 2 Biodex System 3 dynamometers and 1 Biodex System 4 dynamometer. System calibration was completed using the manufacturer-recommended single-weight method and an experimental calibration method using a series of progressive weights. Both calibration methods were compared with a manually calculated theoretical torque across a range of applied weights. Relative error, absolute error, and percent error were calculated at each weight. Each outcome variable was compared between systems using 95% confidence intervals across low (0-65 Nm), moderate (66-110 Nm), and high (111-165 Nm) torque categorizations. Calibration coefficients were established for each system using both calibration protocols. However, within each system the calibration coefficients generated using the single-weight (System 4 = 2.42 [0.90], System 3a = 1.37 [1.11], System 3b = -0.96 [1.45]) and experimental calibration protocols (System 4 = 3.95 [1.08], System 3a = -0.79 [1.23], System 3b = 2.31 [1.66]) were similar and displayed acceptable mean relative error compared with calculated theoretical torque values. Overall, percent error was greatest for all 3 systems in low-torque conditions (System 4 = 11.66% [6.39], System 3a = 6.82% [11.98], System 3b = 4.35% [9.49]). The System 4 significantly overestimated torque across all 3 weight increments, and the System 3b overestimated torque over the moderate-torque increment. Conversion of raw voltage to torque values using the single-calibration-weight method is valid and comparable to a more complex multiweight calibration process; however, it is clear that calibration must be done for each individual system to ensure accurate data collection.
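A minimal sketch of a multi-weight calibration and the error metrics named above (relative, absolute and percent error). The readings are made up, the calibration coefficient is treated as a simple torque offset, and the exact definitions used by the authors may differ.

```python
import numpy as np

# Hypothetical applied weights expressed as theoretical torque (Nm) and the raw
# torque reported by one dynamometer at each weight (values are made up).
theoretical = np.array([20.0, 45.0, 70.0, 95.0, 120.0, 145.0])
measured = np.array([22.6, 47.1, 72.9, 97.4, 123.9, 147.8])

# Multi-weight calibration: estimate a constant offset (slope fixed at 1 here
# for simplicity; a fuller fit would also estimate the slope).
offset = np.mean(measured - theoretical)            # calibration coefficient, Nm

corrected = measured - offset
relative_error = corrected - theoretical            # signed error, Nm
absolute_error = np.abs(relative_error)             # Nm
percent_error = 100 * absolute_error / theoretical  # %
print(round(offset, 2), percent_error.round(2))
```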
Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette
2018-05-01
The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific, measurement error; Berkson-type error, which may also vary individually, occurs if measurements from fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results than the use of incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on Bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
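For intuition on the attenuation being corrected, the sketch below shows the textbook case: classical error shrinks a simple linear regression slope by the reliability ratio, and a method-of-moments correction divides the naive slope by an estimate of that ratio. This ignores the mixed-model structure, Berkson components and autocorrelation that the paper handles; all parameter values are assumed.

```python
import numpy as np

rng = np.random.default_rng(4)
n, beta = 5000, 0.8
x_true = rng.normal(0.0, 1.0, n)            # true exposure (e.g. UFP level)
y = 60 + beta * x_true + rng.normal(0, 2, n)

sigma_u = 0.7                                # classical error sd (assumed known)
x_obs = x_true + rng.normal(0, sigma_u, n)   # mobile-device style measurement

beta_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)
lam = 1 - sigma_u**2 / np.var(x_obs, ddof=1)     # estimated reliability ratio
beta_corrected = beta_naive / lam                # method-of-moments correction
print(round(beta_naive, 3), round(beta_corrected, 3))  # ~0.54 vs ~0.8
```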
Decroos, Francis Char; Stinnett, Sandra S; Heydary, Cynthia S; Burns, Russell E; Jaffe, Glenn J
2013-11-01
To determine the impact of segmentation error correction and precision of standardized grading of time domain optical coherence tomography (OCT) scans obtained during an interventional study for macular edema secondary to central retinal vein occlusion (CRVO). A reading center team of two readers and a senior reader evaluated 1199 OCT scans. Manual segmentation error correction (SEC) was performed. The frequency of SEC, resulting change in central retinal thickness after SEC, and reproducibility of SEC were quantified. Optical coherence tomography characteristics associated with the need for SECs were determined. Reading center teams graded all scans, and the reproducibility of this evaluation for scan quality at the fovea and cystoid macular edema was determined on 97 scans. Segmentation errors were observed in 360 (30.0%) scans, of which 312 were interpretable. On these 312 scans, the mean machine-generated central subfield thickness (CST) was 507.4 ± 208.5 μm compared to 583.0 ± 266.2 μm after SEC. Segmentation error correction resulted in a mean absolute CST correction of 81.3 ± 162.0 μm from baseline uncorrected CST. Segmentation error correction was highly reproducible (intraclass correlation coefficient [ICC] = 0.99-1.00). Epiretinal membrane (odds ratio [OR] = 2.3, P < 0.0001), subretinal fluid (OR = 2.1, P = 0.0005), and increasing CST (OR = 1.6 per 100-μm increase, P < 0.001) were associated with need for SEC. Reading center teams reproducibly graded scan quality at the fovea (87% agreement, kappa = 0.64, 95% confidence interval [CI] 0.45-0.82) and cystoid macular edema (92% agreement, kappa = 0.84, 95% CI 0.74-0.94). Optical coherence tomography images obtained during an interventional CRVO treatment trial can be reproducibly graded. Segmentation errors can cause clinically meaningful deviation in central retinal thickness measurements; however, these errors can be corrected reproducibly in a reading center setting. Segmentation errors are common on these images, can cause clinically meaningful errors in central retinal thickness measurement, and can be corrected reproducibly in a reading center setting.
NASA Technical Reports Server (NTRS)
Hruby, R. J.; Bjorkman, W. S.; Schmidt, S. F.; Carestia, R. A.
1979-01-01
Algorithms were developed that attempt to identify which sensor in a tetrad configuration has experienced a step failure. An algorithm is also described that provides a measure of the confidence with which the correct identification was made. Experimental results are presented from real-time tests conducted on a three-axis motion facility utilizing an ortho-skew tetrad strapdown inertial sensor package. The effects of prediction errors and of quantization on correct failure identification are discussed as well as an algorithm for detecting second failures through prediction.
Shaw, Andrew J; Ingham, Stephen A; Fudge, Barry W; Folland, Jonathan P
2013-12-01
This study assessed the between-test reliability of oxygen cost (OC) and energy cost (EC) in distance runners, and contrasted it with the smallest worthwhile change (SWC) of these measures. OC and EC displayed similar levels of within-subject variation (typical error < 3.85%). However, the typical error (2.75% vs 2.74%) was greater than the SWC (1.38% vs 1.71%) for both OC and EC, respectively, indicating insufficient sensitivity to confidently detect small, but meaningful, changes in OC and EC.
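Typical error and the smallest worthwhile change are commonly computed as the standard deviation of between-test differences divided by sqrt(2) (expressed in percent via a log transform) and as 0.2 times the between-subject standard deviation, respectively. A minimal sketch with hypothetical test-retest oxygen cost values, since the raw data are not given here:

```python
import numpy as np

# Hypothetical test-retest oxygen cost values (ml/kg/km) for 8 runners
test1 = np.array([201, 195, 210, 188, 205, 199, 192, 207], dtype=float)
test2 = np.array([204, 193, 214, 190, 202, 201, 195, 205], dtype=float)

diff_log = np.log(test2) - np.log(test1)
typical_error_pct = np.std(diff_log, ddof=1) / np.sqrt(2) * 100    # within-subject
between_sd_pct = np.std(np.log((test1 + test2) / 2), ddof=1) * 100
swc_pct = 0.2 * between_sd_pct                                     # smallest worthwhile change
print(round(typical_error_pct, 2), round(swc_pct, 2))
```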
ESTIMATING SAMPLE REQUIREMENTS FOR FIELD EVALUATIONS OF PESTICIDE LEACHING
A method is presented for estimating the number of samples needed to evaluate pesticide leaching threats to ground water at a desired level of precision. Sample size projections are based on desired precision (exhibited as relative tolerable error), level of confidence (90 or 95%...
EVALUATION OF ANALYTICAL REPORTING ERRORS GENERATED AS DESCRIBED IN SW-846 METHOD 8261A
SW-846 Method 8261A incorporates the vacuum distillation of analytes from samples, and their recoveries are characterized by internal standards. The internal standards measure recoveries with confidence intervals as functions of physical properties. The frequency the calculate...
NASA Astrophysics Data System (ADS)
Maheshwera Reddy Paturi, Uma; Devarasetti, Harish; Abimbola Fadare, David; Reddy Narala, Suresh Kumar
2018-04-01
In the present paper, an artificial neural network (ANN) and response surface methodology (RSM) are used to model surface roughness in WS2 (tungsten disulphide) solid lubricant assisted minimal quantity lubrication (MQL) machining. The experimental data for real-time MQL turning of Inconel 718 considered in this paper were taken from the literature [1]. In ANN modeling, performance parameters such as mean square error (MSE), mean absolute percentage error (MAPE) and average error in prediction (AEP) for the experimental data were determined based on the Levenberg-Marquardt (LM) feed-forward back-propagation training algorithm with tansig as the transfer function. The MATLAB toolbox was utilized in training and testing the neural network model. A neural network model with three input neurons, one hidden layer with five neurons and one output neuron (3-5-1 architecture) was found to be the most reliable and optimal. The coefficients of determination (R2) for the ANN and RSM models were 0.998 and 0.982, respectively. The surface roughness predictions from the ANN and RSM models were compared with experimentally measured values and found to be in good agreement. However, the prediction accuracy of the ANN model is relatively high compared with that of the RSM model.
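The model described is a 3-5-1 feed-forward network with tansig (tanh) hidden units trained by Levenberg-Marquardt in MATLAB. As a rough Python stand-in, the sketch below keeps the 3-5-1 architecture and the MSE/MAPE metrics but uses scikit-learn's L-BFGS solver, since Levenberg-Marquardt training is not available there; the input-output data are random placeholders, not the Inconel 718 measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error

rng = np.random.default_rng(5)
# Placeholder inputs (e.g. cutting speed, feed rate, depth of cut) and roughness
X = rng.uniform(0, 1, size=(60, 3))
y = 0.8 + 1.5 * X[:, 0] - 0.6 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.05, 60)

# 3 inputs -> 5 tanh hidden neurons -> 1 output (3-5-1 architecture)
net = MLPRegressor(hidden_layer_sizes=(5,), activation='tanh',
                   solver='lbfgs', max_iter=5000, random_state=0)
net.fit(X, y)
pred = net.predict(X)

mse = mean_squared_error(y, pred)
mape = mean_absolute_percentage_error(y, pred) * 100   # in percent
print(round(mse, 5), round(mape, 2))
```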
Verification of micro-scale photogrammetry for smooth three-dimensional object measurement
NASA Astrophysics Data System (ADS)
Sims-Waterhouse, Danny; Piano, Samanta; Leach, Richard
2017-05-01
By using sub-millimetre laser speckle pattern projection we show that photogrammetry systems are able to measure smooth three-dimensional objects with surface height deviations less than 1 μm. The projection of laser speckle patterns allows correspondences on the surface of smooth spheres to be found, and as a result, verification artefacts with low surface height deviations were measured. A combination of VDI/VDE and ISO standards were also utilised to provide a complete verification method, and determine the quality parameters for the system under test. Using the proposed method applied to a photogrammetry system, a 5 mm radius sphere was measured with an expanded uncertainty of 8.5 μm for sizing errors, and 16.6 μm for form errors with a 95 % confidence interval. Sphere spacing lengths between 6 mm and 10 mm were also measured by the photogrammetry system, and were found to have expanded uncertainties of around 20 μm with a 95 % confidence interval.
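The expanded uncertainties quoted (8.5 μm for sizing and 16.6 μm for form at a 95% confidence interval) follow the usual GUM convention U = k·u_c with a coverage factor k of about 2; the standard uncertainties below are back-calculated purely for illustration.

```python
# GUM-style expanded uncertainty: U = k * u_c, with k ~ 2 for ~95% confidence
def expanded_uncertainty(u_combined, k=2.0):
    return k * u_combined

# Back-calculated combined standard uncertainties implied by the reported values
print(expanded_uncertainty(4.25))   # ~8.5 um (sphere sizing errors)
print(expanded_uncertainty(8.3))    # ~16.6 um (form errors)
```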
Zhu, Mengjun; Tong, Xiaowei; Zhao, Rong; He, Xiangui; Zhao, Huijuan; Zhu, Jianfeng
2017-11-28
To investigate the prevalence and risk factors of undercorrected refractive error (URE) among people with diabetes in the Baoshan District of Shanghai, where data on undercorrected refractive error are limited. The study was a population-based survey of 649 persons (aged 60 years or older) with diabetes in Baoshan, Shanghai, in 2009. One copy of the questionnaire was completed for each subject. Examinations included standardized refraction and measurement of presenting and best-corrected visual acuity (BCVA), tonometry, slit lamp biomicroscopy, and fundus photography. The calculated age-standardized prevalence of URE was 16.63% (95% confidence interval [CI] 13.76-19.49). For visually impaired subjects (presenting vision worse than 20/40 in the better eye), the prevalence of URE was as high as 61.11%, and 75.93% of subjects could achieve visual acuity improvement of at least one line with appropriate spectacles. In multiple logistic regression analysis, older age, female gender, non-farmer occupation, increasing degree of myopia, lens opacity status, diabetic retinopathy (DR), body mass index (BMI) lower than normal, and poor glycaemic control were associated with higher URE levels. Wearing distance eyeglasses was a protective factor for URE. The prevalence of undercorrected refractive error in diabetic adults was high in Shanghai. Health education and regular refractive assessment are needed for diabetic adults. Persons with diabetes should be made more aware that poor vision is often correctable, especially those with risk factors.
Frequency of under-corrected refractive errors in elderly Chinese in Beijing.
Xu, Liang; Li, Jianjun; Cui, Tongtong; Tong, Zhongbiao; Fan, Guizhi; Yang, Hua; Sun, Baochen; Zheng, Yuanyuan; Jonas, Jost B
2006-07-01
The aim of the study was to evaluate the prevalence of under-corrected refractive error among elderly Chinese in the Beijing area. The population-based, cross-sectional cohort study comprised 4,439 subjects aged 40+ years out of 5,324 subjects asked to participate (response rate 83.4%). The cohort was divided into a rural part [1,973 (44.4%) subjects] and an urban part [2,466 (55.6%) subjects]. Habitual and best-corrected visual acuity were measured. Under-corrected refractive error was defined as an improvement in visual acuity of the better eye of at least two lines with the best possible refractive correction. The rate of under-corrected refractive error was 19.4% (95% confidence interval, 18.2, 20.6). In a multiple regression analysis, the prevalence and size of under-corrected refractive error in the better eye were significantly associated with a lower level of education (P<0.001), female gender (P<0.001), and age (P=0.001). Under-correction of refractive error is relatively common among elderly Chinese in the Beijing area when compared with data from other populations.
Elsaid, K; Truong, T; Monckeberg, M; McCarthy, H; Butera, J; Collins, C
2013-12-01
To evaluate the impact of electronic standardized chemotherapy templates on incidence and types of prescribing errors. A quasi-experimental interrupted time series with segmented regression. A 700-bed multidisciplinary tertiary care hospital with an ambulatory cancer center. A multidisciplinary team including oncology physicians, nurses, pharmacists and information technologists. Standardized, regimen-specific, chemotherapy prescribing forms were developed and implemented over a 32-month period. Trend of monthly prevented prescribing errors per 1000 chemotherapy doses during the pre-implementation phase (30 months), immediate change in the error rate from pre-implementation to implementation and trend of errors during the implementation phase. Errors were analyzed according to their types: errors in communication or transcription, errors in dosing calculation and errors in regimen frequency or treatment duration. Relative risk (RR) of errors in the post-implementation phase (28 months) compared with the pre-implementation phase was computed with 95% confidence interval (CI). Baseline monthly error rate was stable with 16.7 prevented errors per 1000 chemotherapy doses. A 30% reduction in prescribing errors was observed with initiating the intervention. With implementation, a negative change in the slope of prescribing errors was observed (coefficient = -0.338; 95% CI: -0.612 to -0.064). The estimated RR of transcription errors was 0.74; 95% CI (0.59-0.92). The estimated RR of dosing calculation errors was 0.06; 95% CI (0.03-0.10). The estimated RR of chemotherapy frequency/duration errors was 0.51; 95% CI (0.42-0.62). Implementing standardized chemotherapy-prescribing templates significantly reduced all types of prescribing errors and improved chemotherapy safety.
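For readers unfamiliar with segmented regression for interrupted time series, the sketch below illustrates the general form of such an analysis. It is not the authors' code; the monthly error-rate series, intervention timing, and effect sizes are simulated placeholders, and the model simply estimates a baseline trend, an immediate level change, and a post-implementation slope change.

```python
# Minimal segmented-regression sketch for an interrupted time series,
# assuming a monthly error-rate series with a known implementation month.
# All numbers below are illustrative, not data from the study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_pre, n_post = 30, 28                       # months before / after implementation
t = np.arange(n_pre + n_post, dtype=float)   # month index
post = (t >= n_pre).astype(float)            # indicator: templates in place
t_post = np.where(post == 1, t - n_pre, 0)   # months since implementation

# Simulated monthly prevented-error rate per 1000 chemotherapy doses
rate = 16.7 - 5.0 * post - 0.34 * t_post + rng.normal(0, 1.5, t.size)

X = sm.add_constant(np.column_stack([t, post, t_post]))
fit = sm.OLS(rate, X).fit()
print(fit.params)      # intercept, baseline trend, level change, slope change
print(fit.conf_int())  # 95% confidence intervals for each coefficient
```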
Booth, Rachelle; Sturgess, Emma; Taberner-Stokes, Alison; Peters, Mark
2012-11-01
To establish the baseline prescribing error rate in a tertiary paediatric intensive care unit (PICU) and to determine the impact of a zero tolerance prescribing (ZTP) policy incorporating a dedicated prescribing area and daily feedback of prescribing errors. A prospective, non-blinded, observational study was undertaken in a 12-bed tertiary PICU over a period of 134 weeks. Baseline prescribing error data were collected on weekdays for all patients for a period of 32 weeks, following which the ZTP policy was introduced. Daily error feedback was introduced after a further 12 months. Errors were sub-classified as 'clinical', 'non-clinical' and 'infusion prescription' errors and the effects of interventions considered separately. The baseline combined prescribing error rate was 892 (95 % confidence interval (CI) 765-1,019) errors per 1,000 PICU occupied bed days (OBDs), comprising 25.6 % clinical, 44 % non-clinical and 30.4 % infusion prescription errors. The combined interventions of ZTP plus daily error feedback were associated with a reduction in the combined prescribing error rate to 447 (95 % CI 389-504) errors per 1,000 OBDs (p < 0.0001), an absolute risk reduction of 44.5 % (95 % CI 40.8-48.0 %). Introduction of the ZTP policy was associated with a significant decrease in clinical and infusion prescription errors, while the introduction of daily error feedback was associated with a significant reduction in non-clinical prescribing errors. The combined interventions of ZTP and daily error feedback were associated with a significant reduction in prescribing errors in the PICU, in line with Department of Health requirements of a 40 % reduction within 5 years.
Evaluating segmentation error without ground truth.
Kohlberger, Timo; Singh, Vivek; Alvino, Chris; Bahlmann, Claus; Grady, Leo
2012-01-01
The automatic delineation of the boundaries of organs and other anatomical structures is a key component of many medical image processing systems. In this paper we present a generic learning approach based on a novel space of segmentation features, which can be trained to predict the overlap error and Dice coefficient of an arbitrary organ segmentation without knowing the ground truth delineation. We show the regressor to be much stronger a predictor of these error metrics than the responses of probabilistic boosting classifiers trained on the segmentation boundary. The presented approach not only allows us to build reliable confidence measures and fidelity checks, but also to rank several segmentation hypotheses against each other during online usage of the segmentation algorithm in clinical practice.
Price, Larry R; Raju, Nambury; Lurie, Anna; Wilkins, Charles; Zhu, Jianjun
2006-02-01
A specific recommendation of the 1999 Standards for Educational and Psychological Testing by the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education is that test publishers report estimates of the conditional standard error of measurement (SEM). Procedures for calculating the conditional (score-level) SEM based on raw scores are well documented; however, few procedures have been developed for estimating the conditional SEM of subtest or composite scale scores resulting from a nonlinear transformation. Item response theory provided the psychometric foundation to derive the conditional standard errors of measurement and confidence intervals for composite scores on the Wechsler Preschool and Primary Scale of Intelligence-Third Edition.
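As a rough illustration of how item response theory yields a conditional SEM, the sketch below sums hypothetical item information functions into a test information function, converts it to a conditional SEM on the latent-ability scale, and maps it through a linear scale transform. The item informations and the transform are invented for illustration; the composite-score derivation described in the paper is more involved.

```python
import numpy as np

# Hypothetical item information at a grid of ability (theta) values; in a real
# IRT analysis these would come from the fitted item parameters.
theta = np.linspace(-3, 3, 7)
item_info = np.array([[0.6, 0.9, 1.2, 1.4, 1.2, 0.9, 0.6],
                      [0.4, 0.8, 1.1, 1.3, 1.1, 0.8, 0.4],
                      [0.3, 0.6, 1.0, 1.2, 1.0, 0.6, 0.3]])
test_info = item_info.sum(axis=0)

sem_theta = 1.0 / np.sqrt(test_info)   # conditional SEM on the theta scale
a, b = 15.0, 100.0                     # hypothetical linear scaling of theta
sem_score = a * sem_theta              # delta-method SEM on the scaled-score metric

for th, s in zip(theta, sem_score):
    score = a * th + b
    print(f"score {score:5.1f}: SEM {s:4.1f}, "
          f"95% CI {score - 1.96 * s:5.1f} to {score + 1.96 * s:5.1f}")
```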
Propagation of angular errors in two-axis rotation systems
NASA Astrophysics Data System (ADS)
Torrington, Geoffrey K.
2003-10-01
Two-Axis Rotation Systems, or "goniometers," are used in diverse applications including telescope pointing, automotive headlamp testing, and display testing. There are three basic configurations in which a goniometer can be built depending on the orientation and order of the stages. Each configuration has a governing set of equations which convert motion between the system "native" coordinates to other base systems, such as direction cosines, optical field angles, or spherical-polar coordinates. In their simplest form, these equations neglect errors present in real systems. In this paper, a statistical treatment of error source propagation is developed which uses only tolerance data, such as can be obtained from the system mechanical drawings prior to fabrication. It is shown that certain error sources are fully correctable, partially correctable, or uncorrectable, depending upon the goniometer configuration and zeroing technique. The system error budget can be described by a root-sum-of-squares technique with weighting factors describing the sensitivity of each error source. This paper tabulates weighting factors at 67% (k=1) and 95% (k=2) confidence for various levels of maximum travel for each goniometer configuration. As a practical example, this paper works through an error budget used for the procurement of a system at Sandia National Laboratories.
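A root-sum-of-squares budget of the kind described can be sketched in a few lines; the error sources, tolerances, and sensitivity weights below are placeholders rather than the paper's tabulated values.

```python
import math

# Hypothetical angular error sources: (tolerance in arcsec, sensitivity weight)
sources = {
    "azimuth encoder":    (5.0, 1.00),
    "elevation encoder":  (5.0, 1.00),
    "axis orthogonality": (8.0, 0.71),
    "bearing wobble":     (4.0, 0.50),
}

# Root-sum-of-squares combination of the weighted contributions
rss = math.sqrt(sum((tol * w) ** 2 for tol, w in sources.values()))
print(f"combined pointing error, 67% confidence (k=1): {rss:.1f} arcsec")
print(f"combined pointing error, 95% confidence (k=2): {2 * rss:.1f} arcsec")
```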
Analytic score distributions for a spatially continuous tridirectional Monte Carlo transport problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Booth, T.E.
1996-01-01
The interpretation of the statistical error estimates produced by Monte Carlo transport codes is still somewhat of an art. Empirically, there are variance reduction techniques whose error estimates are almost always reliable, and there are variance reduction techniques whose error estimates are often unreliable. Unreliable error estimates usually result from inadequate large-score sampling from the score distribution's tail. Statisticians believe that more accurate confidence interval statements are possible if the general nature of the score distribution can be characterized. Here, the analytic score distribution for the exponential transform applied to a simple, spatially continuous Monte Carlo transport problem is provided. Anisotropic scattering and implicit capture are included in the theory. In large part, the analytic score distributions that are derived provide the basis for the ten new statistical quality checks in MCNP.
Jember, Abebaw; Hailu, Mignote; Messele, Anteneh; Demeke, Tesfaye; Hassen, Mohammed
2018-01-01
A medication error (ME) is any preventable event that may cause or lead to inappropriate medication use or patient harm. Voluntary reporting has a principal role in appreciating the extent and impact of medication errors. Thus, exploration of the proportion of medication error reporting and associated factors among nurses is important to inform service providers and program implementers so as to improve the quality of the healthcare services. Institution based quantitative cross-sectional study was conducted among 397 nurses from March 6 to May 10, 2015. Stratified sampling followed by simple random sampling technique was used to select the study participants. The data were collected using structured self-administered questionnaire which was adopted from studies conducted in Australia and Jordan. A pilot study was carried out to validate the questionnaire before data collection for this study. Bivariate and multivariate logistic regression models were fitted to identify factors associated with the proportion of medication error reporting among nurses. An adjusted odds ratio with 95% confidence interval was computed to determine the level of significance. The proportion of medication error reporting among nurses was found to be 57.4%. Regression analysis showed that sex, marital status, having made a medication error and medication error experience were significantly associated with medication error reporting. The proportion of medication error reporting among nurses in this study was found to be higher than other studies.
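The adjusted odds ratios reported here come from multivariate logistic regression; a generic sketch of that computation is shown below. The data frame, predictor names, and values are hypothetical, and the study's actual covariate coding may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: one row per nurse, binary outcome and predictors
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "reported_error": rng.integers(0, 2, 397),
    "female":         rng.integers(0, 2, 397),
    "married":        rng.integers(0, 2, 397),
    "made_error":     rng.integers(0, 2, 397),
})

X = sm.add_constant(df[["female", "married", "made_error"]])
fit = sm.Logit(df["reported_error"], X).fit(disp=0)

# Adjusted odds ratios with 95% confidence intervals on the OR scale
summary = pd.DataFrame({"OR": np.exp(fit.params),
                        "2.5%": np.exp(fit.conf_int()[0]),
                        "97.5%": np.exp(fit.conf_int()[1])})
print(summary)
```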
ERIC Educational Resources Information Center
Brown, Simon
2009-01-01
Many students have some difficulty with calculations. Simple dimensional analysis provides a systematic means of checking for errors and inconsistencies and for developing both new insight and new relationships between variables. Teaching dimensional analysis at even the most basic level strengthens the insight and confidence of students, and…
Statistics Using Just One Formula
ERIC Educational Resources Information Center
Rosenthal, Jeffrey S.
2018-01-01
This article advocates that introductory statistics be taught by basing all calculations on a single simple margin-of-error formula and deriving all of the standard introductory statistical concepts (confidence intervals, significance tests, comparisons of means and proportions, etc) from that one formula. It is argued that this approach will…
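One common form of such a margin-of-error formula is z times the standard error; the sketch below applies it to a mean and a proportion. This is an illustration of the general idea, not necessarily the exact formula the article builds its course around.

```python
import math

def margin_of_error(sd, n, z=1.96):
    """Margin of error for a sample mean: z * sd / sqrt(n)."""
    return z * sd / math.sqrt(n)

# Illustrative 95% confidence interval for a sample mean
xbar, sd, n = 72.0, 10.0, 100
moe = margin_of_error(sd, n)
print(f"95% CI for the mean: {xbar - moe:.2f} to {xbar + moe:.2f}")

# The same formula covers a proportion by taking sd = sqrt(p * (1 - p))
p, n = 0.40, 500
moe_p = margin_of_error(math.sqrt(p * (1 - p)), n)
print(f"95% CI for the proportion: {p - moe_p:.3f} to {p + moe_p:.3f}")
```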
Micro-organism distribution sampling for bioassays
NASA Technical Reports Server (NTRS)
Nelson, B. A.
1975-01-01
Purpose of sampling distribution is to characterize sample-to-sample variation so statistical tests may be applied, to estimate error due to sampling (confidence limits) and to evaluate observed differences between samples. Distribution could be used for bioassays taken in hospitals, breweries, food-processing plants, and pharmaceutical plants.
A procedure for the significance testing of unmodeled errors in GNSS observations
NASA Astrophysics Data System (ADS)
Li, Bofeng; Zhang, Zhetao; Shen, Yunzhong; Yang, Ling
2018-01-01
It is a crucial task to establish a precise mathematical model for global navigation satellite system (GNSS) observations in precise positioning. Due to the spatiotemporal complexity of, and limited knowledge on, systematic errors in GNSS observations, some residual systematic errors would inevitably remain even after corrected with empirical model and parameterization. These residual systematic errors are referred to as unmodeled errors. However, most of the existing studies mainly focus on handling the systematic errors that can be properly modeled and then simply ignore the unmodeled errors that may actually exist. To further improve the accuracy and reliability of GNSS applications, such unmodeled errors must be handled especially when they are significant. Therefore, a very first question is how to statistically validate the significance of unmodeled errors. In this research, we will propose a procedure to examine the significance of these unmodeled errors by the combined use of the hypothesis tests. With this testing procedure, three components of unmodeled errors, i.e., the nonstationary signal, stationary signal and white noise, are identified. The procedure is tested by using simulated data and real BeiDou datasets with varying error sources. The results show that the unmodeled errors can be discriminated by our procedure with approximately 90% confidence. The efficiency of the proposed procedure is further reassured by applying the time-domain Allan variance analysis and frequency-domain fast Fourier transform. In summary, the spatiotemporally correlated unmodeled errors are commonly existent in GNSS observations and mainly governed by the residual atmospheric biases and multipath. Their patterns may also be impacted by the receiver.
Phenomenology of the sound-induced flash illusion.
Abadi, Richard V; Murphy, Jonathan S
2014-07-01
Past studies, using pairings of auditory tones and visual flashes, which were static and coincident in space but variable in time, demonstrated errors in judging the temporal patterning of the visual flashes-the sound-induced flash illusion. These errors took one of the two forms: under-reporting (sound-induced fusion) or over-reporting (sound-induced fission) of the flash numbers. Our study had three objectives: to examine the robustness of both illusions and to consider the effects of stimulus set and response bias. To this end, we used an extended range of fixed spatial location flash-tone pairings, examined stimuli that were variable in space and time and measured confidence in judging flash numbers. Our results indicated that the sound-induced flash illusion is a robust percept, a finding underpinned by the confidence measures. Sound-induced fusion was found to be more robust than sound-induced fission and a most likely outcome when high numbers of flashes were incorporated within an incongruent flash-tone pairing. Conversely, sound-induced fission was the most likely outcome for the flash-tone pairing which contained two flashes. Fission was also shown to be strongly driven by stimuli confounds such as categorical boundary conditions (e.g. flash-tone pairings with ≤2 flashes) and compressed response options. These findings suggest whilst both fission and fusion are associated with 'auditory driving', the differences in the occurrence and strength of the two illusions not only reflect the separate neuronal mechanisms underlying audio and visual signal processing, but also the test conditions that have been used to investigate the sound-induced flash illusion.
NASA Astrophysics Data System (ADS)
Charonko, John J.; Vlachos, Pavlos P.
2013-06-01
Numerous studies have established firmly that particle image velocimetry (PIV) is a robust method for non-invasive, quantitative measurements of fluid velocity, and that when carefully conducted, typical measurements can accurately detect displacements in digital images with a resolution well below a single pixel (in some cases well below a hundredth of a pixel). However, to date, these estimates have only been able to provide guidance on the expected error for an average measurement under specific image quality and flow conditions. This paper demonstrates a new method for estimating the uncertainty bounds to within a given confidence interval for a specific, individual measurement. Here, cross-correlation peak ratio, the ratio of primary to secondary peak height, is shown to correlate strongly with the range of observed error values for a given measurement, regardless of flow condition or image quality. This relationship is significantly stronger for phase-only generalized cross-correlation PIV processing, while the standard correlation approach showed weaker performance. Using an analytical model of the relationship derived from synthetic data sets, the uncertainty bounds at a 95% confidence interval are then computed for several artificial and experimental flow fields, and the resulting errors are shown to match closely to the predicted uncertainties. While this method stops short of being able to predict the true error for a given measurement, knowledge of the uncertainty level for a PIV experiment should provide great benefits when applying the results of PIV analysis to engineering design studies and computational fluid dynamics validation efforts. Moreover, this approach is exceptionally simple to implement and requires negligible additional computational cost.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarkar, Saradwata; Johnson, Timothy D.; Ma, Bing
2012-07-01
Purpose: Assuming that early tumor volume change is a biomarker for response to therapy, accurate quantification of early volume changes could aid in adapting an individual patient's therapy and lead to shorter clinical trials. We investigated an image registration-based approach for tumor volume change quantification that may more reliably detect smaller changes that occur in shorter intervals than can be detected by existing algorithms. Methods and Materials: Variance and bias of the registration-based approach were evaluated using retrospective, in vivo, very-short-interval diffusion magnetic resonance imaging scans where true zero tumor volume change is unequivocally known and synthetic data, respectively. The interval scans were nonlinearly registered using two similarity measures: mutual information (MI) and normalized cross-correlation (NCC). Results: The 95% confidence interval of the percentage volume change error was (-8.93% to 10.49%) for MI-based and (-7.69% to 8.83%) for NCC-based registrations. Linear mixed-effects models demonstrated that error in measuring volume change increased with tumor volume and decreased with the tumor's normalized mutual information, even when NCC was the similarity measure being optimized during registration. The 95% confidence interval of the relative volume change error for the synthetic examinations with known changes over ±80% of reference tumor volume was (-3.02% to 3.86%). Statistically significant bias was not demonstrated. Conclusion: A low-noise, low-bias tumor volume change measurement algorithm using nonlinear registration is described. Errors in change measurement were a function of tumor volume and the normalized mutual information content of the tumor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, Jason P.; Carlson, Deborah K.; Ortiz, Anne
Accurate location of seismic events is crucial for nuclear explosion monitoring. There are several sources of error in seismic location that must be taken into account to obtain high confidence results. Most location techniques account for uncertainties in the phase arrival times (measurement error) and the bias of the velocity model (model error), but they do not account for the uncertainty of the velocity model bias. By determining and incorporating this uncertainty in the location algorithm we seek to improve the accuracy of the calculated locations and uncertainty ellipses. In order to correct for deficiencies in the velocity model, it is necessary to apply station-specific corrections to the predicted arrival times. Both master event and multiple event location techniques assume that the station corrections are known perfectly, when in reality there is an uncertainty associated with these corrections. For multiple event location algorithms that calculate station corrections as part of the inversion, it is possible to determine the variance of the corrections. The variance can then be used to weight the arrivals associated with each station, thereby giving more influence to stations with consistent corrections. We have modified an existing multiple event location program (based on PMEL, Pavlis and Booker, 1983). We are exploring weighting arrivals with the inverse of the station correction standard deviation as well as using the conditional probability of the calculated station corrections. This is in addition to the weighting already given to the measurement and modeling error terms. We re-locate a group of mining explosions that occurred at Black Thunder, Wyoming, and compare the results to those generated without accounting for station correction uncertainty.
Gross, Eliza L.; Lindsey, Bruce D.; Rupert, Michael G.
2012-01-01
Field blank samples help determine the frequency and magnitude of contamination bias, and replicate samples help determine the sampling variability (error) of measured analyte concentrations. Quality control data were evaluated for calcium, magnesium, sodium, potassium, chloride, sulfate, fluoride, silica, and total dissolved solids. A 99-percent upper confidence limit is calculated from field blanks to assess the potential for contamination bias. For magnesium, potassium, chloride, sulfate, and fluoride, potential contamination in more than 95 percent of environmental samples is less than or equal to the common maximum reporting level. Contamination bias has little effect on measured concentrations greater than 4.74 mg/L (milligrams per liter) for calcium, 14.98 mg/L for silica, 4.9 mg/L for sodium, and 120 mg/L for total dissolved solids. Estimates of sampling variability are calculated for high and low ranges of concentration for major ions and total dissolved solids. Examples showing the calculation of confidence intervals and how to determine whether measured differences between two water samples are significant are presented.
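The sketch below illustrates, under simplifying normal-theory assumptions, how a 99-percent upper confidence limit might be computed from field-blank concentrations and how replicate pairs yield a pooled relative standard deviation. The analyte values are invented, and the report's exact statistical procedure may differ.

```python
import numpy as np
from scipy import stats

# Hypothetical chloride field-blank concentrations (mg/L); illustrative only.
blanks = np.array([0.02, 0.03, 0.02, 0.05, 0.01, 0.04, 0.03, 0.02])

# One-sided 99-percent upper confidence limit on potential contamination,
# here taken as a normal-theory bound on the mean of the blanks.
n = blanks.size
ucl99 = blanks.mean() + stats.t.ppf(0.99, n - 1) * blanks.std(ddof=1) / np.sqrt(n)
print(f"99% upper confidence limit: {ucl99:.3f} mg/L")

# Sampling variability from replicate pairs: pooled relative standard deviation
env = np.array([12.0, 55.0, 30.1])   # environmental samples (mg/L)
rep = np.array([12.4, 54.1, 29.5])   # sequential replicates (mg/L)
rsd = np.sqrt(np.mean(((env - rep) / np.sqrt(2)) ** 2 / ((env + rep) / 2) ** 2))
print(f"pooled relative standard deviation: {100 * rsd:.1f}%")
```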
Weeks, James L
2006-06-01
The Mine Safety and Health Administration (MSHA) proposes to issue citations for non-compliance with the exposure limit for respirable coal mine dust when measured exposure exceeds the exposure limit with a "high degree of confidence." This criterion threshold value (CTV) is derived from the sampling and analytical error of the measurement method. This policy is based on a combination of statistical and legal reasoning: the one-tailed 95% confidence limit of the sampling method, the apparent principle of due process and a standard of proof analogous to "beyond a reasonable doubt." This policy raises the effective exposure limit, it is contrary to the precautionary principle, it is not a fair sharing of the burden of uncertainty, and it employs an inappropriate standard of proof. Its own advisory committee and NIOSH have advised against this policy. For longwall mining sections, it results in a failure to issue citations for approximately 36% of the measured values that exceed the statutory exposure limit. Citations for non-compliance with the respirable dust standard should be issued for any measure exposure that exceeds the exposure limit.
Leadership in the '80s: Essays on Higher Education.
ERIC Educational Resources Information Center
Argyris, Chris; Cyert, Richard M.
Two essays and two commentaries on leadership in higher education in the 1980s are presented. In "Education Administrators and Professionals," Chris Argyris considers the decline of public confidence in institutions and professionals by elaborating the concepts of single-loop (detecting and correcting error without altering underlying…
Motivation techniques for supervision
NASA Technical Reports Server (NTRS)
Gray, N. D.
1974-01-01
Guide has been published which deals with various aspects of employee motivation. Training methods are designed to improve communication between supervisors and subordinates, to create feeling of achievement and recognition for every employee, and to retain personnel confidence in spite of some negative motivators. End result of training is reduction or prevention of errors.
Nixon, Richard M; Wonderling, David; Grieve, Richard D
2010-03-01
Cost-effectiveness analyses (CEA) alongside randomised controlled trials commonly estimate incremental net benefits (INB), with 95% confidence intervals, and compute cost-effectiveness acceptability curves and confidence ellipses. Two alternative non-parametric methods for estimating INB are to apply the central limit theorem (CLT) or to use the non-parametric bootstrap method, although it is unclear which method is preferable. This paper describes the statistical rationale underlying each of these methods and illustrates their application with a trial-based CEA. It compares the sampling uncertainty from using either technique in a Monte Carlo simulation. The experiments are repeated varying the sample size and the skewness of costs in the population. The results showed that, even when data were highly skewed, both methods accurately estimated the true standard errors (SEs) when sample sizes were moderate to large (n>50), and also gave good estimates for small data sets with low skewness. However, when sample sizes were relatively small and the data highly skewed, using the CLT rather than the bootstrap led to slightly more accurate SEs. We conclude that while in general using either method is appropriate, the CLT is easier to implement, and provides SEs that are at least as accurate as the bootstrap. (c) 2009 John Wiley & Sons, Ltd.
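For concreteness, the sketch below contrasts the two non-parametric approaches on simulated skewed data: a CLT-based standard error for the incremental net benefit versus a bootstrap standard error. The arm sizes and lognormal parameters are arbitrary and stand in for trial data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-patient net benefit in two trial arms (skewed, cost-like)
nb_treat = rng.lognormal(mean=8.0, sigma=1.0, size=120)
nb_control = rng.lognormal(mean=7.9, sigma=1.0, size=120)

inb = nb_treat.mean() - nb_control.mean()

# Central limit theorem standard error of the incremental net benefit
se_clt = np.sqrt(nb_treat.var(ddof=1) / nb_treat.size +
                 nb_control.var(ddof=1) / nb_control.size)

# Non-parametric bootstrap standard error (resample each arm with replacement)
boots = []
for _ in range(2000):
    bt = rng.choice(nb_treat, nb_treat.size, replace=True)
    bc = rng.choice(nb_control, nb_control.size, replace=True)
    boots.append(bt.mean() - bc.mean())
se_boot = np.std(boots, ddof=1)

print(f"INB = {inb:.1f}, SE(CLT) = {se_clt:.1f}, SE(bootstrap) = {se_boot:.1f}")
```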
NASA Technical Reports Server (NTRS)
Voss, P. B.; Stimpfle, R. M.; Cohen, R. C.; Hanisco, T. F.; Bonne, G. P.; Perkins, K. K.; Lanzendorf, E. J.; Anderson, J. G.; Salawitch, R. J.
2001-01-01
We examine inorganic chlorine (Cly) partitioning in the summer lower stratosphere using in situ ER-2 aircraft observations made during the Photochemistry of Ozone Loss in the Arctic Region in Summer (POLARIS) campaign. New steady state and numerical models estimate [ClONO2]/[HCl] using currently accepted photochemistry. These models are tightly constrained by observations with OH (parameterized as a function of solar zenith angle) substituting for modeled HO2 chemistry. We find that inorganic chlorine photochemistry alone overestimates observed [ClONO2]/[HCl] by approximately 55-60% at mid and high latitudes. On the basis of POLARIS studies of the inorganic chlorine budget, [ClO]/[ClONO2], and an intercomparison with balloon observations, the most direct explanation for the model-measurement discrepancy in Cly partitioning is an error in the reactions, rate constants, and measured species concentrations linking HCl and ClO (simulated [ClO]/[HCl] too high) in combination with a possible systematic error in the ER-2 ClONO2 measurement (too low). The high precision of our simulation (+/-15% 1-sigma for [ClONO2]/[HCl], which is compared with observations) increases confidence in the observations, photolysis calculations, and laboratory rate constants. These results, along with other findings, should lead to improvements in both the accuracy and precision of stratospheric photochemical models.
SeqMule: automated pipeline for analysis of human exome/genome sequencing data.
Guo, Yunfei; Ding, Xiaolei; Shen, Yufeng; Lyon, Gholson J; Wang, Kai
2015-09-18
Next-generation sequencing (NGS) technology has greatly helped us identify disease-contributory variants for Mendelian diseases. However, users are often faced with issues such as software compatibility, complicated configuration, and no access to high-performance computing facility. Discrepancies exist among aligners and variant callers. We developed a computational pipeline, SeqMule, to perform automated variant calling from NGS data on human genomes and exomes. SeqMule integrates computational-cluster-free parallelization capability built on top of the variant callers, and facilitates normalization/intersection of variant calls to generate consensus set with high confidence. SeqMule integrates 5 alignment tools, 5 variant calling algorithms and accepts various combinations all by one-line command, therefore allowing highly flexible yet fully automated variant calling. In a modern machine (2 Intel Xeon X5650 CPUs, 48 GB memory), when fast turn-around is needed, SeqMule generates annotated VCF files in a day from a 30X whole-genome sequencing data set; when more accurate calling is needed, SeqMule generates consensus call set that improves over single callers, as measured by both Mendelian error rate and consistency. SeqMule supports Sun Grid Engine for parallel processing, offers turn-key solution for deployment on Amazon Web Services, allows quality check, Mendelian error check, consistency evaluation, HTML-based reports. SeqMule is available at http://seqmule.openbioinformatics.org.
Garibaldi, Brian Thomas; Niessen, Timothy; Gelber, Allan Charles; Clark, Bennett; Lee, Yizhen; Madrazo, Jose Alejandro; Manesh, Reza Sedighi; Apfel, Ariella; Lau, Brandyn D; Liu, Gigi; Canzoniero, Jenna VanLiere; Sperati, C John; Yeh, Hsin-Chieh; Brotman, Daniel J; Traill, Thomas A; Cayea, Danelle; Durso, Samuel C; Stewart, Rosalyn W; Corretti, Mary C; Kasper, Edward K; Desai, Sanjay V
2017-10-06
Physicians spend less time at the bedside in the modern hospital setting which has contributed to a decline in physical diagnosis, and in particular, cardiopulmonary examination skills. This trend may be a source of diagnostic error and threatens to erode the patient-physician relationship. We created a new bedside cardiopulmonary physical diagnosis curriculum and assessed its effects on post-graduate year-1 (PGY-1; interns) attitudes, confidence and skill. One hundred five internal medicine interns in a large U.S. internal medicine residency program participated in the Advancing Bedside Cardiopulmonary Examination Skills (ACE) curriculum while rotating on a general medicine inpatient service between 2015 and 2017. Teaching sessions included exam demonstrations using healthy volunteers and real patients, imaging didactics, computer learning/high-fidelity simulation, and bedside teaching with experienced clinicians. Primary outcomes were attitudes, confidence and skill in the cardiopulmonary physical exam as determined by a self-assessment survey, and a validated online cardiovascular examination (CE). Interns who participated in ACE (ACE interns) by mid-year more strongly agreed they had received adequate training in the cardiopulmonary exam compared with non-ACE interns. ACE interns were more confident than non-ACE interns in performing a cardiac exam, assessing the jugular venous pressure, distinguishing 'a' from 'v' waves, and classifying systolic murmurs as crescendo-decrescendo or holosystolic. Only ACE interns had a significant improvement in score on the mid-year CE. A comprehensive bedside cardiopulmonary physical diagnosis curriculum improved trainee attitudes, confidence and skill in the cardiopulmonary examination. These results provide an opportunity to re-examine the way physical examination is taught and assessed in residency training programs.
Uncertainty estimates in broadband seismometer sensitivities using microseisms
Ringler, Adam T.; Storm, Tyler L.; Gee, Lind S.; Hutt, Charles R.; Wilson, David C.
2015-01-01
The midband sensitivity of a seismic instrument is one of the fundamental parameters used in published station metadata. Any errors in this value can compromise amplitude estimates in otherwise high-quality data. To estimate an upper bound in the uncertainty of the midband sensitivity for modern broadband instruments, we compare daily microseism (4- to 8-s period) amplitude ratios between the vertical components of colocated broadband sensors across the IRIS/USGS (network code IU) seismic network. We find that the mean of the 145,972 daily ratios used between 2002 and 2013 is 0.9895 with a standard deviation of 0.0231. This suggests that the ratio between instruments shows a small bias and considerable scatter. We also find that these ratios follow a standard normal distribution (R 2 = 0.95442), which suggests that the midband sensitivity of an instrument has an error of no greater than ±6 % with a 99 % confidence interval. This gives an upper bound on the precision to which we know the sensitivity of a fielded instrument.
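Using only the summary statistics quoted above, the interval behind the stated ±6 % bound can be reproduced with a normal-distribution calculation, as sketched below; this assumes the daily ratios are adequately described by their mean and standard deviation.

```python
import numpy as np
from scipy import stats

# Reported summary statistics for the daily microseism amplitude ratios
mean_ratio, sd_ratio = 0.9895, 0.0231

# If the ratios are approximately normally distributed, a two-sided 99% bound
# on a single instrument ratio is mean ± z_0.995 * sd.
z = stats.norm.ppf(0.995)
lower, upper = mean_ratio - z * sd_ratio, mean_ratio + z * sd_ratio
print(f"99% interval on the ratio: {lower:.3f} to {upper:.3f}")  # roughly ±6% about unity
```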
Evaluation of Cartosat-1 Multi-Scale Digital Surface Modelling Over France
Gianinetto, Marco
2009-01-01
On 5 May 2005, the Indian Space Research Organization launched Cartosat-1, the eleventh satellite of its constellation, dedicated to the stereo viewing of the Earth's surface for terrain modeling and large-scale mapping, from the Satish Dhawan Space Centre (India). In early 2006, the Indian Space Research Organization started the Cartosat-1 Scientific Assessment Programme, jointly established with the International Society for Photogrammetry and Remote Sensing. Within this framework, this study evaluated the capabilities of digital surface modeling from Cartosat-1 stereo data for the French test sites of Mausanne les Alpilles and Salon de Provence. The investigation pointed out that for hilly territories it is possible to produce high-resolution digital surface models with a root mean square error less than 7.1 m and a linear error at 90% confidence level less than 9.5 m. The accuracy of the generated digital surface models also fulfilled the requirements of the French Reference 3D®, so Cartosat-1 data may be used to produce or update such kinds of products. PMID:22412311
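The two accuracy figures quoted (root mean square error and linear error at 90% confidence) can be computed from a set of height differences as sketched below; the error sample here is simulated, not the Cartosat-1 data.

```python
import numpy as np

# Hypothetical vertical errors (metres) between the DSM and reference heights
dz = np.random.default_rng(7).normal(0.0, 4.0, size=5000)

rmse = np.sqrt(np.mean(dz ** 2))
le90 = np.percentile(np.abs(dz), 90)   # linear error at 90% confidence level

print(f"RMSE = {rmse:.2f} m")
print(f"LE90 = {le90:.2f} m")
```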
Filling in the gaps of predeployment fleet surgical team training using a team-centered approach.
Hoang, Tuan N; Kang, Jeff; Laporta, Anthony J; Makler, Vyacheslav I; Chalut, Carissa
2013-01-01
Teamwork and successful communication are essential parts of any medical specialty, especially in the trauma setting. U.S. Navy physicians developed a course for deploying fleet surgical teams to reinforce teamwork, communication, and baseline knowledge of trauma management. The course combines 22 hours of classroom didactics along with 28 hours of hands-on simulation and cadaver-based laboratories to reinforce classroom concepts. It culminates in a 6-hour, multiwave exercise of multiple, critically injured victims of a mass casualty and uses the "Cut Suit" (Human Worn Partial Task Surgical Simulator; Strategic Operations), which enables performance of multiple realistic surgical procedures as encountered on real casualties. Participants are graded on time taken from initial patient encounter to disposition and the number of errors made. Pre- and post-training written examinations are also given. The course is graded based on participants' evaluation of the course. The majority of the participants indicated that the course promoted teamwork, enhanced knowledge, and gave confidence. Only 51.72% of participants felt confident in dealing with trauma patients before the course, while 82.76% felt confident afterward (p = .01). Both the time spent on each patient and the number of errors made also decreased after course completion. The course was successful in improving teamwork, communication and base knowledge of all the team members.
Methods for recalibration of mass spectrometry data
Tolmachev, Aleksey V [Richland, WA; Smith, Richard D [Richland, WA
2009-03-03
Disclosed are methods for recalibrating mass spectrometry data that provide improvement in both mass accuracy and precision by adjusting for experimental variance in parameters that have a substantial impact on mass measurement accuracy. Optimal coefficients are determined using correlated pairs of mass values compiled by matching sets of measured and putative mass values that minimize overall effective mass error and mass error spread. Coefficients are subsequently used to correct mass values for peaks detected in the measured dataset, providing recalibration thereof. Sub-ppm mass measurement accuracy has been demonstrated on a complex fungal proteome after recalibration, providing improved confidence for peptide identifications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nicholson, Andrew D.; Croft, Stephen; McElroy, Robert Dennis
2017-08-01
The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings and quantifying SNM at nuclear facilities for safeguards. No assay method is complete without “error bars,” which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically provide error bars and also partition total uncertainty into “random” and “systematic” components so that, for example, an error bar can be developed for the total mass estimate in multiple items. Uncertainty Quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed and achievable using modern statistical methods.
Two-phase adiabatic pressure drop experiments and modeling under micro-gravity conditions
NASA Astrophysics Data System (ADS)
Longeot, Matthieu J.; Best, Frederick R.
1995-01-01
Thermal systems for space applications based on two phase flow have several advantages over single phase systems. Two phase thermal energy management and dynamic power conversion systems have the capability of achieving high specific power levels. However, before two phase systems for space applications can be designed effectively, knowledge of the flow behavior in a "0-g" acceleration environment is necessary. To meet this need, two phase flow experiments were conducted by the Interphase Transport Phenomena Laboratory Group (ITP) aboard the National Aeronautics and Space Administration's (NASA) KC-135, using R12 as the working fluid. The present work is concerned with modeling of two-phase pressure drop under 0-g conditions, for bubbly and slug flow regimes. The set of data from the ITP group includes 3 bubbly points, 9 bubbly/slug points and 6 slug points. These two phase pressure drop data were collected in 1991 and 1992. A methodology to correct and validate the data was developed to achieve high levels of confidence. A homogeneous model was developed to predict the pressure drop for particular flow conditions. This model, which uses the Blasius Correlation, was found to be accurate for bubbly and bubbly/slug flows, with errors not larger than 28%. For slug flows, however, the errors are greater, attaining values up to 66%.
Glass viscosity calculation based on a global statistical modelling approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fluegel, Alex
2007-02-01
A global statistical glass viscosity model was developed for predicting the complete viscosity curve, based on more than 2200 composition-property data of silicate glasses from the scientific literature, including soda-lime-silica container and float glasses, TV panel glasses, borosilicate fiber wool and E type glasses, low expansion borosilicate glasses, glasses for nuclear waste vitrification, lead crystal glasses, binary alkali silicates, and various further compositions from over half a century. It is shown that within a measurement series from a specific laboratory the reported viscosity values are often over-estimated at higher temperatures due to alkali and boron oxide evaporation during the measurement and glass preparation, including data by Lakatos et al. (1972) and the recently published High temperature glass melt property database for process modeling by Seward et al. (2005). Similarly, in the glass transition range many experimental data of borosilicate glasses are reported too high due to phase separation effects. The developed global model corrects those errors. The model standard error was 9-17°C, with R^2 = 0.985-0.989. The prediction 95% confidence interval for glass in mass production largely depends on the glass composition of interest, the composition uncertainty, and the viscosity level. New insights in the mixed-alkali effect are provided.
Alderighi, Marzia; Ferrari, Raffaello; Maghini, Irene; Del Felice, Alessandra; Masiero, Stefano
2016-11-21
Radiographic examination is the gold standard for evaluating spine curves, but ionising radiation limits routine use. Non-invasive methods, such as the skin-surface goniometer (IncliMed®), should be used instead. The aim was to evaluate intra- and interrater reliability in assessing sagittal curves and mobility of the spine with IncliMed®, in a reliability study of competitive football players. Thoracic kyphosis, lumbar lordosis and mobility of the spine were assessed by IncliMed®. Measurements were repeated twice by each examiner during the same session with between-rater blinding. Intrarater and interrater reliability were measured by the Intraclass Correlation Coefficient (ICC), 95% Confidence Interval (95% CI) and Standard Error of Measurement (SEM). Thirty-four healthy female football players (19.17 ± 4.52 years) were enrolled. Statistical results showed high intrarater (0.805-0.923) and interrater (0.701-0.886) reliability (ICC > 0.8). The obtained intra- and interrater SEM were low, with overall absolute intrarater values between 1.39° and 2.76° and overall interrater values between 1.71° and 4.25°. IncliMed® provides high intra- and interrater reliability in healthy subjects, with limited Standard Error of Measurement. These results encourage its use in clinical practice and scientific research.
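The reliability statistics used here follow standard formulas; the sketch below computes a one-way ICC from simulated repeated measurements and derives the SEM as SD·sqrt(1 − ICC). The kyphosis values and noise levels are invented, and the study may have used a different ICC model.

```python
import numpy as np

# Hypothetical repeated thoracic kyphosis measurements (degrees) by one rater:
# one row per player, two columns for the two trials. Illustrative only.
rng = np.random.default_rng(3)
truth = rng.normal(40, 8, 34)
trials = np.column_stack([truth + rng.normal(0, 2, 34),
                          truth + rng.normal(0, 2, 34)])

# ICC(1,1)-style estimate from a one-way ANOVA decomposition (k = 2 trials)
ms_between = 2 * np.var(trials.mean(axis=1), ddof=1)
ms_within = np.mean(np.var(trials, axis=1, ddof=1))
icc = (ms_between - ms_within) / (ms_between + ms_within)

# Standard error of measurement from the pooled SD and the ICC
sd = trials.std(ddof=1)
sem = sd * np.sqrt(1 - icc)
print(f"ICC = {icc:.2f}, SEM = {sem:.2f} degrees")
```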
Darrington, Richard T; Jiao, Jim
2004-04-01
Rapid and accurate stability prediction is essential to pharmaceutical formulation development. Commonly used stability prediction methods include monitoring parent drug loss at intended storage conditions or initial rate determination of degradants under accelerated conditions. Monitoring parent drug loss at the intended storage condition does not provide a rapid and accurate stability assessment because often <0.5% drug loss is all that can be observed in a realistic time frame, while the accelerated initial rate method in conjunction with extrapolation of rate constants using the Arrhenius or Eyring equations often introduces large errors in shelf-life prediction. In this study, the shelf life prediction of a model pharmaceutical preparation utilizing sensitive high-performance liquid chromatography-mass spectrometry (LC/MS) to directly quantitate degradant formation rates at the intended storage condition is proposed. This method was compared to traditional shelf life prediction approaches in terms of time required to predict shelf life and associated error in shelf life estimation. Results demonstrated that the proposed LC/MS method using initial rates analysis provided significantly improved confidence intervals for the predicted shelf life and required less overall time and effort to obtain the stability estimation compared to the other methods evaluated. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association.
Outpatient Prescribing Errors and the Impact of Computerized Prescribing
Gandhi, Tejal K; Weingart, Saul N; Seger, Andrew C; Borus, Joshua; Burdick, Elisabeth; Poon, Eric G; Leape, Lucian L; Bates, David W
2005-01-01
Background Medication errors are common among inpatients and many are preventable with computerized prescribing. Relatively little is known about outpatient prescribing errors or the impact of computerized prescribing in this setting. Objective To assess the rates, types, and severity of outpatient prescribing errors and understand the potential impact of computerized prescribing. Design Prospective cohort study in 4 adult primary care practices in Boston using prescription review, patient survey, and chart review to identify medication errors, potential adverse drug events (ADEs) and preventable ADEs. Participants Outpatients over age 18 who received a prescription from 24 participating physicians. Results We screened 1879 prescriptions from 1202 patients, and completed 661 surveys (response rate 55%). Of the prescriptions, 143 (7.6%; 95% confidence interval (CI) 6.4% to 8.8%) contained a prescribing error. Three errors led to preventable ADEs and 62 (43%; 3% of all prescriptions) had potential for patient injury (potential ADEs); 1 was potentially life-threatening (2%) and 15 were serious (24%). Errors in frequency (n=77, 54%) and dose (n=26, 18%) were common. The rates of medication errors and potential ADEs were not significantly different at basic computerized prescribing sites (4.3% vs 11.0%, P=.31; 2.6% vs 4.0%, P=.16) compared to handwritten sites. Advanced checks (including dose and frequency checking) could have prevented 95% of potential ADEs. Conclusions Prescribing errors occurred in 7.6% of outpatient prescriptions and many could have harmed patients. Basic computerized prescribing systems may not be adequate to reduce errors. More advanced systems with dose and frequency checking are likely needed to prevent potentially harmful errors. PMID:16117752
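The headline error rate and its confidence interval can be reproduced from the counts reported above with a normal-approximation interval for a proportion, as sketched below.

```python
import math

errors, prescriptions = 143, 1879
p = errors / prescriptions

# Normal-approximation (Wald) 95% confidence interval for a proportion
se = math.sqrt(p * (1 - p) / prescriptions)
lower, upper = p - 1.96 * se, p + 1.96 * se
print(f"error rate {100 * p:.1f}% (95% CI {100 * lower:.1f}% to {100 * upper:.1f}%)")
# prints roughly 7.6% (6.4% to 8.8%), matching the reported interval
```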
Types of diagnostic errors in neurological emergencies in the emergency department.
Dubosh, Nicole M; Edlow, Jonathan A; Lefton, Micah; Pope, Jennifer V
2015-02-01
Neurological emergencies often pose diagnostic challenges for emergency physicians because these patients often present with atypical symptoms and standard imaging tests are imperfect. Misdiagnosis occurs due to a variety of errors. These can be classified as knowledge gaps, cognitive errors, and systems-based errors. The goal of this study was to describe these errors through review of quality assurance (QA) records. This was a retrospective pilot study of patients with neurological emergency diagnoses that were missed or delayed at one urban, tertiary academic emergency department. Cases meeting inclusion criteria were identified through review of QA records. Three emergency physicians independently reviewed each case and determined the type of error that led to the misdiagnosis. Proportions, confidence intervals, and a reliability coefficient were calculated. During the study period, 1168 cases were reviewed. Forty-two cases were found to include a neurological misdiagnosis and twenty-nine were determined to be the result of an error. The distribution of error types was as follows: knowledge gap 45.2% (95% CI 29.2, 62.2), cognitive error 29.0% (95% CI 15.9, 46.8), and systems-based error 25.8% (95% CI 13.5, 43.5). Cerebellar strokes were the most common type of stroke misdiagnosed, accounting for 27.3% of missed strokes. All three error types contributed to the misdiagnosis of neurological emergencies. Misdiagnosis of cerebellar lesions and erroneous radiology resident interpretations of neuroimaging were the most common mistakes. Understanding the types of errors may enable emergency physicians to develop possible solutions and avoid them in the future.
NASA Astrophysics Data System (ADS)
Gourdji, S. M.; Yadav, V.; Karion, A.; Mueller, K. L.; Conley, S.; Ryerson, T.; Nehrkorn, T.; Kort, E. A.
2018-04-01
Urban greenhouse gas (GHG) flux estimation with atmospheric measurements and modeling, i.e. the ‘top-down’ approach, can potentially support GHG emission reduction policies by assessing trends in surface fluxes and detecting anomalies from bottom-up inventories. Aircraft-collected GHG observations also have the potential to help quantify point-source emissions that may not be adequately sampled by fixed surface tower-based atmospheric observing systems. Here, we estimate CH4 emissions from a known point source, the Aliso Canyon natural gas leak in Los Angeles, CA from October 2015–February 2016, using atmospheric inverse models with airborne CH4 observations from twelve flights ≈4 km downwind of the leak and surface sensitivities from a mesoscale atmospheric transport model. This leak event has been well-quantified previously using various methods by the California Air Resources Board, thereby providing high confidence in the mass-balance leak rate estimates of (Conley et al 2016), used here for comparison to inversion results. Inversions with an optimal setup are shown to provide estimates of the leak magnitude, on average, within a third of the mass balance values, with remaining errors in estimated leak rates predominantly explained by modeled wind speed errors of up to 10 m s‑1, quantified by comparing airborne meteorological observations with modeled values along the flight track. An inversion setup using scaled observational wind speed errors in the model-data mismatch covariance matrix is shown to significantly reduce the influence of transport model errors on spatial patterns and estimated leak rates from the inversions. In sum, this study takes advantage of a natural tracer release experiment (i.e. the Aliso Canyon natural gas leak) to identify effective approaches for reducing the influence of transport model error on atmospheric inversions of point-source emissions, while suggesting future potential for integrating surface tower and aircraft atmospheric GHG observations in top-down urban emission monitoring systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caillet, V; Colvill, E; Royal North Shore Hospital, St Leonards, Sydney
2016-06-15
Purpose: Multi-leaf collimator (MLC) tracking is being clinically pioneered to continuously compensate for thoracic and abdominal motion during radiotherapy. The purpose of this work is to characterize the performance of two MLC tracking algorithms for cancer radiotherapy, based on a direct optimization and a piecewise leaf-fitting approach, respectively. Methods: To test the algorithms, both physical and in silico experiments were performed. Previously published high and low modulation VMAT plans for lung and prostate cancer cases were used along with eight patient-measured organ-specific trajectories. For both MLC tracking algorithms, the plans were run with their corresponding patient trajectories. The physical experiments were performed on a Varian Trilogy linac and a programmable phantom (HexaMotion platform). For each MLC tracking algorithm, plan and patient trajectory, the tracking accuracy was quantified as the difference in aperture area between ideal and fitted MLC. To compare algorithms, the average cumulative tracking error area for each experiment was calculated. The two-sample Kolmogorov-Smirnov (KS) test was used to evaluate the cumulative tracking errors between algorithms. Results: Comparison of tracking errors for the physical and in silico experiments showed minor differences between the two algorithms. The KS D-statistics for the physical experiments were below 0.05, denoting no significant differences between the two distribution patterns, and the average error areas (direct optimization/piecewise leaf-fitting) were comparable (66.64 cm2/65.65 cm2). For the in silico experiments, the KS D-statistics were below 0.05 and the average error areas were also equivalent (49.38 cm2/48.98 cm2). Conclusion: The comparison between the two leaf-fitting algorithms demonstrated no significant differences in tracking errors, neither in a clinically realistic environment nor in silico. The similarities in the two independent algorithms give confidence in the use of either algorithm for clinical implementation.
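A two-sample Kolmogorov-Smirnov comparison of the kind used here can be sketched with SciPy as below; the per-experiment error areas are simulated stand-ins, since only summary values are reported in the abstract.

```python
import numpy as np
from scipy import stats

# Hypothetical cumulative tracking-error areas (cm^2) for the two MLC tracking
# algorithms; values are illustrative only.
rng = np.random.default_rng(11)
direct_opt = rng.normal(66.6, 8.0, 64)
piecewise = rng.normal(65.7, 8.0, 64)

d_stat, p_value = stats.ks_2samp(direct_opt, piecewise)
print(f"KS D = {d_stat:.3f}, p = {p_value:.3f}")
# A small D statistic (and large p-value) indicates no detectable difference
# between the two algorithms' error distributions.
```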
Visual symptoms associated with refractive errors among Thangka artists of Kathmandu valley.
Dhungel, Deepa; Shrestha, Gauri Shankar
2017-12-21
Prolonged near work, especially among people with uncorrected refractive error, is considered a potential source of visual symptoms. The present study aims to determine the visual symptoms among Thangka artists and their association with refractive errors. In a descriptive cross-sectional study, 242 (46.1%) of 525 Thangka artists examined, aged between 16 and 39 years and comprising 112 participants with significant refractive errors and 130 emmetropic participants, were enrolled from six Thangka painting schools. Visual symptoms were assessed using a structured questionnaire consisting of nine items scored on consecutive scales from 0 to 6. The eye examination included detailed anterior and posterior segment examination, objective and subjective refraction, and assessment of heterophoria, vergence and accommodation. Symptoms were presented as percentages and medians. Variation in the distribution of participants and symptoms was analysed using the Kruskal-Wallis test, and correlations with the Pearson correlation coefficient. A significance level of 0.05 was applied with a 95% confidence interval. The majority of participants (65.1%) in the refractive error group (REG) were above the age of 30 years, with a male predominance (61.6%), whereas the majority of participants in the normal cohort group (NCG) were below 30 years of age (72.3%) and female (51.5%). Overall, visual symptoms were common among Thangka artists. However, blurred vision (p = 0.003) and dry eye (p = 0.004) were more frequent in the REG than the NCG. Females had slightly more symptoms than males. Most symptoms, such as sore/aching eyes (p = 0.003), dryness (p = 0.005) and blurred vision (p = 0.02), were significantly associated with astigmatism. Thangka artists present with a substantial proportion of refractive error and visual symptoms, especially among females. The most commonly reported symptoms were blurred vision, dry eye and watering of the eye. Visual symptoms were most strongly correlated with astigmatism.
Re-assessing accumulated oxygen deficit in middle-distance runners.
Bickham, D; Le Rossignol, P; Gibbons, C; Russell, A P
2002-12-01
The purpose of this study was to re-assess the accumulated oxygen deficit (AOD), incorporating recent methodological improvements, i.e., 4 min submaximal tests spread above and below the lactate threshold (LT). We investigated the influence of the VO2-speed regression on the precision of the estimated total energy demand and AOD, utilising different numbers of regression points and including measurement errors. Seven trained middle-distance runners (mean +/- SD age: 25.3 +/- 5.4 y, mass: 73.7 +/- 4.3 kg, VO2max: 64.4 +/- 6.1 mL x kg(-1) x min(-1)) completed a VO2max test, an LT test, 10 x 4 min exercise tests (above and below the LT) and high-intensity exhaustive tests. The VO2-speed regression was developed using 10 submaximal points and a forced y-intercept value. The average precision (measured as the width of the 95% confidence interval) for the estimated total energy demand using this regression was 7.8 mL O2 Eq x kg(-1) x min(-1). There was a two-fold decrease in the precision of the estimated total energy demand with the inclusion of measurement errors from the metabolic system. The mean AOD value was 43.3 mL O2 Eq x kg(-1) (lower and upper 95% CI 32.1 and 54.5 mL O2 Eq x kg(-1), respectively). Converting the 95% CI for estimated total energy demand to AOD, or including maximum possible measurement errors, amplified the error associated with the estimated total energy demand. No significant differences in AOD variables were found using 10, 4 or 2 regression points with a forced y-intercept. For practical purposes we recommend the use of 4 submaximal values with a forced y-intercept. Using 95% CIs and calculating error highlighted possible error in estimating AOD. Without accurate data collection, increased variability could decrease the accuracy of the AOD, as shown by the 95% CI of the AOD.
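The core AOD arithmetic (a VO2-speed regression with a forced y-intercept, extrapolation to the supramaximal test speed, and subtraction of the measured oxygen uptake) is sketched below with invented submaximal data; it illustrates the calculation only and does not reproduce the study's measurement-error analysis.

```python
import numpy as np

# Hypothetical submaximal data: running speed (km/h) and steady-state VO2
# (mL O2 Eq/kg/min), plus a forced y-intercept; illustrative values only.
speed = np.array([10, 11, 12, 13, 14, 15, 16, 17, 18, 19], dtype=float)
vo2 = 5.0 + 3.2 * speed + np.random.default_rng(5).normal(0, 1.0, 10)

# Least-squares slope of the VO2-speed regression with a forced y-intercept
y_int = 5.0
slope = np.sum((vo2 - y_int) * speed) / np.sum(speed ** 2)

# Estimated total energy demand at the supramaximal test speed
test_speed, test_minutes = 22.0, 2.0
demand = y_int + slope * test_speed            # mL O2 Eq/kg/min
measured_vo2 = 58.0                            # mean VO2 during the exhaustive test
aod = (demand - measured_vo2) * test_minutes   # mL O2 Eq/kg
print(f"estimated demand {demand:.1f} mL/kg/min, AOD {aod:.1f} mL O2 Eq/kg")
```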
Effect of correlated observation error on parameters, predictions, and uncertainty
Tiedeman, Claire; Green, Christopher T.
2013-01-01
Correlations among observation errors are typically omitted when calculating observation weights for model calibration by inverse methods. We explore the effects of omitting these correlations on estimates of parameters, predictions, and uncertainties. First, we develop a new analytical expression for the difference in parameter variance estimated with and without error correlations for a simple one-parameter two-observation inverse model. Results indicate that omitting error correlations from both the weight matrix and the variance calculation can either increase or decrease the parameter variance, depending on the values of error correlation (ρ) and the ratio of dimensionless scaled sensitivities (rdss). For small ρ, the difference in variance is always small, but for large ρ, the difference varies widely depending on the sign and magnitude of rdss. Next, we consider a groundwater reactive transport model of denitrification with four parameters and correlated geochemical observation errors that are computed by an error-propagation approach that is new for hydrogeologic studies. We compare parameter estimates, predictions, and uncertainties obtained with and without the error correlations. Omitting the correlations modestly to substantially changes parameter estimates, and causes both increases and decreases of parameter variances, consistent with the analytical expression. Differences in predictions for the models calibrated with and without error correlations can be greater than parameter differences when both are considered relative to their respective confidence intervals. These results indicate that including observation error correlations in weighting for nonlinear regression can have important effects on parameter estimates, predictions, and their respective uncertainties.
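A minimal generalized least squares sketch, assuming a small linear model, shows how parameter estimates and variances differ when the full observation-error covariance is used versus a diagonal approximation; the design matrix, covariance, and data below are illustrative and unrelated to the reactive transport model.

```python
import numpy as np

rng = np.random.default_rng(9)
X = np.column_stack([np.ones(6), rng.normal(size=6)])     # design matrix
C = 0.04 * (0.6 * np.ones((6, 6)) + 0.4 * np.eye(6))      # correlated error covariance
y = X @ np.array([1.0, 2.0]) + rng.multivariate_normal(np.zeros(6), C)

def gls(X, y, C):
    """Weighted least squares with weight matrix inv(C)."""
    W = np.linalg.inv(C)
    cov = np.linalg.inv(X.T @ W @ X)   # parameter covariance under the assumed C
    beta = cov @ X.T @ W @ y
    return beta, cov

beta_full, cov_full = gls(X, y, C)                    # correlations included
beta_diag, cov_diag = gls(X, y, np.diag(np.diag(C)))  # correlations omitted
print(np.sqrt(np.diag(cov_full)), np.sqrt(np.diag(cov_diag)))
```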
Teamwork and clinical error reporting among nurses in Korean hospitals.
Hwang, Jee-In; Ahn, Jeonghoon
2015-03-01
To examine levels of teamwork and its relationships with clinical error reporting among Korean hospital nurses. The study employed a cross-sectional survey design. We distributed a questionnaire to 674 nurses in two teaching hospitals in Korea. The questionnaire included items on teamwork and the reporting of clinical errors. We measured teamwork using the Teamwork Perceptions Questionnaire, which has five subscales including team structure, leadership, situation monitoring, mutual support, and communication. Using logistic regression analysis, we determined the relationships between teamwork and error reporting. The response rate was 85.5%. The mean score of teamwork was 3.5 out of 5. At the subscale level, mutual support was rated highest, while leadership was rated lowest. Of the participating nurses, 522 responded that they had experienced at least one clinical error in the last 6 months. Among those, only 53.0% responded that they always or usually reported clinical errors to their managers and/or the patient safety department. Teamwork was significantly associated with better error reporting. Specifically, nurses with a higher team communication score were more likely to report clinical errors to their managers and the patient safety department (odds ratio = 1.82, 95% confidence intervals [1.05, 3.14]). Teamwork was rated as moderate and was positively associated with nurses' error reporting performance. Hospital executives and nurse managers should make substantial efforts to enhance teamwork, which will contribute to encouraging the reporting of errors and improving patient safety. Copyright © 2015. Published by Elsevier B.V.
Gilmartin-Thomas, Julia Fiona-Maree; Smith, Felicity; Wolfe, Rory; Jani, Yogini
2017-07-01
No published study has been specifically designed to compare medication administration errors between original medication packaging and multi-compartment compliance aids in care homes, using direct observation. Compare the effect of original medication packaging and multi-compartment compliance aids on medication administration accuracy. Prospective observational study. Ten Greater London care homes. Nurses and carers administering medications. Between October 2014 and June 2015, a pharmacist researcher directly observed solid, orally administered medications in tablet or capsule form at ten purposively sampled care homes (five only used original medication packaging and five used both multi-compartment compliance aids and original medication packaging). The medication administration error rate was calculated as the number of observed doses administered (or omitted) in error according to medication administration records, compared to the opportunities for error (total number of observed doses plus omitted doses). Over 108.4 h, 41 different staff (35 nurses, 6 carers) were observed to administer medications to 823 residents during 90 medication administration rounds. A total of 2452 medication doses were observed (1385 from original medication packaging, 1067 from multi-compartment compliance aids). One hundred and seventy-eight medication administration errors were identified from 2493 opportunities for error (7.1% overall medication administration error rate). A greater medication administration error rate was seen for original medication packaging than multi-compartment compliance aids (9.3% and 3.1% respectively, risk ratio (RR)=3.9, 95% confidence interval (CI) 2.4 to 6.1, p<0.001). Similar differences existed when comparing medication administration error rates between original medication packaging (from original medication packaging-only care homes) and multi-compartment compliance aids (RR=2.3, 95%CI 1.1 to 4.9, p=0.03), and between original medication packaging and multi-compartment compliance aids within care homes that used a combination of both medication administration systems (RR=4.3, 95%CI 2.7 to 6.8, p<0.001). A significant difference in error rate was not observed between use of a single or combination medication administration system (p=0.44). The significant difference in, and high overall, medication administration error rate between original medication packaging and multi-compartment compliance aids supports the use of the latter in care homes, as well as local investigation of tablet and capsule impact on medication administration errors and staff training to prevent errors occurring. As a significant difference in error rate was not observed between use of a single or combination medication administration system, common practice of using both multi-compartment compliance aids (for most medications) and original packaging (for medications with stability issues) is supported. Copyright © 2017 Elsevier Ltd. All rights reserved.
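The risk ratios quoted above follow the standard log-scale confidence-interval construction for a ratio of two proportions. A short hedged example with round hypothetical counts (not the study's actual error and opportunity tallies, so the published RR of 3.9 is not reproduced exactly):

```python
# Risk ratio with a 95% CI on the log scale, using illustrative counts only.
import math

a, n1 = 93, 1000    # errors / opportunities, original packaging (hypothetical)
b, n2 = 31, 1000    # errors / opportunities, compliance aids (hypothetical)

rr = (a / n1) / (b / n2)
se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```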
Driving Errors in Parkinson’s Disease: Moving Closer to Predicting On-Road Outcomes
Brumback, Babette; Monahan, Miriam; Malaty, Irene I.; Rodriguez, Ramon L.; Okun, Michael S.; McFarland, Nikolaus R.
2014-01-01
Age-related medical conditions such as Parkinson’s disease (PD) compromise driver fitness. Results from studies are unclear on the specific driving errors that underlie passing or failing an on-road assessment. In this study, we determined the between-group differences and quantified the on-road driving errors that predicted pass or fail on-road outcomes in 101 drivers with PD (mean age = 69.38 ± 7.43) and 138 healthy control (HC) drivers (mean age = 71.76 ± 5.08). Participants with PD had minor differences in demographics and driving habits and history but made more and different driving errors than HC participants. Drivers with PD failed the on-road test to a greater extent than HC drivers (41% vs. 9%), χ2(1) = 35.54, HC N = 138, PD N = 99, p < .001. The driving errors predicting on-road pass or fail outcomes (95% confidence interval, Nagelkerke R2 = .771) were made in visual scanning, signaling, vehicle positioning, speeding (mainly underspeeding, t(61) = 7.004, p < .001), and total errors. Although it is difficult to predict on-road outcomes, this study provides a foundation for doing so. PMID:24367958
Confidence intervals for a difference between lognormal means in cluster randomization trials.
Poirier, Julia; Zou, G Y; Koval, John
2017-04-01
Cluster randomization trials, in which intact social units are randomized to different interventions, have become popular in the last 25 years. Outcomes from these trials in many cases are positively skewed, following approximately lognormal distributions. When inference is focused on the difference between treatment arm arithmetic means, existent confidence interval procedures either make restricting assumptions or are complex to implement. We approach this problem by assuming log-transformed outcomes from each treatment arm follow a one-way random effects model. The treatment arm means are functions of multiple parameters for which separate confidence intervals are readily available, suggesting that the method of variance estimates recovery may be applied to obtain closed-form confidence intervals. A simulation study showed that this simple approach performs well in small sample sizes in terms of empirical coverage, relatively balanced tail errors, and interval widths as compared to existing methods. The methods are illustrated using data arising from a cluster randomization trial investigating a critical pathway for the treatment of community acquired pneumonia.
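The method of variance estimates recovery (MOVER) referenced here combines separate confidence intervals for the components of each arm mean into a closed-form interval. The sketch below illustrates the idea for independent lognormal data, ignoring the cluster random effect that the paper's one-way random effects model handles; data, sample sizes, and parameters are all hypothetical.

```python
# MOVER sketch for the difference of two lognormal arm means, i.i.d. case only.
import numpy as np
from scipy import stats

def lognormal_mean_ci(x, alpha=0.05):
    """CI for E[X] = exp(mu + sigma^2/2) by combining CIs for mu and sigma^2/2."""
    y = np.log(x)
    n, ybar, s2 = len(y), y.mean(), y.var(ddof=1)
    t = stats.t.ppf(1 - alpha / 2, n - 1)
    l_mu, u_mu = ybar - t * np.sqrt(s2 / n), ybar + t * np.sqrt(s2 / n)
    l_v = (n - 1) * s2 / stats.chi2.ppf(1 - alpha / 2, n - 1) / 2
    u_v = (n - 1) * s2 / stats.chi2.ppf(alpha / 2, n - 1) / 2
    eta, v = ybar + s2 / 2, s2 / 2
    L = eta - np.sqrt((ybar - l_mu)**2 + (v - l_v)**2)
    U = eta + np.sqrt((u_mu - ybar)**2 + (u_v - v)**2)
    return np.exp(eta), np.exp(L), np.exp(U)

def mover_difference(est1, l1, u1, est2, l2, u2):
    d = est1 - est2
    return (d - np.sqrt((est1 - l1)**2 + (u2 - est2)**2),
            d + np.sqrt((u1 - est1)**2 + (est2 - l2)**2))

rng = np.random.default_rng(1)
arm1 = rng.lognormal(mean=1.0, sigma=0.8, size=40)
arm2 = rng.lognormal(mean=0.7, sigma=0.8, size=40)
print("difference CI:", mover_difference(*lognormal_mean_ci(arm1), *lognormal_mean_ci(arm2)))
```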
Differential sea-state bias: A case study using TOPEX/POSEIDON data
NASA Technical Reports Server (NTRS)
Stewart, Robert H.; Devalla, B.
1994-01-01
We used selected data from the NASA altimeter TOPEX/POSEIDON to calculate differences in range measured by the C and Ku-band altimeters when the satellite overflew 5 to 15 m waves late at night. The range difference is due to free electrons in the ionosphere and to errors in sea-state bias. For the selected data the ionospheric influence on Ku range is less than 2 cm. Any difference in range over short horizontal distances is due only to a small along-track variability of the ionosphere and to errors in calculating the differential sea-state bias. We find that there is a barely detectable error in the bias in the geophysical data records. The wave-induced error in the ionospheric correction is less than 0.2% of significant wave height. The equivalent error in differential range is less than 1% of wave height. Errors in the differential sea-state bias calculations appear to be small even for extreme wave heights that greatly exceed the conditions on which the bias is based. The results also improved our confidence in the sea-state bias correction used for calculating the geophysical data records. Any error in the correction must influence Ku and C-band ranges almost equally.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morley, Steven
The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias), and for calculating accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts (e.g. forecast error, scaled error) of each metric are also provided. To compare models the package provides: generic skill score; percent better. Robust measures of scale including median absolute deviation, robust standard deviation, robust coefficient of variation and the Sn estimator are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ratio, threat score, equitable threat score, bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.
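Two of the listed metrics are shown below as illustrative re-implementations; these are not the PyForecastTools API and the function names and inputs are invented for the example.

```python
# Illustrative re-implementations of two metrics named above: median symmetric accuracy
# for continuous forecasts and an Agresti-Coull interval for a binomial proportion.
import numpy as np

def median_symmetric_accuracy(obs, pred):
    """100 * (exp(median(|ln(pred/obs)|)) - 1), a robust percentage-error measure."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return 100.0 * (np.exp(np.median(np.abs(np.log(pred / obs)))) - 1.0)

def agresti_coull(successes, n, z=1.96):
    """Approximate CI for a proportion, e.g. probability of detection from a 2x2 table."""
    n_adj = n + z**2
    p_adj = (successes + z**2 / 2) / n_adj
    half = z * np.sqrt(p_adj * (1 - p_adj) / n_adj)
    return p_adj - half, p_adj + half

obs = [1.0, 2.0, 4.0, 8.0]
pred = [1.2, 1.8, 5.0, 6.5]
print(median_symmetric_accuracy(obs, pred))
print(agresti_coull(successes=42, n=60))   # e.g. hits out of 60 observed events
```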
Terry, Leann; Kelley, Ken
2012-11-01
Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
TU-AB-202-03: Prediction of PET Transfer Uncertainty by DIR Error Estimating Software, AUTODIRECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, H; Chen, J; Phillips, J
2016-06-15
Purpose: Deformable image registration (DIR) is a powerful tool, but DIR errors can adversely affect its clinical applications. To estimate voxel-specific DIR uncertainty, a software tool, called AUTODIRECT (automated DIR evaluation of confidence tool), has been developed and validated. This work tests the ability of this software to predict uncertainty for the transfer of standard uptake values (SUV) from positron-emission tomography (PET) with DIR. Methods: Virtual phantoms are used for this study. Each phantom has a planning computed tomography (CT) image and a diagnostic PET-CT image set. A deformation was digitally applied to the diagnostic CT to create the planning CT image and establish a known deformation between the images. One lung and three rectum patient datasets were employed to create the virtual phantoms. Both of these sites have difficult deformation scenarios associated with them, which can affect DIR accuracy (lung tissue sliding and changes in rectal filling). The virtual phantoms were created to simulate these scenarios by introducing discontinuities in the deformation field at the lung rectum border. The DIR algorithm from Plastimatch software was applied to these phantoms. The SUV mapping errors from the DIR were then compared to those predicted by AUTODIRECT. Results: The SUV error distributions closely followed the AUTODIRECT predicted error distribution for the 4 test cases. The minimum and maximum PET SUVs were produced from AUTODIRECT at 95% confidence interval before applying gradient-based SUV segmentation for each of these volumes. Notably, 93.5% of the target volume warped by the true deformation was included within the AUTODIRECT-predicted maximum SUV volume after the segmentation, while 78.9% of the target volume was within the target volume warped by Plastimatch. Conclusion: The AUTODIRECT framework is able to predict PET transfer uncertainty caused by DIR, which enables an understanding of the associated target volume uncertainty.
Thomas, D C; Bowman, J D; Jiang, L; Jiang, F; Peters, J M
1999-10-01
Case-control data on childhood leukemia in Los Angeles County were reanalyzed with residential magnetic fields predicted from the wiring configurations of nearby transmission and distribution lines. As described in a companion paper, the 24-h means of the magnetic field's magnitude in subjects' homes were predicted by a physically based regression model that had been fitted to 24-h measurements and wiring data. In addition, magnetic field exposures were adjusted for the most likely form of exposure assessment errors: classic errors for the 24-h measurements and Berkson errors for the predictions from wire configurations. Although the measured fields had no association with childhood leukemia (P for trend=.88), the risks were significant for predicted magnetic fields above 1.25 mG (odds ratio=2.00, 95% confidence interval=1.03-3.89), and a significant dose-response was seen (P for trend=.02). When exposures were determined by a combination of predictions and measurements that corrects for errors, the odds ratio (odd ratio=2.19, 95% confidence interval=1.12-4.31) and the trend (p =.007) showed somewhat greater significance. These findings support the hypothesis that magnetic fields from electrical lines are causally related to childhood leukemia but that this association has been inconsistent among epidemiologic studies due to different types of exposure assessment error. In these data, the leukemia risks from a child's residential magnetic field exposure appears to be better assessed by wire configurations than by 24-h area measurements. However, the predicted fields only partially account for the effect of the Wertheimer-Leeper wire code in a multivariate analysis and do not completely explain why these wire codes have been so often associated with childhood leukemia. The most plausible explanation for our findings is that the causal factor is another magnetic field exposure metric correlated to both wire code and the field's time-averaged magnitude. Copyright 1999 Wiley-Liss, Inc.
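The distinction drawn above between classical error (in the 24-h measurements) and Berkson error (in the wire-configuration predictions) matters because the two distort exposure-response estimates differently. A small simulation, with arbitrary effect size and noise levels rather than the study's data, illustrates the contrast:

```python
# Hedged simulation: classical exposure error attenuates the regression slope,
# Berkson error leaves it approximately unbiased (but with more residual noise).
import numpy as np

rng = np.random.default_rng(2)
n, beta = 20000, 0.5

# Classical error: the measurement scatters around the true exposure
x_true = rng.normal(0.0, 1.0, n)
y = beta * x_true + rng.normal(0.0, 1.0, n)
w = x_true + rng.normal(0.0, 1.0, n)            # like an imperfect 24-h measurement

# Berkson error: the true exposure scatters around the assigned (predicted) value
z = rng.normal(0.0, 1.0, n)                     # like a wire-configuration prediction
x_b = z + rng.normal(0.0, 1.0, n)
y_b = beta * x_b + rng.normal(0.0, 1.0, n)

slope = lambda a, b: np.polyfit(a, b, 1)[0]
print("classical-error slope:", slope(w, y))    # attenuated toward 0 (about beta/2 here)
print("Berkson-error slope  :", slope(z, y_b))  # approximately beta
```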
Bayesian models for comparative analysis integrating phylogenetic uncertainty.
de Villemereuil, Pierre; Wells, Jessie A; Edwards, Robert D; Blomberg, Simon P
2012-06-28
Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for phylogenetic comparative analyses, particularly for modelling in the face of phylogenetic uncertainty and accounting for measurement error or individual variation in explanatory variables. Code for all models is provided in the BUGS model description language.
Bayesian models for comparative analysis integrating phylogenetic uncertainty
2012-01-01
Background Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for phylogenetic comparative analyses, particularly for modelling in the face of phylogenetic uncertainty and accounting for measurement error or individual variation in explanatory variables. Code for all models is provided in the BUGS model description language. PMID:22741602
Using beta binomials to estimate classification uncertainty for ensemble models.
Clark, Robert D; Liang, Wenkel; Lee, Adam C; Lawless, Michael S; Fraczkiewicz, Robert; Waldman, Marvin
2014-01-01
Quantitative structure-activity (QSAR) models have enormous potential for reducing drug discovery and development costs as well as the need for animal testing. Great strides have been made in estimating their overall reliability, but to fully realize that potential, researchers and regulators need to know how confident they can be in individual predictions. Submodels in an ensemble model which have been trained on different subsets of a shared training pool represent multiple samples of the model space, and the degree of agreement among them contains information on the reliability of ensemble predictions. For artificial neural network ensembles (ANNEs) using two different methods for determining ensemble classification - one using vote tallies and the other averaging individual network outputs - we have found that the distribution of predictions across positive vote tallies can be reasonably well-modeled as a beta binomial distribution, as can the distribution of errors. Together, these two distributions can be used to estimate the probability that a given predictive classification will be in error. Large data sets comprised of logP, Ames mutagenicity, and CYP2D6 inhibition data are used to illustrate and validate the method. The distributions of predictions and errors for the training pool accurately predicted the distribution of predictions and errors for large external validation sets, even when the number of positive and negative examples in the training pool were not balanced. Moreover, the likelihood of a given compound being prospectively misclassified as a function of the degree of consensus between networks in the ensemble could in most cases be estimated accurately from the fitted beta binomial distributions for the training pool. Confidence in an individual predictive classification by an ensemble model can be accurately assessed by examining the distributions of predictions and errors as a function of the degree of agreement among the constituent submodels. Further, ensemble uncertainty estimation can often be improved by adjusting the voting or classification threshold based on the parameters of the error distribution. Finally, the profiles for models whose predictive uncertainty estimates are not reliable provide clues to that effect without the need for comparison to an external test set.
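A compact sketch of the modelling idea follows: a beta-binomial is fitted to the positive-vote tallies of an ensemble, a second beta-binomial (assumed here rather than fitted) describes tallies of misclassified training compounds, and the two are combined to give a per-tally error probability. The ensemble size, data, prior error rate, and error-distribution parameters are all hypothetical, and the Bayes-rule combination is one plausible reading of the abstract rather than the authors' exact procedure.

```python
# Beta-binomial vote-tally model with an assumed error-tally distribution.
import numpy as np
from scipy import stats, optimize

m = 33                                                   # submodels in the ensemble (assumed)
rng = np.random.default_rng(3)
votes = rng.binomial(m, rng.beta(2.0, 5.0, size=500))    # simulated positive-vote tallies

def neg_log_lik(params):
    a, b = np.exp(params)                                # keep shape parameters positive
    return -stats.betabinom.logpmf(votes, m, a, b).sum()

a_hat, b_hat = np.exp(optimize.minimize(neg_log_lik, x0=[0.0, 0.0]).x)

p_error_prior = 0.15                                     # overall training-pool error rate (assumed)
a_err, b_err = 5.0, 2.0                                  # error-tally beta-binomial parameters (assumed)
k = np.arange(m + 1)
p_votes = stats.betabinom.pmf(k, m, a_hat, b_hat)
p_votes_given_err = stats.betabinom.pmf(k, m, a_err, b_err)
p_err_given_votes = p_error_prior * p_votes_given_err / p_votes   # Bayes' rule per tally
print(np.round(p_err_given_votes, 3))
```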
Education Data Quality in the Third World: A Five Country Study.
ERIC Educational Resources Information Center
Chapman, David W.
1991-01-01
Reports findings from a study of the confidence expressed by ministry-level decision makers in five developing countries (i.e., Somalia, Botswana, Liberia, Yemen, and Nepal) about the quality of the national-level education data available to them and reasons for the perceived 16-40 percent error rate. (DMM)
Quality assurance, training, and certification in ozone air pollution studies
Susan Schilling; Paul Miller; Brent Takemoto
1996-01-01
Uniform, or standard, measurement methods of data are critical to projects monitoring change to forest systems. Standardized methods, with known or estimable errors, contribute greatly to the confidence associated with decisions on the basis of field data collections (Zedaker and Nicholas 1990). Quality assurance (QA) for the measurement process includes operations and...
Air pollution health studies often use outdoor concentrations as exposure surrogates. Failure to account for variability of residential infiltration of outdoor pollutants can induce exposure errors and lead to bias and incorrect confidence intervals in health effect estimates. Th...
This commentary is the second of a series outlining one specific concept in interpreting biomarkers data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around...
Large Sample Confidence Intervals for Item Response Theory Reliability Coefficients
ERIC Educational Resources Information Center
Andersson, Björn; Xin, Tao
2018-01-01
In applications of item response theory (IRT), an estimate of the reliability of the ability estimates or sum scores is often reported. However, analytical expressions for the standard errors of the estimators of the reliability coefficients are not available in the literature and therefore the variability associated with the estimated reliability…
30 CFR 817.116 - Revegetation: Standards for success.
Code of Federal Regulations, 2011 CFR
2011-07-01
... confidence interval (i.e., a one-sided test with a 0.10 alpha error). (b) Standards for success shall be... tree and shrub stocking and vegetative ground cover. Such parameters are described as follows: (i... either a programwide or a permit-specific basis. (ii) Trees and shrubs that will be used in determining...
30 CFR 816.116 - Revegetation: Standards for success.
Code of Federal Regulations, 2011 CFR
2011-07-01
... confidence interval (i.e., one-sided test with a 0.10 alpha error). (b) Standards for success shall be... tree and shrub stocking and vegetative ground cover. Such parameters are described as follows: (i... either a programwide or a permit-specific basis. (ii) Trees and shrubs that will be used in determining...
Medication Errors in Vietnamese Hospitals: Prevalence, Potential Outcome and Associated Factors
Nguyen, Huong-Thao; Nguyen, Tuan-Dung; van den Heuvel, Edwin R.; Haaijer-Ruskamp, Flora M.; Taxis, Katja
2015-01-01
Background Evidence from developed countries showed that medication errors are common and harmful. Little is known about medication errors in resource-restricted settings, including Vietnam. Objectives To determine the prevalence and potential clinical outcome of medication preparation and administration errors, and to identify factors associated with errors. Methods This was a prospective study conducted on six wards in two urban public hospitals in Vietnam. Data of preparation and administration errors of oral and intravenous medications was collected by direct observation, 12 hours per day on 7 consecutive days, on each ward. Multivariable logistic regression was applied to identify factors contributing to errors. Results In total, 2060 out of 5271 doses had at least one error. The error rate was 39.1% (95% confidence interval 37.8%- 40.4%). Experts judged potential clinical outcomes as minor, moderate, and severe in 72 (1.4%), 1806 (34.2%) and 182 (3.5%) doses. Factors associated with errors were drug characteristics (administration route, complexity of preparation, drug class; all p values < 0.001), and administration time (drug round, p = 0.023; day of the week, p = 0.024). Several interactions between these factors were also significant. Nurse experience was not significant. Higher error rates were observed for intravenous medications involving complex preparation procedures and for anti-infective drugs. Slightly lower medication error rates were observed during afternoon rounds compared to other rounds. Conclusions Potentially clinically relevant errors occurred in more than a third of all medications in this large study conducted in a resource-restricted setting. Educational interventions, focusing on intravenous medications with complex preparation procedure, particularly antibiotics, are likely to improve patient safety. PMID:26383873
Smartphone-Based Real-Time Indoor Location Tracking With 1-m Precision.
Liang, Po-Chou; Krause, Paul
2016-05-01
Monitoring the activities of daily living of the elderly at home is widely recognized as useful for the detection of new or deteriorating health conditions. However, the accuracy of existing indoor location tracking systems remains unsatisfactory. The aim of this study was, therefore, to develop a localization system that can identify a patient's real-time location in a home environment with maximum estimation error of 2 m at a 95% confidence level. A proof-of-concept system based on a sensor fusion approach was built with considerations for lower cost, reduced intrusiveness, and higher mobility, deployability, and portability. This involved the development of both a step detector using the accelerometer and compass of an iPhone 5, and a radio-based localization subsystem using a Kalman filter and received signal strength indication to tackle issues that had been identified as limiting accuracy. The results of our experiments were promising with an average estimation error of 0.47 m. We are confident that with the proposed future work, our design can be adapted to a home-like environment with a more robust localization solution.
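The radio-based subsystem described above combines a process model with noisy received-signal-strength estimates through a Kalman filter. A scalar version, with illustrative noise variances rather than the paper's tuning, shows the update structure:

```python
# Minimal scalar Kalman filter smoothing noisy RSSI-derived distance estimates
# against a roughly constant-position process model. Values are illustrative only.
import numpy as np

def kalman_1d(measurements, q=0.05, r=4.0, x0=0.0, p0=10.0):
    """q: process noise variance; r: measurement noise variance of the RSSI distance."""
    x, p, out = x0, p0, []
    for z in measurements:
        p = p + q                    # predict: position assumed roughly constant
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update with the noisy distance estimate
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(4)
noisy = 3.0 + rng.normal(0, 2.0, 50)   # metre-level RSSI noise around a 3 m true distance
print(kalman_1d(noisy)[-5:])
```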
Data assimilation method based on the constraints of confidence region
NASA Astrophysics Data System (ADS)
Li, Yong; Li, Siming; Sheng, Yao; Wang, Luheng
2018-03-01
The ensemble Kalman filter (EnKF) is a distinguished data assimilation method that is widely used and studied in various fields including methodology and oceanography. However, due to the limited sample size or imprecise dynamics model, it is usually easy for the forecast error variance to be underestimated, which further leads to the phenomenon of filter divergence. Additionally, the assimilation results of the initial stage are poor if the initial condition settings differ greatly from the true initial state. To address these problems, the variance inflation procedure is usually adopted. In this paper, we propose a new method based on the constraints of a confidence region constructed by the observations, called EnCR, to estimate the inflation parameter of the forecast error variance of the EnKF method. In the new method, the state estimate is more robust to both the inaccurate forecast models and initial condition settings. The new method is compared with other adaptive data assimilation methods in the Lorenz-63 and Lorenz-96 models under various model parameter settings. The simulation results show that the new method performs better than the competing methods.
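For context, the sketch below shows a bare-bones stochastic EnKF analysis step with a fixed multiplicative inflation factor applied to the forecast perturbations. The paper's EnCR method chooses that factor adaptively from a confidence region built from the observations; here it is simply a constant, and all sizes and values are toy choices.

```python
# Stochastic EnKF analysis step with constant multiplicative inflation (illustrative only).
import numpy as np

def enkf_analysis(ens, y_obs, H, R, inflation=1.1, rng=np.random.default_rng(5)):
    """ens: (n_state, n_members) forecast ensemble; H: obs operator; R: obs error covariance."""
    mean = ens.mean(axis=1, keepdims=True)
    ens = mean + inflation * (ens - mean)            # inflate forecast spread
    X = ens - ens.mean(axis=1, keepdims=True)
    Pf = X @ X.T / (ens.shape[1] - 1)                # sample forecast error covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    y_pert = y_obs[:, None] + rng.multivariate_normal(np.zeros(len(y_obs)), R, ens.shape[1]).T
    return ens + K @ (y_pert - H @ ens)              # perturbed-observation update

ens = np.random.default_rng(6).normal(0.0, 1.0, (3, 20))   # toy 3-variable, 20-member ensemble
H = np.eye(3)[:2]                                           # observe the first two variables
analysis = enkf_analysis(ens, y_obs=np.array([0.5, -0.3]), H=H, R=0.1 * np.eye(2))
print(analysis.mean(axis=1))
```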
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelbe, David; Oak Ridge National Lab.; van Aardt, Jan
Terrestrial laser scanning has demonstrated increasing potential for rapid comprehensive measurement of forest structure, especially when multiple scans are spatially registered in order to reduce the limitations of occlusion. Although marker-based registration techniques (based on retro-reflective spherical targets) are commonly used in practice, a blind marker-free approach is preferable, insofar as it supports rapid operational data acquisition. To support these efforts, we extend the pairwise registration approach of our earlier work, and develop a graph-theoretical framework to perform blind marker-free global registration of multiple point cloud data sets. Pairwise pose estimates are weighted based on their estimated error, in order to overcome pose conflict while exploiting redundant information and improving precision. The proposed approach was tested for eight diverse New England forest sites, with 25 scans collected at each site. Quantitative assessment was provided via a novel embedded confidence metric, with a mean estimated root-mean-square error of 7.2 cm and 89% of scans connected to the reference node. Lastly, this paper assesses the validity of the embedded multiview registration confidence metric and evaluates the performance of the proposed registration algorithm.
Laboratory evaluation of the pointing stability of the ASPS Vernier System
NASA Technical Reports Server (NTRS)
1980-01-01
The annular suspension and pointing system (ASPS) is an end-mount experiment pointing system designed for use in the space shuttle. The results of the ASPS Vernier System (AVS) pointing stability tests conducted in a laboratory environment are documented. A simulated zero-G suspension was used to support the test payload in the laboratory. The AVS and the suspension were modelled and incorporated into a simulation of the laboratory test. Error sources were identified and pointing stability sensitivities were determined via simulation. Statistical predictions of laboratory test performance were derived and compared to actual laboratory test results. The predicted mean pointing stability during simulated shuttle disturbances was 1.22 arc seconds; the actual mean laboratory test pointing stability was 1.36 arc seconds. The successful prediction of laboratory test results provides increased confidence in the analytical understanding of the AVS magnetic bearing technology and allows confident prediction of in-flight performance. Computer simulations of ASPS, operating in the shuttle disturbance environment, predict in-flight pointing stability errors less than 0.01 arc seconds.
NASA Technical Reports Server (NTRS)
Ackleson, S. G.; Klemas, V.
1987-01-01
Landsat MSS and TM imagery, obtained simultaneously over Guinea Marsh, VA, was analyzed and compared for its ability to detect submerged aquatic vegetation (SAV). An unsupervised clustering algorithm was applied to each image, where the input classification parameters are defined as functions of apparent sensor noise. Class confidence and accuracy were computed for all water areas by comparing the classified images, pixel-by-pixel, to rasterized SAV distributions derived from color aerial photography. To illustrate the effect of water depth on classification error, areas of depth greater than 1.9 m were masked, and class confidence and accuracy were recalculated. A single-scattering radiative-transfer model is used to illustrate how percent canopy cover and water depth affect the volume reflectance from a water column containing SAV. For a submerged canopy that is morphologically and optically similar to Zostera marina inhabiting Lower Chesapeake Bay, dense canopies may be isolated by masking optically deep water. For less dense canopies, the effect of increasing water depth is to increase the apparent percent crown cover, which may result in classification error.
Ranking network of a captive rhesus macaque society: a sophisticated corporative kingdom.
Fushing, Hsieh; McAssey, Michael P; Beisner, Brianne; McCowan, Brenda
2011-03-15
We develop a three-step computing approach to explore a hierarchical ranking network for a society of captive rhesus macaques. The computed network is sufficiently informative to address the question: Is the ranking network for a rhesus macaque society more like a kingdom or a corporation? Our computations are based on a three-step approach. These steps are devised to deal with the tremendous challenges stemming from the transitivity of dominance as a necessary constraint on the ranking relations among all individual macaques, and the very high sampling heterogeneity in the behavioral conflict data. The first step simultaneously infers the ranking potentials among all network members, which requires accommodation of heterogeneous measurement error inherent in behavioral data. Our second step estimates the social rank for all individuals by minimizing the network-wide errors in the ranking potentials. The third step provides a way to compute confidence bounds for selected empirical features in the social ranking. We apply this approach to two sets of conflict data pertaining to two captive societies of adult rhesus macaques. The resultant ranking network for each society is found to be a sophisticated mixture of both a kingdom and a corporation. Also, for validation purposes, we reanalyze conflict data from twenty longhorn sheep and demonstrate that our three-step approach is capable of correctly computing a ranking network by eliminating all ranking error.
Error detection and reduction in blood banking.
Motschman, T L; Moore, S B
1996-12-01
Error management plays a major role in facility process improvement efforts. By detecting and reducing errors, quality and, therefore, patient care improve. It begins with a strong organizational foundation of management attitude with clear, consistent employee direction and appropriate physical facilities. Clearly defined critical processes, critical activities, and SOPs act as the framework for operations as well as active quality monitoring. To assure that personnel can detect and report errors, they must be trained in both operational duties and error management practices. Use of simulated/intentional errors and incorporation of error detection into competency assessment keeps employees practiced, confident, and diminishes fear of the unknown. Personnel can clearly see that errors are indeed used as opportunities for process improvement and not for punishment. The facility must have a clearly defined and consistently used definition for reportable errors. Reportable errors should include those errors with potentially harmful outcomes as well as those errors that are "upstream," and thus further away from the outcome. A well-written error report consists of who, what, when, where, why/how, and follow-up to the error. Before correction can occur, an investigation to determine the underlying cause of the error should be undertaken. Obviously, the best corrective action is prevention. Correction can occur at five different levels; however, only three of these levels are directed at prevention. Prevention requires a method to collect and analyze data concerning errors. In the authors' facility a functional error classification method and a quality system-based classification have been useful. An active method to search for problems uncovers them further upstream, before they can have disastrous outcomes. In the continual quest for improving processes, an error management program is itself a process that needs improvement, and we must strive to always close the circle of quality assurance. Ultimately, the goal of better patient care will be the reward.
Yingyong, Penpimol
2010-11-01
Refractive error is one of the leading causes of visual impairment in children. An analysis of risk factors for refractive error is required to reduce and prevent this common eye disease. To identify the risk factors associated with refractive errors in primary school children (6-12 years old) in Nakhon Pathom province. A population-based cross-sectional analytic study was conducted between October 2008 and September 2009 in Nakhon Pathom. Refractive error, parental refractive status, and hours per week of near activities (studying, reading books, watching television, playing with video games, or working on the computer) were assessed in 377 children who participated in this study. The most common type of refractive error in primary school children was myopia. Myopic children were more likely to have parents with myopia. Children with myopia spent more time at near activities. The multivariate odds ratio (95% confidence interval) for two myopic parents was 6.37 (2.26-17.78) and for each diopter-hour per week of near work was 1.019 (1.005-1.033). Multivariate logistic regression models show no confounding effects between parental myopia and near work, suggesting that each factor has an independent association with myopia. Statistical analysis by logistic regression revealed that family history of refractive error and hours of near-work were significantly associated with refractive error in primary school children.
Accuracy of Jump-Mat Systems for Measuring Jump Height.
Pueo, Basilio; Lipinska, Patrycja; Jiménez-Olmedo, José M; Zmijewski, Piotr; Hopkins, Will G
2017-08-01
Vertical-jump tests are commonly used to evaluate lower-limb power of athletes and nonathletes. Several types of equipment are available for this purpose. To compare the error of measurement of 2 jump-mat systems (Chronojump-Boscosystem and Globus Ergo Tester) with that of a motion-capture system as a criterion and to determine the modifying effect of foot length on jump height. Thirty-one young adult men alternated 4 countermovement jumps with 4 squat jumps. Mean jump height and standard deviations representing technical error of measurement arising from each device and variability arising from the subjects themselves were estimated with a novel mixed model and evaluated via standardization and magnitude-based inference. The jump-mat systems produced nearly identical measures of jump height (differences in means and in technical errors of measurement ≤1 mm). Countermovement and squat-jump height were both 13.6 cm higher with motion capture (90% confidence limits ±0.3 cm), but this very large difference was reduced to small unclear differences when adjusted to a foot length of zero. Variability in countermovement and squat-jump height arising from the subjects was small (1.1 and 1.5 cm, respectively, 90% confidence limits ±0.3 cm); technical error of motion capture was similar in magnitude (1.7 and 1.6 cm, ±0.3 and ±0.4 cm), and that of the jump mats was similar or smaller (1.2 and 0.3 cm, ±0.5 and ±0.9 cm). The jump-mat systems provide trustworthy measurements for monitoring changes in jump height. Foot length can explain the substantially higher jump height observed with motion capture.
Idzinga, J C; de Jong, A L; van den Bemt, P M L A
2009-11-01
Previous studies, both in hospitals and in institutions for clients with an intellectual disability (ID), have shown that medication errors at the administration stage are frequent, especially when medication has to be administered through an enteral feeding tube. In hospitals a specially designed intervention programme has proven to be effective in reducing these feeding tube-related medication errors, but the effect of such a programme within an institution for clients with an ID is unknown. Therefore, a study was designed to measure the influence of such an intervention programme on the number of medication administration errors in clients with an ID who also have enteral feeding tubes. A before-after study design with disguised observation to document administration errors was used. The study was conducted from February to June 2008 within an institution for individuals with an ID in the Western part of The Netherlands. Included were clients with enteral feeding tubes. The intervention consisted of advice on medication administration through enteral feeding tubes by the pharmacist, a training programme and introduction of a 'medication through tube' box containing proper materials for crushing and suspending tablets. The outcome measure was the frequency of medication administration errors, comparing the pre-intervention period with the post-intervention period. A total of 245 medication administrations in six clients (by 23 nurse attendants) have been observed in the pre-intervention measurement period and 229 medication administrations in five clients (by 20 nurse attendants) have been observed in the post-intervention period. Before the intervention, 158 (64.5%) medication administration errors were observed, and after the intervention, this decreased to 69 (30.1%). Of all potential confounders and effect modifiers, only 'medication dispensed in automated dispensing system ("robot") packaging' contributed to the multivariate model; effect modification was shown for this determinant. Multilevel analysis using this multivariate model resulted in an odds ratio of 0.33 (95% confidence interval 0.13-0.71) for the error percentage in the post-intervention period compared with the pre-intervention period. The intervention was found to be effective in an institution for clients with an ID. However, additional efforts are needed to reduce the proportion of administration errors which is still high after the intervention.
Ocular findings among young men: a 12-year prevalence study of military service in Poland.
Nowak, Michal S; Jurowski, Piotr; Gos, Roman; Smigielski, Janusz
2010-08-01
To determine the prevalence of ocular diseases among young men and to assess the main ocular causes of discharge from military service in Poland. A retrospective review of the medical records of 105 017 men undergoing a preliminary examination for military service during the period 1993-2004. Sample size for the study was calculated with 99% confidence within an error margin of 5%. All of the study participants were White men of European origin, most of whom live or lived in Poland. Data regarding the vision status were assessed in 1938 eyes of 969 participants. Two groups were distinguished based on the age of the participants: group I aged 18-24 years, and group II aged 25-34 years. Presenting visual impairment [visual acuity (VA) <20/40] followed by colour vision defects were the most common ocular disorders, accounting for 13.2%. There were statistically significant differences in uncorrected VA as well as in the rates of particular refractive errors between the age groups (p<0.05). The prevalence of glaucoma and ocular hypertension was significantly higher in older participants. Six hundred and sixty-seven (68.8%) participants examined medically in the study period were accepted for military service. However, 302 (31.2%) failed their examination and were temporarily or permanently discharged from duty. Fifty-two of them (17.2%) were discharged because of various ocular disorders. The most common causes were high refractive errors, which accounted for 38.5% of all the ocular discharges, followed by chronic and recurrent diseases of the posterior segment of the eye, which accounted for 19.2%. The prevalence of ocular disorders among young men in an unselected military population was close to the results obtained in other population-based studies comprising both men and women in the same age group. High refractive errors followed by chronic and recurrent diseases of the posterior segment of the eye are important causes of medical discharges from military service in Poland.
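The quoted sample-size rationale ("99% confidence within an error margin of 5%") corresponds to the standard formula for estimating a proportion, n = z^2 * p * (1 - p) / e^2, usually evaluated at the worst case p = 0.5; the exact inputs used by the authors are not stated, so the following is only a consistency check.

```python
# Sample size for a proportion at 99% confidence and a 5% margin of error (worst case p = 0.5).
from scipy import stats

z = stats.norm.ppf(1 - 0.01 / 2)      # ~2.576 for 99% confidence
p, e = 0.5, 0.05
n = z**2 * p * (1 - p) / e**2
print(round(n))                        # about 663, below the 969 participants assessed
```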
Test Method Variability in Slow Crack Growth Properties of Sealing Glasses
NASA Technical Reports Server (NTRS)
Salem, J. A.; Tandon, R.
2010-01-01
The crack growth properties of several sealing glasses were measured by using constant stress rate testing in 2 and 95 percent RH (relative humidity). Crack growth parameters measured in high humidity are systematically smaller (n and B) than those measured in low humidity, and crack velocities for dry environments are 100x lower than for wet environments. The crack velocity is very sensitive to small changes in RH at low RH. Biaxial and uniaxial stress states produced similar parameters. Confidence intervals on crack growth parameters that were estimated from propagation of errors solutions were comparable to those from Monte Carlo simulation. Use of scratch-like and indentation flaws produced similar crack growth parameters when residual stresses were considered.
Correcting systematic bias and instrument measurement drift with mzRefinery
Gibbons, Bryson C.; Chambers, Matthew C.; Monroe, Matthew E.; ...
2015-08-04
Systematic bias in mass measurement adversely affects data quality and negates the advantages of high precision instruments. We introduce the mzRefinery tool into the ProteoWizard package for calibration of mass spectrometry data files. Using confident peptide spectrum matches, three different calibration methods are explored and the optimal transform function is chosen. After calibration, systematic bias is removed and the mass measurement errors are centered at zero ppm. Because it is part of the ProteoWizard package, mzRefinery can read and write a wide variety of file formats. In conclusion, we report on availability; the mzRefinery tool is part of msConvert, available with the ProteoWizard open source package at http://proteowizard.sourceforge.net/
Error correction and diversity analysis of population mixtures determined by NGS
Burroughs, Nigel J.; Evans, David J.; Ryabov, Eugene V.
2014-01-01
The impetus for this work was the need to analyse nucleotide diversity in a viral mix taken from honeybees. The paper has two findings. First, a method for correction of next generation sequencing error in the distribution of nucleotides at a site is developed. Second, a package of methods for assessment of nucleotide diversity is assembled. The error correction method is statistically based and works at the level of the nucleotide distribution rather than the level of individual nucleotides. The method relies on an error model and a sample of known viral genotypes that is used for model calibration. A compendium of existing and new diversity analysis tools is also presented, allowing hypotheses about diversity and mean diversity to be tested and associated confidence intervals to be calculated. The methods are illustrated using honeybee viral samples. Software in both Excel and Matlab and a guide are available at http://www2.warwick.ac.uk/fac/sci/systemsbiology/research/software/, the Warwick University Systems Biology Centre software download site. PMID:25405074
Pain, Liza A M; Baker, Ross; Sohail, Qazi Zain; Richardson, Denyse; Zabjek, Karl; Mogk, Jeremy P M; Agur, Anne M R
2018-03-23
Altered three-dimensional (3D) joint kinematics can contribute to shoulder pathology, including post-stroke shoulder pain. Reliable assessment methods enable comparative studies between asymptomatic shoulders of healthy subjects and painful shoulders of post-stroke subjects, and could inform treatment planning for post-stroke shoulder pain. The study purpose was to establish intra-rater test-retest reliability and within-subject repeatability of a palpation/digitization protocol, which assesses 3D clavicular/scapular/humeral rotations, in asymptomatic and painful post-stroke shoulders. Repeated measurements of 3D clavicular/scapular/humeral joint/segment rotations were obtained using palpation/digitization in 32 asymptomatic and six painful post-stroke shoulders during four reaching postures (rest/flexion/abduction/external rotation). Intra-class correlation coefficients (ICCs), standard error of the measurement and 95% confidence intervals were calculated. All ICC values indicated high to very high test-retest reliability (≥0.70), with lower reliability for scapular anterior/posterior tilt during external rotation in asymptomatic subjects, and scapular medial/lateral rotation, humeral horizontal abduction/adduction and axial rotation during abduction in post-stroke subjects. All standard error of measurement values demonstrated within-subject repeatability error ≤5° for all clavicular/scapular/humeral joint/segment rotations (asymptomatic ≤3.75°; post-stroke ≤5.0°), except for humeral axial rotation (asymptomatic ≤5°; post-stroke ≤15°). This noninvasive, clinically feasible palpation/digitization protocol was reliable and repeatable in asymptomatic shoulders, and in a smaller sample of painful post-stroke shoulders. Implications for Rehabilitation In the clinical setting, a reliable and repeatable noninvasive method for assessment of three-dimensional (3D) clavicular/scapular/humeral joint orientation and range of motion (ROM) is currently required. The established reliability and repeatability of this proposed palpation/digitization protocol will enable comparative 3D ROM studies between asymptomatic and post-stroke shoulders, which will further inform treatment planning. Intra-rater test-retest repeatability, which is measured by the standard error of the measure, indicates the range of error associated with a single test measure. Therefore, clinicians can use the standard error of the measure to determine the "true" differences between pre-treatment and post-treatment test scores.
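The two reliability quantities reported above are linked by a simple relation: the standard error of measurement (SEM) follows from the between-subject SD and the test-retest ICC, and a smallest detectable change can be derived from the SEM. The numbers below are assumed for illustration, not taken from the study.

```python
# SEM and smallest detectable change from an ICC and a between-subject SD (assumed values).
import math

sd_between = 8.0      # between-subject SD of a joint rotation, degrees (assumed)
icc = 0.90            # test-retest ICC (assumed)

sem = sd_between * math.sqrt(1 - icc)      # range of error around a single test score
sdc95 = 1.96 * math.sqrt(2) * sem          # change needed to exceed measurement error at 95%
print(f"SEM = {sem:.1f} deg, smallest detectable change = {sdc95:.1f} deg")
```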
ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.
2010-08-10
A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits that applies to all detection algorithms.
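The two-step recipe can be sketched for a simple Poisson counting observation with known background: fix the detection threshold from the allowed Type I error, then search for the smallest source intensity whose detection probability reaches the required power. The background level and error probabilities below are assumed, not the paper's worked example.

```python
# Hedged sketch of an upper limit for a Poisson observation with known background.
from scipy import stats

background = 3.0          # expected background counts (assumed)
alpha, beta = 0.01, 0.5   # Type I and Type II error probabilities (assumed)

# Smallest count threshold whose false-positive probability is <= alpha
n_det = int(stats.poisson.ppf(1 - alpha, background)) + 1

# Upper limit: smallest intensity detected (counts >= n_det) with probability >= 1 - beta
s = 0.0
while stats.poisson.sf(n_det - 1, background + s) < 1 - beta:
    s += 0.01
print(f"threshold = {n_det} counts, upper limit ~ {s:.2f} source counts")
```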
Simulating the effect of non-linear mode coupling in cosmological parameter estimation
NASA Astrophysics Data System (ADS)
Kiessling, A.; Taylor, A. N.; Heavens, A. F.
2011-09-01
Fisher Information Matrix methods are commonly used in cosmology to estimate the accuracy that cosmological parameters can be measured with a given experiment and to optimize the design of experiments. However, the standard approach usually assumes both data and parameter estimates are Gaussian-distributed. Further, for survey forecasts and optimization it is usually assumed that the power-spectrum covariance matrix is diagonal in Fourier space. However, in the low-redshift Universe, non-linear mode coupling will tend to correlate small-scale power, moving information from lower to higher order moments of the field. This movement of information will change the predictions of cosmological parameter accuracy. In this paper we quantify this loss of information by comparing naïve Gaussian Fisher matrix forecasts with a maximum likelihood parameter estimation analysis of a suite of mock weak lensing catalogues derived from N-body simulations, based on the SUNGLASS pipeline, for a 2D and tomographic shear analysis of a Euclid-like survey. In both cases, we find that the 68 per cent confidence area of the Ωm-σ8 plane increases by a factor of 5. However, the marginal errors increase by just 20-40 per cent. We propose a new method to model the effects of non-linear shear-power mode coupling in the Fisher matrix by approximating the shear-power distribution as a multivariate Gaussian with a covariance matrix derived from the mock weak lensing survey. We find that this approximation can reproduce the 68 per cent confidence regions of the full maximum likelihood analysis in the Ωm-σ8 plane to high accuracy for both 2D and tomographic weak lensing surveys. Finally, we perform a multiparameter analysis of Ωm, σ8, h, ns, w0 and wa to compare the Gaussian and non-linear mode-coupled Fisher matrix contours. The 6D volume of the 1σ error contours for the non-linear Fisher analysis is a factor of 3 larger than for the Gaussian case, and the shape of the 68 per cent confidence volume is modified. We propose that future Fisher matrix estimates of cosmological parameter accuracies should include mode-coupling effects.
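The Gaussian Fisher matrix baseline discussed above has a simple generic form, F_ij = (dmu/dtheta_i)^T C^{-1} (dmu/dtheta_j), with marginal errors read off the inverse. The toy model, parameters, and diagonal data covariance below are placeholders, not the SUNGLASS lensing pipeline; the paper's non-linear case would replace the diagonal C with the covariance estimated from the mock catalogues.

```python
# Generic two-parameter Gaussian Fisher matrix via numerical derivatives (toy model only).
import numpy as np

def model(theta, x):
    a, b = theta                      # stand-ins for cosmological parameters
    return a * x + b * x**2           # toy "power spectrum" model

x = np.linspace(0.1, 1.0, 30)
theta0 = np.array([0.3, 0.8])
C = np.diag(0.05**2 * np.ones(len(x)))          # Gaussian data covariance (diagonal here)
Cinv = np.linalg.inv(C)

def fisher(theta, h=1e-5):
    dmu = []
    for i in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += h; tm[i] -= h
        dmu.append((model(tp, x) - model(tm, x)) / (2 * h))   # central差 derivative of the model
    dmu = np.array(dmu)
    return dmu @ Cinv @ dmu.T

F = fisher(theta0)
marginal_errors = np.sqrt(np.diag(np.linalg.inv(F)))          # 1-sigma marginal errors
print("marginal 1-sigma errors:", marginal_errors)
```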
Biased relevance filtering in the auditory system: A test of confidence-weighted first-impressions.
Mullens, D; Winkler, I; Damaso, K; Heathcote, A; Whitson, L; Provost, A; Todd, J
2016-03-01
Although first-impressions are known to impact decision-making and to have prolonged effects on reasoning, it is less well known that the same type of rapidly formed assumptions can explain biases in automatic relevance filtering outside of deliberate behavior. This paper features two studies in which participants have been asked to ignore sequences of sound while focusing attention on a silent movie. The sequences consisted of blocks, each with a high-probability repetition interrupted by rare acoustic deviations (i.e., a sound of different pitch or duration). The probabilities of the two different sounds alternated across the concatenated blocks within the sequence (i.e., short-to-long and long-to-short). The sound probabilities are rapidly and automatically learned for each block and a perceptual inference is formed predicting the most likely characteristics of the upcoming sound. Deviations elicit a prediction-error signal known as mismatch negativity (MMN). Computational models of MMN generally assume that its elicitation is governed by transition statistics that define what sound attributes are most likely to follow the current sound. MMN amplitude reflects prediction confidence, which is derived from the stability of the current transition statistics. However, our prior research showed that MMN amplitude is modulated by a strong first-impression bias that outweighs transition statistics. Here we test the hypothesis that this bias can be attributed to assumptions about predictable vs. unpredictable nature of each tone within the first encountered context, which is weighted by the stability of that context. The results of Study 1 show that this bias is initially prevented if there is no 1:1 mapping between sound attributes and probability, but it returns once the auditory system determines which properties provide the highest predictive value. The results of Study 2 show that confidence in the first-impression bias drops if assumptions about the temporal stability of the transition-statistics are violated. Both studies provide compelling evidence that the auditory system extrapolates patterns on multiple timescales to adjust its response to prediction-errors, while profoundly distorting the effects of transition-statistics by the assumptions formed on the basis of first-impressions. Copyright © 2016 Elsevier B.V. All rights reserved.
Software thresholds alter the bias of actigraphy for monitoring sleep in team-sport athletes.
Fuller, Kate L; Juliff, Laura; Gore, Christopher J; Peiffer, Jeremiah J; Halson, Shona L
2017-08-01
Actical® actigraphy is commonly used to monitor athlete sleep. The proprietary software, called Actiware®, processes data with three different sleep-wake thresholds (Low, Medium or High), but there is no standardisation regarding their use. The purpose of this study was to examine validity and bias of the sleep-wake thresholds for processing Actical® sleep data in team sport athletes. This was a validation study comparing actigraphy against the accepted gold standard, polysomnography (PSG). Sixty-seven nights of sleep were recorded simultaneously with polysomnography and Actical® devices. Individual night data were compared across five sleep measures for each sleep-wake threshold using Actiware® software. Accuracy of each sleep-wake threshold compared with PSG was evaluated from mean bias with 95% confidence limits, Pearson product-moment correlation and associated standard error of estimate. The Medium threshold generated the smallest mean bias compared with polysomnography for total sleep time (8.5min), sleep efficiency (1.8%) and wake after sleep onset (-4.1min); whereas the Low threshold had the smallest bias (7.5min) for wake bouts. Bias in sleep onset latency was the same across thresholds (-9.5min). The standard error of the estimate was similar across all thresholds; total sleep time ∼25min, sleep efficiency ∼4.5%, wake after sleep onset ∼21min, and wake bouts ∼8 counts. Sleep parameters measured by the Actical® device are greatly influenced by the sleep-wake threshold applied. In the present study the Medium threshold produced the smallest bias for most parameters compared with PSG. Given the magnitude of measurement variability, confidence limits should be employed when interpreting changes in sleep parameters. Copyright © 2017 Sports Medicine Australia. All rights reserved.
Development of a bio-magnetic measurement system and sensor configuration analysis for rats
NASA Astrophysics Data System (ADS)
Kim, Ji-Eun; Kim, In-Seon; Kim, Kiwoong; Lim, Sanghyun; Kwon, Hyukchan; Kang, Chan Seok; Ahn, San; Yu, Kwon Kyu; Lee, Yong-Ho
2017-04-01
Magnetoencephalography (MEG) based on superconducting quantum interference devices enables the measurement of very weak magnetic fields (10-1000 fT) generated from the human or animal brain. In this article, we introduce a small MEG system that we developed specifically for use with rats. Our system has the following characteristics: (1) variable distance between the pick-up coil and outer Dewar bottom (~5 mm), (2) small pick-up coil (4 mm) for high spatial resolution, (3) good field sensitivity (45-80 fT/cm/√Hz), (4) the sensor interval satisfies the Nyquist spatial sampling theorem, and (5) small source localization error for the region to be investigated. To reduce source localization error, it is necessary to establish an optimal sensor layout. To this end, we simulated confidence volumes at each point on a grid on the surface of a virtual rat head. In this simulation, we used locally fitted spheres as model rat heads. This enabled us to consider more realistic volume currents. We constrained the model such that the dipoles could have only four possible orientations: the x- and y-axes from the original coordinates, and two tangentially layered dipoles (local x- and y-axes) in the locally fitted spheres. We considered the confidence volumes according to the sensor layout and dipole orientation and positions. We then conducted a preliminary test with a 4-channel MEG system prior to manufacturing the multi-channel system. Using the 4-channel MEG system, we measured rat magnetocardiograms. We obtained well-defined P-, QRS-, and T-waves in rats with a maximum value of 15 pT/cm. Finally, we measured auditory evoked fields and steady state auditory evoked fields with maximum values of 400 fT/cm and 250 fT/cm, respectively.
The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution
NASA Astrophysics Data System (ADS)
Shin, H.; Heo, J.; Kim, T.; Jung, Y.
2007-12-01
The generalized logistic (GL) distribution has been widely used for frequency analysis. However, there are few studies of the confidence intervals, which indicate the prediction accuracy of the distribution, for the GL distribution. In this paper, the estimation of the confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of the sample sizes, return periods, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of quantiles. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are almost symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show little difference in the estimated quantiles between ML and PWM, while MOM shows distinct differences.
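A minimal Monte Carlo experiment in the spirit of the simulations described above might look like the following sketch, which uses scipy's genlogistic parameterization as a stand-in for the hydrological GL distribution and reports the RBIAS and RRMSE of ML quantile estimates; the true parameters, return period and sample sizes are arbitrary choices.

```python
import numpy as np
from scipy.stats import genlogistic

rng = np.random.default_rng(0)
c_true, loc_true, scale_true = 1.5, 100.0, 20.0   # arbitrary "true" parameters
T = 100                                           # return period (years)
p = 1 - 1.0 / T
q_true = genlogistic.ppf(p, c_true, loc_true, scale_true)

def mc_quantile_errors(n, n_rep=300):
    """RBIAS and RRMSE of the ML estimate of the T-year quantile for samples of size n."""
    q_hat = np.empty(n_rep)
    for r in range(n_rep):
        x = genlogistic.rvs(c_true, loc_true, scale_true, size=n, random_state=rng)
        c, loc, scale = genlogistic.fit(x)
        q_hat[r] = genlogistic.ppf(p, c, loc, scale)
    rbias = np.mean(q_hat - q_true) / q_true
    rrmse = np.sqrt(np.mean((q_hat - q_true) ** 2)) / q_true
    return rbias, rrmse

for n in (20, 50, 100):
    print(n, mc_quantile_errors(n))
```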
Benchmarking observational uncertainties for hydrology (Invited)
NASA Astrophysics Data System (ADS)
McMillan, H. K.; Krueger, T.; Freer, J. E.; Westerberg, I.
2013-12-01
There is a pressing need for authoritative and concise information on the expected error distributions and magnitudes in hydrological data, to understand its information content. Many studies have discussed how to incorporate uncertainty information into model calibration and implementation, and shown how model results can be biased if uncertainty is not appropriately characterised. However, it is not always possible (for example due to financial or time constraints) to make detailed studies of uncertainty for every research study. Instead, we propose that the hydrological community could benefit greatly from sharing information on likely uncertainty characteristics and the main factors that control the resulting magnitude. In this presentation, we review the current knowledge of uncertainty for a number of key hydrological variables: rainfall, flow and water quality (suspended solids, nitrogen, phosphorus). We collated information on the specifics of the data measurement (data type, temporal and spatial resolution), error characteristics measured (e.g. standard error, confidence bounds) and error magnitude. Our results were primarily split by data type. Rainfall uncertainty was controlled most strongly by spatial scale, flow uncertainty was controlled by flow state (low, high) and gauging method. Water quality presented a more complex picture with many component errors. For all variables, it was easy to find examples where relative error magnitude exceeded 40%. We discuss some of the recent developments in hydrology which increase the need for guidance on typical error magnitudes, in particular when doing comparative/regionalisation and multi-objective analysis. Increased sharing of data, comparisons between multiple catchments, and storage in national/international databases can mean that data-users are far removed from data collection, but require good uncertainty information to reduce bias in comparisons or catchment regionalisation studies. Recently it has become more common for hydrologists to use multiple data types and sources within a single study. This may be driven by complex water management questions which integrate water quantity, quality and ecology; or by recognition of the value of auxiliary data to understand hydrological processes. We discuss briefly the impact of data uncertainty on the increasingly popular use of diagnostic signatures for hydrological process understanding and model development.
Cell-phone use diminishes self-awareness of impaired driving.
Sanbonmatsu, David M; Strayer, David L; Biondi, Francesco; Behrends, Arwen A; Moore, Shannon M
2016-04-01
Multitasking diminishes the self-awareness of performance that is often essential for self-regulation and self-knowledge. Participants drove in a simulator while either talking or not talking on a hands-free cell phone. Following previous research, participants who talked on a cell phone made more serious driving errors than control participants who did not use a phone while driving. Control participants' assessments of the safeness of their driving and general ability to drive safely while distracted were negatively correlated with the actual number of errors made when they were driving. By contrast, cell-phone participants' assessments of the safeness of their driving and confidence in their driving abilities were uncorrelated with their actual errors. Thus, talking on a cell phone not only diminished the safeness of participants' driving, it diminished their awareness of the safeness of their driving.
NASA Astrophysics Data System (ADS)
Eldardiry, H. A.; Habib, E. H.
2014-12-01
Radar-based technologies have made spatially and temporally distributed quantitative precipitation estimates (QPE) available in an operational environment, in contrast to rain gauges. The floods identified through flash flood monitoring and prediction systems are subject to at least three sources of uncertainty: (a) rainfall estimation errors, (b) streamflow prediction errors arising from model structural issues, and (c) errors in defining a flood event. The current study focuses on the first source of uncertainty and its effect on deriving important climatological characteristics of extreme rainfall statistics. Examples of such characteristics are rainfall amounts with certain Average Recurrence Intervals (ARI) or Annual Exceedance Probabilities (AEP), which are highly valuable for hydrologic and civil engineering design purposes. Gauge-based precipitation frequency estimates (PFE) have been maturely developed and widely used over the last several decades. More recently, there has been a growing interest by the research community to explore the use of radar-based rainfall products for developing PFE and to understand the associated uncertainties. This study uses radar-based multi-sensor precipitation estimates (MPE) for 11 years to derive PFEs corresponding to various return periods over a spatial domain that covers the state of Louisiana in the southern USA. The PFE estimation approach used in this study is based on fitting a generalized extreme value (GEV) distribution to extreme rainfall data in the form of annual maximum series (AMS). Estimation problems that may arise from fitting GEV distributions at each radar pixel include large variance and seriously biased quantile estimators. Hence, a regional frequency analysis (RFA) approach is applied. The RFA involves the use of data from different pixels surrounding each pixel within a defined homogeneous region. In this study, the region-of-influence approach along with the index flood technique is used in the RFA. A bootstrap procedure is carried out to account for the uncertainty in the distribution parameters and to construct 90% confidence intervals (i.e., 5% and 95% confidence limits) on AMS-based precipitation frequency curves.
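For a single pixel, the AMS-based fitting and bootstrap confidence limits described above can be sketched as follows; the annual-maximum values are synthetic, no regionalization (region of influence, index flood) is included, and the GEV fitting is plain maximum likelihood.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
ams = genextreme.rvs(-0.1, loc=80, scale=25, size=11, random_state=rng)  # synthetic AMS (mm)

T = np.array([2, 10, 25, 50, 100])     # return periods in years
p = 1 - 1.0 / T

def pfe(sample):
    """Fit a GEV by ML and return the quantiles for the chosen return periods."""
    c, loc, scale = genextreme.fit(sample)
    return genextreme.ppf(p, c, loc, scale)

estimate = pfe(ams)

# Bootstrap the AMS to get 90% confidence limits (5% and 95%) on the frequency curve.
B = 500
boot = np.array([pfe(rng.choice(ams, size=ams.size, replace=True)) for _ in range(B)])
lo, hi = np.percentile(boot, [5, 95], axis=0)
for t, e, l, h in zip(T, estimate, lo, hi):
    print(f"{t:4d}-yr PFE: {e:6.1f} mm (90% CI {l:6.1f}-{h:6.1f})")
```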
The prevalence and causes of visual impairment in seven-year-old children.
Ghaderi, Soraya; Hashemi, Hassan; Jafarzadehpur, Ebrahim; Yekta, Abbasali; Ostadimoghaddam, Hadi; Mirzajani, Ali; Khabazkhoob, Mehdi
2018-05-01
To report the prevalence and causes of visual impairment in seven-year-old children in Iran and its relationship with socio-economic conditions. In a cross-sectional population-based study, first-grade students in the primary schools of eight cities in the country were randomly selected from different geographic locations using multistage cluster sampling. The examinations included visual acuity measurement, ocular motility evaluation, and cycloplegic and non-cycloplegic refraction. Using the definitions of the World Health Organization (presenting visual acuity less than or equal to 6/18 in the better eye) to estimate the prevalence of vision impairment, the present study reported presenting visual impairment in seven-year-old children. Of 4,614 selected students, 4,106 students participated in the study (response rate 89 per cent), of whom 2,127 (51.8 per cent) were male. The prevalence of visual impairment according to a visual acuity of 6/18 was 0.341 per cent (95 per cent confidence interval 0.187-0.571); 1.34 per cent (95 per cent confidence interval 1.011-1.74) of children had visual impairment according to a visual acuity of 6/18 in at least one eye. Sixty-six (1.6 per cent) and 23 (0.24 per cent) children had visual impairment according to a visual acuity of 6/12 in the worse and better eye, respectively. The most common causes of visual impairment were refractive errors (81.8 per cent) and amblyopia (14.5 per cent). Among different types of refractive errors, astigmatism was the main refractive error leading to visual impairment. According to the concentration index, the distribution of visual impairment in children from low-income families was higher. This study revealed a high prevalence of visual impairment in a representative sample of seven-year-old Iranian children. Astigmatism and amblyopia were the most common causes of visual impairment. The distribution of visual impairment was higher in children from low-income families. Cost-effective strategies are needed to address these easily treatable causes of visual impairment. © 2017 Optometry Australia.
NASA Astrophysics Data System (ADS)
Clough, B.; Russell, M.; Domke, G. M.; Woodall, C. W.
2016-12-01
Uncertainty estimates are needed to establish confidence in national forest carbon stocks and to verify changes reported to the United Nations Framework Convention on Climate Change. Good practice guidance from the Intergovernmental Panel on Climate Change stipulates that uncertainty assessments should neither exaggerate nor underestimate the actual error within carbon stocks, yet methodological guidance for forests has been hampered by limited understanding of how complex dynamics give rise to errors across spatial scales (i.e., individuals to continents). This talk highlights efforts to develop a multi-scale, data-driven framework for assessing uncertainty within the United States (US) forest carbon inventory, and focuses on challenges and opportunities for improving the precision of national forest carbon stock estimates. Central to our approach is the calibration of allometric models with a newly established legacy biomass database for North American tree species, and the use of hierarchical models to link these data with the Forest Inventory and Analysis (FIA) database as well as remote sensing datasets. Our work suggests substantial risk for misestimating key sources of uncertainty including: (1) attributing more confidence in allometric models than what is warranted by the best available data; (2) failing to capture heterogeneity in biomass stocks due to environmental variation at regional scales; and (3) ignoring spatial autocorrelation and other random effects that are characteristic of national forest inventory data. Our results suggest these sources of error may be much higher than is generally assumed, though these results must be understood with the limited scope and availability of appropriate calibration data in mind. In addition to reporting on important sources of uncertainty, this talk will discuss opportunities to improve the precision of national forest carbon stocks that are motivated by our use of data-driven forecasting including: (1) improving the taxonomic and geographic scope of available biomass data; (2) direct attribution of landscape-level heterogeneity in biomass stocks to specific ecological processes; and (3) integration of expert opinion and meta-analysis to lessen the influence of often highly variable datasets on biomass stock forecasts.
Nouretdinov, Ilia; Costafreda, Sergi G; Gammerman, Alexander; Chervonenkis, Alexey; Vovk, Vladimir; Vapnik, Vladimir; Fu, Cynthia H Y
2011-05-15
There is rapidly accumulating evidence that the application of machine learning classification to neuroimaging measurements may be valuable for the development of diagnostic and prognostic prediction tools in psychiatry. However, current methods do not produce a measure of the reliability of the predictions. Knowing the risk of the error associated with a given prediction is essential for the development of neuroimaging-based clinical tools. We propose a general probabilistic classification method to produce measures of confidence for magnetic resonance imaging (MRI) data. We describe the application of transductive conformal predictor (TCP) to MRI images. TCP generates the most likely prediction and a valid measure of confidence, as well as the set of all possible predictions for a given confidence level. We present the theoretical motivation for TCP, and we have applied TCP to structural and functional MRI data in patients and healthy controls to investigate diagnostic and prognostic prediction in depression. We verify that TCP predictions are as accurate as those obtained with more standard machine learning methods, such as support vector machine, while providing the additional benefit of a valid measure of confidence for each prediction. Copyright © 2010 Elsevier Inc. All rights reserved.
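The flavour of a transductive conformal predictor can be conveyed with a minimal sketch that uses a simple 1-nearest-neighbour nonconformity score on synthetic data; this is not the authors' implementation for MRI features, only the generic TCP recipe of tentatively labelling the test case, computing p-values, and reporting the prediction, its confidence, and the prediction set at a chosen significance level.

```python
import numpy as np

def nonconformity(X, y, i):
    """1-NN nonconformity: distance to the nearest same-label example divided by
    the distance to the nearest other-label example (larger = stranger)."""
    d = np.linalg.norm(X - X[i], axis=1)
    d[i] = np.inf
    return d[y == y[i]].min() / d[y != y[i]].min()

def tcp_predict(X_train, y_train, x_new, labels, eps=0.05):
    p_values = {}
    for lab in labels:
        X = np.vstack([X_train, x_new])
        y = np.append(y_train, lab)                   # tentatively assign the candidate label
        scores = np.array([nonconformity(X, y, i) for i in range(len(y))])
        p_values[lab] = np.mean(scores >= scores[-1]) # fraction at least as nonconforming
    prediction = max(p_values, key=p_values.get)
    confidence = 1 - sorted(p_values.values())[-2]    # 1 minus the second-largest p-value
    region = [lab for lab, p in p_values.items() if p > eps]  # prediction set at level eps
    return prediction, confidence, region

rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0.0, 1.0, (30, 5)), rng.normal(1.5, 1.0, (30, 5))])
y_train = np.array([0] * 30 + [1] * 30)
x_new = rng.normal(1.2, 1.0, 5)
print(tcp_predict(X_train, y_train, x_new, labels=[0, 1]))
```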
Modeling longitudinal data, I: principles of multivariate analysis.
Ravani, Pietro; Barrett, Brendan; Parfrey, Patrick
2009-01-01
Statistical models are used to study the relationship between exposure and disease while accounting for the potential role of other factors' impact on outcomes. This adjustment is useful to obtain unbiased estimates of true effects or to predict future outcomes. Statistical models include a systematic component and an error component. The systematic component explains the variability of the response variable as a function of the predictors and is summarized in the effect estimates (model coefficients). The error element of the model represents the variability in the data unexplained by the model and is used to build measures of precision around the point estimates (confidence intervals).
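The decomposition into a systematic component (effect estimates) and an error component (used to build confidence intervals) can be illustrated with a small ordinary least squares example on simulated data; the variable names and effect sizes are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 200
exposure = rng.binomial(1, 0.4, n)                 # predictor of interest
age = rng.normal(50, 10, n)                        # adjustment covariate
y = 2.0 + 1.5 * exposure + 0.05 * age + rng.normal(0.0, 1.0, n)  # systematic part + error

X = np.column_stack([np.ones(n), exposure, age])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)       # effect estimates (model coefficients)
resid = y - X @ beta                               # unexplained variability (error component)
dof = n - X.shape[1]
sigma2 = resid @ resid / dof
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
t_crit = stats.t.ppf(0.975, dof)
for name, b, s in zip(["intercept", "exposure", "age"], beta, se):
    print(f"{name:>9}: {b:6.3f}  95% CI ({b - t_crit * s:6.3f}, {b + t_crit * s:6.3f})")
```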
Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr
2012-01-01
Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights, which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
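A bootstrap-based confidence interval for an efficiency gain can be sketched as below; the per-history scores, timings and the simple ratio-of-variances definition are synthetic stand-ins rather than the brachytherapy quantities analysed in the study.

```python
import numpy as np

rng = np.random.default_rng(7)
scores_conv = rng.exponential(1.0, 10000)      # per-history scores, conventional MC (synthetic)
scores_corr = rng.normal(0.0, 0.2, 10000)      # per-history dose differences, correlated sampling
t_conv, t_corr = 1.0, 0.6                      # relative computing times per history (assumed)

def gain(a, b):
    """Efficiency gain = (variance x time) of conventional MC over correlated sampling."""
    return (np.var(a, ddof=1) * t_conv) / (np.var(b, ddof=1) * t_corr)

B = 1000
boot = np.array([gain(rng.choice(scores_conv, scores_conv.size),
                      rng.choice(scores_corr, scores_corr.size)) for _ in range(B)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"gain = {gain(scores_conv, scores_corr):.2f}, 95% bootstrap CI ({lo:.2f}, {hi:.2f})")
```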
Generalized Bootstrap Method for Assessment of Uncertainty in Semivariogram Inference
Olea, R.A.; Pardo-Iguzquiza, E.
2011-01-01
The semivariogram and its related function, the covariance, play a central role in classical geostatistics for modeling the average continuity of spatially correlated attributes. Whereas all methods are formulated in terms of the true semivariogram, in practice what can be used are estimated semivariograms and models based on samples. A generalized form of the bootstrap method to properly model spatially correlated data is used to advance knowledge about the reliability of empirical semivariograms and semivariogram models based on a single sample. Among several methods available to generate spatially correlated resamples, we selected a method based on the LU decomposition and used several examples to illustrate the approach. The first is a synthetic, isotropic, exhaustive sample following a normal distribution; the second is also synthetic but follows a non-Gaussian random field; and the third is an empirical sample consisting of actual rain-gauge measurements. Results show wider confidence intervals than those found previously by others with inadequate application of the bootstrap. Also, even for the Gaussian example, distributions for estimated semivariogram values and model parameters are positively skewed. In this sense, bootstrap percentile confidence intervals, which are not centered around the empirical semivariogram and do not require distributional assumptions for their construction, provide an achieved coverage similar to the nominal coverage. The latter cannot be achieved by symmetrical confidence intervals based on the standard error, regardless of whether the standard error is estimated from a parametric equation or from the bootstrap. © 2010 International Association for Mathematical Geosciences.
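The core idea, generating spatially correlated resamples from a Cholesky (LU) factor of a fitted covariance model and reading percentile confidence intervals off the resampled empirical semivariograms, can be sketched as follows; the exponential covariance, synthetic coordinates and parameter values are arbitrary illustrations, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 100
coords = rng.uniform(0, 100, (n, 2))                       # synthetic sample locations
h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

def exp_cov(h, sill=1.0, corr_range=30.0, nugget=0.05):
    """Exponential covariance model with a small nugget on the diagonal."""
    return np.where(h == 0, sill + nugget, sill * np.exp(-h / corr_range))

L = np.linalg.cholesky(exp_cov(h))                         # LU/Cholesky factor of the covariance

def empirical_semivariogram(z, lag_edges):
    gamma = []
    sq_diff = (z[:, None] - z[None, :]) ** 2
    for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
        mask = (h > lo) & (h <= hi)
        gamma.append(0.5 * sq_diff[mask].mean())
    return np.array(gamma)

lag_edges = np.linspace(0, 60, 7)                          # 10-unit lag bins
B = 500
boot = np.array([empirical_semivariogram(L @ rng.standard_normal(n), lag_edges)
                 for _ in range(B)])
lo_ci, hi_ci = np.percentile(boot, [2.5, 97.5], axis=0)    # percentile confidence intervals
print(np.round(lo_ci, 2))
print(np.round(hi_ci, 2))
```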
Adult Science Learners' Mathematical Mistakes: An Analysis of Responses to Computer-Marked Questions
ERIC Educational Resources Information Center
Jordan, Sally
2014-01-01
Inspection of thousands of student responses to computer-marked assessment questions has brought insight into the errors made by adult distance learners of science. Most of the questions analysed were in summative use and required students to construct their own response. Both of these things increased confidence in the reliability of the…
ERIC Educational Resources Information Center
Padilla, Miguel A.; Divers, Jasmin
2016-01-01
Coefficient omega and alpha are both measures of the composite reliability for a set of items. Unlike coefficient alpha, coefficient omega remains unbiased with congeneric items with uncorrelated errors. Despite this ability, coefficient omega is not as widely used and cited in the literature as coefficient alpha. Reasons for coefficient omega's…
Author Correction: Phase-resolved X-ray polarimetry of the Crab pulsar with the AstroSat CZT Imager
NASA Astrophysics Data System (ADS)
Vadawale, S. V.; Chattopadhyay, T.; Mithun, N. P. S.; Rao, A. R.; Bhattacharya, D.; Vibhute, A.; Bhalerao, V. B.; Dewangan, G. C.; Misra, R.; Paul, B.; Basu, A.; Joshi, B. C.; Sreekumar, S.; Samuel, E.; Priya, P.; Vinod, P.; Seetha, S.
2018-05-01
In the Supplementary Information file originally published for this Letter, in Supplementary Fig. 7 the error bars for the polarization fraction were provided as confidence intervals but instead should have been Bayesian credibility intervals. This has been corrected and does not alter the conclusions of the Letter in any way.
A Generally Robust Approach for Testing Hypotheses and Setting Confidence Intervals for Effect Sizes
ERIC Educational Resources Information Center
Keselman, H. J.; Algina, James; Lix, Lisa M.; Wilcox, Rand R.; Deering, Kathleen N.
2008-01-01
Standard least squares analysis of variance methods suffer from poor power under arbitrarily small departures from normality and fail to control the probability of a Type I error when standard assumptions are violated. This article describes a framework for robust estimation and testing that uses trimmed means with an approximate degrees of…
The Consequences of Ignoring Item Parameter Drift in Longitudinal Item Response Models
ERIC Educational Resources Information Center
Lee, Wooyeol; Cho, Sun-Joo
2017-01-01
Utilizing a longitudinal item response model, this study investigated the effect of item parameter drift (IPD) on item parameters and person scores via a Monte Carlo study. Item parameter recovery was investigated for various IPD patterns in terms of bias and root mean-square error (RMSE), and percentage of time the 95% confidence interval covered…
Feedback for reinforcement learning based brain-machine interfaces using confidence metrics.
Prins, Noeline W; Sanchez, Justin C; Prasad, Abhishek
2017-06-01
For brain-machine interfaces (BMI) to be used in activities of daily living by paralyzed individuals, the BMI should be as autonomous as possible. One of the challenges is how the feedback is extracted and utilized in the BMI. Our long-term goal is to create autonomous BMIs that can utilize an evaluative feedback from the brain to update the decoding algorithm and use it intelligently in order to adapt the decoder. In this study, we show how to extract the necessary evaluative feedback from a biologically realistic (synthetic) source, use both the quantity and the quality of the feedback, and how that feedback information can be incorporated into a reinforcement learning (RL) controller architecture to maximize its performance. Motivated by the perception-action-reward cycle (PARC) in the brain, which links reward for cognitive decision making and goal-directed behavior, we used a reward-based RL architecture named Actor-Critic RL as the model. Instead of using an error signal towards building an autonomous BMI, we envision using a reward signal from the nucleus accumbens (NAcc), which plays a key role in the linking of reward to motor behaviors. To deal with the complexity and non-stationarity of biological reward signals, we used a confidence metric to indicate the degree of feedback accuracy. This confidence was added to the Actor's weight update equation in the RL controller architecture. If the confidence was high (>0.2), the BMI decoder used this feedback to update its parameters. However, when the confidence was low, the BMI decoder ignored the feedback and did not update its parameters. The range between high confidence and low confidence was termed the 'ambiguous' region. When the feedback was within this region, the BMI decoder updated its weights at a lower rate than when fully confident, with the rate determined by the confidence. We used two biologically realistic models to generate synthetic data for MI (Izhikevich model) and NAcc (Humphries model) to validate the proposed controller architecture. In this work, we show how the overall performance of the BMI was improved by using a threshold close to the decision boundary to reject erroneous feedback. Additionally, we show that the stability of the system improved when the feedback was used with a threshold. The result of this study is a step towards making BMIs autonomous. While our method is not fully autonomous, the results demonstrate that the extensive training times necessary at the beginning of each BMI session can be significantly decreased. In our approach, decoder training time was limited to only 10 trials in the first BMI session. Subsequent sessions used previous session weights to initialize the decoder. We also present a method where the use of a threshold can be applied to any decoder with a feedback signal that is less than perfect, so that erroneous feedback can be avoided and the stability of the system can be increased.
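The confidence-gated update rule described above reduces to a few lines; the sketch below uses illustrative thresholds and a generic gradient term rather than the authors' actual Actor-Critic update.

```python
import numpy as np

def gated_actor_update(weights, gradient, td_error, confidence,
                       low_thr=0.2, high_thr=0.8, lr=0.01):
    """Update actor weights only when the evaluative feedback is trustworthy enough."""
    if confidence < low_thr:
        return weights                                    # ignore unreliable feedback
    scale = 1.0 if confidence >= high_thr else confidence # 'ambiguous' region: damped update
    return weights + lr * scale * td_error * gradient

w = np.zeros(8)
g = np.ones(8)
for conf in (0.1, 0.5, 0.9):
    print(conf, gated_actor_update(w, g, td_error=1.0, confidence=conf)[:3])
```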
Extremal optimization for Sherrington-Kirkpatrick spin glasses
NASA Astrophysics Data System (ADS)
Boettcher, S.
2005-08-01
Extremal Optimization (EO), a new local search heuristic, is used to approximate ground states of the mean-field spin glass model introduced by Sherrington and Kirkpatrick. The implementation extends the applicability of EO to systems with highly connected variables. Approximate ground states of sufficient accuracy and with statistical significance are obtained for systems with more than N=1000 variables using ±J bonds. The data reproduce the well-known Parisi solution for the average ground state energy of the model to about 0.01%, providing a high degree of confidence in the heuristic. The results support, to better than 1% accuracy, the rational values ω=2/3 for the finite-size correction exponent and ρ=3/4 for the fluctuation exponent of the ground state energies, neither of which has yet been obtained analytically. The probability density function for ground state energies is highly skewed and identical within numerical error to the one found for Gaussian bonds. However, comparison with infinite-range models of finite connectivity shows that the skewness is connectivity-dependent.
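A bare-bones τ-EO move for the SK model, ranking spins by their local fitness and flipping a poorly fit spin chosen with probability proportional to rank^(-τ), can be sketched as follows; the small system size, τ value and step count are arbitrary, and this is not the optimized implementation of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
N, tau, steps = 64, 1.4, 20000
J = rng.choice([-1.0, 1.0], size=(N, N)) / np.sqrt(N)   # +/-J bonds with SK 1/sqrt(N) scaling
J = np.triu(J, 1)
J = J + J.T                                             # symmetric couplings, zero diagonal
s = rng.choice([-1.0, 1.0], size=N)                     # random initial spin configuration

ranks = np.arange(1, N + 1)
pk = ranks ** (-tau)
pk /= pk.sum()                                          # P(pick the k-th worst spin) ~ k^-tau

best_e = np.inf
for _ in range(steps):
    local = s * (J @ s)                                 # fitness lambda_i = s_i * sum_j J_ij s_j
    order = np.argsort(local)                           # most 'unfit' spins first
    i = order[rng.choice(N, p=pk)]
    s[i] = -s[i]                                        # unconditional flip of the selected spin
    e = -0.5 * s @ J @ s / N                            # energy per spin
    best_e = min(best_e, e)
print("best energy per spin found:", round(best_e, 4))
```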
Krishnaiah, Sannapaneni; Srinivas, Marmamula; Khanna, Rohit C; Rao, Gullapalli N
2009-01-01
To report the prevalence, risk factors and associated population attributable risk percentage (PAR) for refractive errors in the South Indian adult population. A population-based cross-sectional epidemiologic study was conducted in the Indian state of Andhra Pradesh. A multistage cluster, systematic, stratified random sampling method was used to obtain participants (n = 10293) for this study. The age-gender-area-adjusted prevalence rates in those ≥40 years of age were determined for myopia (spherical equivalent [SE] < -0.5 D) 34.6% (95% confidence interval [CI]: 33.1-36.1), high-myopia (SE < -5.0 D) 4.5% (95% CI: 3.8-5.2), hyperopia (SE > +0.5 D) 18.4% (95% CI: 17.1-19.7), astigmatism (cylinder < -0.5 D) 37.6% (95% CI: 36-39.2), and anisometropia (SE difference between right and left eyes >0.5 D) 13.0% (95% CI: 11.9-14.1). The prevalence of myopia, astigmatism, high-myopia, and anisometropia significantly increased with increasing age (all p < 0.0001). There was no gender difference in prevalence rates in any type of refractive error, though women had a significantly higher rate of hyperopia than men (p < 0.0001). Hyperopia was significantly higher among those with a higher educational level (odds ratio [OR] 2.49; 95% CI: 1.51-3.95) and significantly higher among the hypertensive group (OR 1.24; 95% CI: 1.03-1.49). The severity of lens nuclear opacity was positively associated with myopia and negatively associated with hyperopia. The prevalence of myopia in this adult Indian population is much higher than in similarly aged white populations. These results confirm the previously reported association between myopia, hyperopia, and nuclear opacity.
Pan, Chen-Wei; Wong, Tien-Yin; Lavanya, Raghavan; Wu, Ren-Yi; Zheng, Ying-Feng; Lin, Xiao-Yu; Mitchell, Paul; Aung, Tin; Saw, Seang-Mei
2011-05-16
To determine the prevalence and risk factors for refractive errors in middle-aged to elderly Singaporeans of Indian ethnicity. A population-based, cross-sectional study of Indians aged over 40 years residing in Southwestern Singapore was conducted. An age-stratified (10-year age group) random sampling procedure was performed to select participants. Refraction was determined by autorefraction followed by subjective refraction. Myopia was defined as spherical equivalent (SE) < -0.50 diopters (D), high myopia as SE < -5.00 D, astigmatism as cylinder < -0.50 D, hyperopia as SE > 0.50 D, and anisometropia as SE difference > 1.00 D. Prevalence was adjusted to the 2000 Singapore census. Of the 4497 persons eligible to participate, 3400 (75.6%) were examined. Complete data were available for 2805 adults with right eye refractive error and no prior cataract surgery. The age-adjusted prevalence was 28.0% (95% confidence interval [CI], 25.8-30.2) for myopia and 4.1% (95% CI, 3.3-5.0) for high myopia. There was a U-shaped relationship between myopia and increasing age. The age-adjusted prevalence was 54.9% (95% CI, 52.0-57.9) for astigmatism, 35.9% (95% CI, 33.7-38.3) for hyperopia, and 9.8% (95% CI, 8.6-11.1) for anisometropia. In a multiple logistic regression model, adults who were female, younger, taller, spent more time reading and writing per day, or had astigmatism were more likely to be myopic. Adults who were older or had myopia or diabetes mellitus had a higher risk of astigmatism. In Singapore, the prevalence of myopia in Indian adults is similar to that in Malays, but lower than that in Chinese. Risk factors for myopia are similar across the three ethnic groups in Singapore.
Hashemi, H; Yekta, A; Jafarzadehpur, E; Doostdar, A; Ostadimoghaddam, H; Khabazkhoob, M
2017-08-01
Purpose: To determine the prevalence of visual impairment and blindness in underserved Iranian villages and to identify the most common cause of visual impairment and blindness. Patients and methods: Multistage cluster sampling was used to select the participants, who were then invited to undergo complete examinations. Optometric examinations, including visual acuity and refraction, were performed for all individuals. Ophthalmic examinations included slit-lamp biomicroscopy and ophthalmoscopy. Visual impairment was determined according to the definitions of the WHO and presenting vision. Results: Of 3851 selected individuals, 3314 (86.5%) participated in the study. After applying the exclusion criteria, the present report was prepared based on the data of 3095 participants. The mean age of the participants was 37.6±20.7 years (3-93 years). The prevalence of visual impairment and blindness was 6.43% (95% confidence interval (CI): 3.71-9.14) and 1.18% (95% CI: 0.56-1.79), respectively. The prevalence of visual impairment varied from 0.75% in participants aged less than 5 years to 38.36% in individuals above the age of 70 years. Uncorrected refractive errors and cataract were the first and second leading causes of visual impairment; moreover, cataract and refractive errors were responsible for 35.90 and 20.51% of the cases of blindness, respectively. Conclusion: The prevalence of visual impairment was markedly high in this study. Lack of access to health services was the main reason for the high prevalence of visual impairment in this study. Cataract and refractive errors are responsible for 80% of visual impairments, which can be due to poverty in underserved villages.
Bifftu, Berhanu Boru; Dachew, Berihun Assefa; Tiruneh, Bewket Tadesse; Beshah, Debrework Tesgera
2016-01-01
Medication administration is the final phase of the medication process, in which an error directly affects patient health. Due to the central role of nurses in medication administration, whether they are the source of an error, a contributor, or an observer, they have the professional, legal and ethical responsibility to recognize and report it. The aim of this study was to assess the prevalence of medication administration error reporting and associated factors among nurses working at The University of Gondar Referral Hospital, Northwest Ethiopia. An institution-based quantitative cross-sectional study was conducted among 282 nurses. Data were collected using a semi-structured, self-administered Medication Administration Errors Reporting (MAERs) questionnaire. Binary logistic regression with 95% confidence intervals was used to identify factors associated with medication administration error reporting. The estimated rate of medication administration error reporting was found to be 29.1%. The perceived rates of medication administration error reporting ranged from 16.8 to 28.6% for non-intravenous related medications and from 20.6 to 33.4% for intravenous-related medications. Education status (AOR = 1.38, 95% CI: 4.009, 11.128), disagreement over the time-error definition (AOR = 0.44, 95% CI: 0.468, 0.990), administrative reasons (AOR = 0.35, 95% CI: 0.168, 0.710) and fear (AOR = 0.39, 95% CI: 0.257, 0.838) were factors statistically significantly associated with not reporting medication administration errors at p-value <0.05. In this study, less than one third of the study participants reported medication administration errors. Educational status, disagreement over the time-error definition, administrative reasons and fear were statistically significant factors for the refusal to report errors at p-value <0.05. Therefore, the results of this study suggest that the health care organization should adopt strategies that enhance the culture of error reporting, such as providing a clear definition of reportable errors and strengthening the educational status of nurses.
Profile-likelihood Confidence Intervals in Item Response Theory Models.
Chalmers, R Philip; Pek, Jolynn; Liu, Yang
2017-01-01
Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
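A profile-likelihood confidence interval can be sketched on a deliberately simplified item-response-style model rather than a full IRT fit: one item with difficulty b, a discrimination a treated as the nuisance parameter, and person abilities assumed known; all simulated values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar, brentq
from scipy.stats import chi2

rng = np.random.default_rng(1)
theta = rng.normal(size=500)                      # person abilities, treated as known
a_true, b_true = 1.2, 0.3                         # discrimination (nuisance) and difficulty
p = 1.0 / (1.0 + np.exp(-a_true * (theta - b_true)))
y = rng.binomial(1, p)                            # simulated item responses

def negloglik(a, b):
    eta = a * (theta - b)
    return -np.sum(y * eta - np.log1p(np.exp(eta)))   # Bernoulli-logit log-likelihood

def profile_nll(b):
    """Minimize over the nuisance discrimination a for a fixed difficulty b."""
    return minimize_scalar(lambda a: negloglik(a, b), bounds=(0.05, 10), method="bounded").fun

fit = minimize_scalar(profile_nll, bounds=(-3, 3), method="bounded")
b_hat, nll_hat = fit.x, fit.fun

crit = chi2.ppf(0.95, df=1) / 2                   # likelihood-ratio cutoff for a 95% CI
g = lambda b: profile_nll(b) - nll_hat - crit
lo, hi = brentq(g, -3, b_hat), brentq(g, b_hat, 3)
print(f"b_hat = {b_hat:.3f}, 95% profile-likelihood CI = ({lo:.3f}, {hi:.3f})")
```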
What is the cause of confidence inflation in the Life Events Inventory (LEI) paradigm?
Von Glahn, Nicholas R; Otani, Hajime; Migita, Mai; Langford, Sara J; Hillard, Erin E
2012-01-01
Briefly imagining, paraphrasing, or explaining an event causes people to increase their confidence that this event occurred during childhood-the imagination inflation effect. The mechanisms responsible for the effect were investigated with a new paradigm. In Experiment 1, event familiarity (defined as processing fluency) was varied by asking participants to rate each event once, three times, or five times. No inflation was found, indicating that familiarity does not account for the effect. In Experiment 2, richness of memory representation was manipulated by asking participants to generate zero, three, or six details. Confidence increased from the initial to the final rating in the three- and six-detail conditions, indicating that the effect is based on reality-monitoring errors. However, greater inflation in the three-detail condition than in the six-detail condition indicated that there is a boundary condition. These results were also consistent with an alternative hypothesis, the mental workload hypothesis.
Improving confidence in environmental DNA species detection.
Jerde, Christopher L; Mahon, Andrew R
2015-05-01
Will we catch fish today? Our grandfathers' responses were usually something along the lines of, 'Probably. I've caught them here before'. One of the foundations of ecology is identifying which species are present, and where. This informs our understanding of species richness patterns, spread of invasive species, and loss of threatened and endangered species due to environmental change. However, our understanding is often lacking, particularly in aquatic environments where biodiversity remains hidden below the water's surface. The emerging field of metagenetic species surveillance is aiding our ability to rapidly determine which aquatic species are present, and where. In this issue of Molecular Ecology Resources, Ficetola et al. () provide a framework for metagenetic environmental DNA surveillance to foster the confidence of our grandfathers' fishing prowess by more rigorously evaluating the replication levels necessary to quantify detection errors and ultimately improving our confidence in aquatic species presence. © 2015 John Wiley & Sons Ltd.
Simões, Maria do Socorro Mp; Garcia, Isabel Ff; Costa, Lucíola da Cm; Lunardi, Adriana C
2018-05-01
The Life-Space Assessment (LSA) assesses mobility based on the spaces to which older adults go, and how often and how independently they move. Despite its increased use, LSA measurement properties remain unclear. The aim of the present study was to analyze the content validity, reliability, construct validity and interpretability of the LSA for Brazilian community-dwelling older adults. In this clinimetric study we analyzed the measurement properties (content validity, reliability, construct validity and interpretability) of the LSA administered to 80 Brazilian community-dwelling older adults. Reliability was analyzed by Cronbach's alpha (internal consistency), intraclass correlation coefficients and 95% confidence interval (reproducibility), and standard error of measurement (measurement error). Construct validity was analyzed by Pearson's correlations between the LSA and accelerometry (time in inactivity and moderate-to-vigorous activities), and interpretability was analyzed by determination of the minimal detectable change, and floor and ceiling effects. The LSA met the criteria for content validity. The Cronbach's alpha was 0.92, the intraclass correlation coefficient was 0.97 (95% confidence interval 0.95-0.98) and the standard error of measurement was 4.12. The LSA showed convergence with accelerometry (negative correlation with time in inactivity and positive correlation with time in moderate to vigorous activities), the minimal detectable change was 0.36, and we observed no floor or ceiling effects. The LSA showed adequate reliability, validity and interpretability for life-space mobility assessment of Brazilian community-dwelling older adults. Geriatr Gerontol Int 2018; 18: 783-789. © 2018 Japan Geriatrics Society.
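The reliability indices reported above are related by standard formulas (assumed here, not quoted from the paper): SEM = SD·sqrt(1 − ICC) and MDC95 = 1.96·sqrt(2)·SEM, as in the toy computation below with hypothetical LSA scores.

```python
import numpy as np

scores = np.array([64, 72, 48, 90, 56, 80, 60, 75], dtype=float)  # hypothetical LSA totals
icc = 0.97                                       # test-retest ICC (illustrative value)
sd = scores.std(ddof=1)
sem = sd * np.sqrt(1 - icc)                      # standard error of measurement
mdc95 = 1.96 * np.sqrt(2) * sem                  # minimal detectable change at 95% confidence
print(f"SD = {sd:.2f}, SEM = {sem:.2f}, MDC95 = {mdc95:.2f}")
```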
Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S
2014-09-01
Many papers have introduced adaptive clinical trial methods that allow modifications to the sample size based on interim estimates of treatment effect. There has been extensive commentary on type I error control and efficiency considerations, but little research on estimation after an adaptive hypothesis test. We evaluate the reliability and precision of different inferential procedures in the presence of an adaptive design with pre-specified rules for modifying the sampling plan. We extend group sequential orderings of the outcome space based on the stage at stopping, likelihood ratio statistic, and sample mean to the adaptive setting in order to compute median-unbiased point estimates, exact confidence intervals, and P-values uniformly distributed under the null hypothesis. The likelihood ratio ordering is found to average shorter confidence intervals and produce higher probabilities of P-values below important thresholds than alternative approaches. The bias adjusted mean demonstrates the lowest mean squared error among candidate point estimates. A conditional error-based approach in the literature has the benefit of being the only method that accommodates unplanned adaptations. We compare the performance of this and other methods in order to quantify the cost of failing to plan ahead in settings where adaptations could realistically be pre-specified at the design stage. We find the cost to be meaningful for all designs and treatment effects considered, and to be substantial for designs frequently proposed in the literature. © 2014, The International Biometric Society.
Parental language and dosing errors after discharge from the pediatric emergency department.
Samuels-Kalow, Margaret E; Stack, Anne M; Porter, Stephen C
2013-09-01
Safe and effective care after discharge requires parental education in the pediatric emergency department (ED). Parent-provider communication may be more difficult with parents who have limited health literacy or English-language fluency. This study examined the relationship between language and discharge comprehension regarding medication dosing. We completed a prospective observational study of the ED discharge process using a convenience sample of English- and Spanish-speaking parents of children 2 to 24 months presenting to a single tertiary care pediatric ED with fever and/or respiratory illness. A bilingual research assistant interviewed parents to ascertain their primary language and health literacy and observed the discharge process. The primary outcome was parental demonstration of an incorrect dose of acetaminophen for the weight of his or her child. A total of 259 parent-child dyads were screened. There were 210 potential discharges, and 145 (69%) of 210 completed the postdischarge interview. Forty-six parents (32%) had an acetaminophen dosing error. Spanish-speaking parents were significantly more likely to have a dosing error (odds ratio, 3.7; 95% confidence interval, 1.6-8.1), even after adjustment for language of discharge, income, and parental health literacy (adjusted odds ratio, 6.7; 95% confidence interval, 1.4-31.7). Current ED discharge communication results in a significant disparity between English- and Spanish-speaking parents' comprehension of a crucial aspect of medication safety. These differences were not explained purely by interpretation, suggesting that interventions to improve comprehension must address factors beyond language alone.
Morales-González, María Fernanda; Galiano Gálvez, María Alejandra
2017-09-08
Our institution implemented the use of pre-designed labeling of intravenous drugs and fluids, administration routes and infusion pumps to prevent medication errors. The aims were to evaluate the effectiveness of pre-designed labeling in reducing medication errors in the preparation and administration stages of prescribed medication in patients hospitalized with invasive lines, and to characterize the medication errors. This is a pre/post intervention study. Pre-intervention group: doses administered invasively from July 1st to December 31st, 2014, using traditional labeling (handwritten notes on adhesive paper). Post-intervention group: doses administered from January 1st to June 30th, 2015, using pre-designed labeling (adhesive labels with preset data, colour-grouped by drug, and coloured labels for invasive lines). Outcome: medication errors in hospitalized patients, measured with a notification form and electronic records. Tabulation and analysis were performed in Stata 10, with descriptive statistics, hypothesis testing and risk estimation with 95% confidence. In the pre-intervention group, 5,819 doses of drugs were administered invasively in 634 patients, with an error rate of 1.4 per 1,000 administrations. The post-intervention group comprised 8,585 doses administered to 1,088 patients by similar routes of administration; the error rate was 0.3 per 1,000 (p = 0.034). Patients receiving medication through an invasive route without pre-designed labeling had 4.6 times the risk of a medication error compared with those for whom pre-designed labels were used (95% CI: 1.25 to 25.4). The adult critically ill patient unit had the highest proportion of medication errors. The most frequent error was administration of a wrong dose, and 41.2% of errors produced harm to the patient. The use of pre-designed labeling on invasive lines reduces medication errors in the last two phases: preparation and administration.
Kuselman, Ilya; Pennecchi, Francesca; Epstein, Malka; Fajgelj, Ales; Ellison, Stephen L R
2014-12-01
Monte Carlo simulation of expert judgments on human errors in a chemical analysis was used for determination of distributions of the error quantification scores (scores of likelihood and severity, and scores of effectiveness of a laboratory quality system in prevention of the errors). The simulation was based on modeling of an expert behavior: confident, reasonably doubting and irresolute expert judgments were taken into account by means of different probability mass functions (pmfs). As a case study, 36 scenarios of human errors which may occur in elemental analysis of geological samples by ICP-MS were examined. Characteristics of the score distributions for three pmfs of an expert behavior were compared. Variability of the scores, as standard deviation of the simulated score values from the distribution mean, was used for assessment of the score robustness. A range of the score values, calculated directly from elicited data and simulated by a Monte Carlo method for different pmfs, was also discussed from the robustness point of view. It was shown that robustness of the scores, obtained in the case study, can be assessed as satisfactory for the quality risk management and improvement of a laboratory quality system against human errors. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi
2018-05-01
The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), which is caused by the algorithm adopted for the VC model; 2) discrete error (DE), which is usually caused by DEM resolution and terrain complexity; and 3) propagation error (PE), which is caused by errors in the variables. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, a way to quantify the uncertainty of VCs with a confidence interval based on truncation error (TE) is proposed. In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated from a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte-Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich the theory of uncertainty modelling and analysis in geographic information science.
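The two quadrature rules named above can be compared directly on a Gauss synthetic surface, for which the true volume over the grid extent is available in closed form; the grid spacing and surface parameters below are illustrative, not those of the paper's six simulated DEMs.

```python
import numpy as np
from math import erf, pi, sqrt
from scipy.integrate import simpson, trapezoid

A, sx, sy = 50.0, 200.0, 150.0                      # Gaussian surface height and spreads (m)
x = np.arange(-600.0, 600.1, 10.0)                  # 10 m regular grid
y = np.arange(-600.0, 600.1, 10.0)
X, Y = np.meshgrid(x, y)
Z = A * np.exp(-X**2 / (2 * sx**2) - Y**2 / (2 * sy**2))

# Analytic volume of the Gaussian over the finite grid extent (the theoretical true value).
v_true = A * (sx * sqrt(2 * pi) * erf(600 / (sx * sqrt(2)))) \
           * (sy * sqrt(2 * pi) * erf(600 / (sy * sqrt(2))))

v_tdr = trapezoid(trapezoid(Z, x=x, axis=1), x=y)   # trapezoidal double rule
v_sdr = simpson(simpson(Z, x=x, axis=1), x=y)       # Simpson's double rule
print(f"true = {v_true:.6e}")
print(f"TDR relative error = {abs(v_tdr - v_true) / v_true:.2e}")
print(f"SDR relative error = {abs(v_sdr - v_true) / v_true:.2e}")
```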
Guo, Ying; Little, Roderick J; McConnell, Daniel S
2012-01-01
Covariate measurement error is common in epidemiologic studies. Current methods for correcting measurement error with information from external calibration samples are insufficient to provide valid adjusted inferences. We consider the problem of estimating the regression of an outcome Y on covariates X and Z, where Y and Z are observed, X is unobserved, but a variable W that measures X with error is observed. Information about measurement error is provided in an external calibration sample where data on X and W (but not Y and Z) are recorded. We describe a method that uses summary statistics from the calibration sample to create multiple imputations of the missing values of X in the regression sample, so that the regression coefficients of Y on X and Z and associated standard errors can be estimated using simple multiple imputation combining rules, yielding valid statistical inferences under the assumption of a multivariate normal distribution. The proposed method is shown by simulation to provide better inferences than existing methods, namely the naive method, classical calibration, and regression calibration, particularly for correction for bias and achieving nominal confidence levels. We also illustrate our method with an example using linear regression to examine the relation between serum reproductive hormone concentrations and bone mineral density loss in midlife women in the Michigan Bone Health and Metabolism Study. Existing methods fail to adjust appropriately for bias due to measurement error in the regression setting, particularly when measurement error is substantial. The proposed method corrects this deficiency.
Coffey, Maitreya; Thomson, Kelly; Tallett, Susan; Matlow, Anne
2010-10-01
Although experts advise disclosing medical errors to patients, individual physicians' different levels of knowledge and comfort suggest a gap between recommendations and practice. This study explored pediatric residents' knowledge and attitudes about disclosure. In 2006, the authors of this single-center, mixed-methods study surveyed 64 pediatric residents at the University of Toronto and then held three focus groups with a total of 24 of those residents. Thirty-seven (58%) residents completed questionnaires. Most agreed that medical errors are one of the most serious problems in health care, that errors should be disclosed, and that disclosure would be difficult. When shown a scenario involving a medical error, over 90% correctly identified the error, but only 40% would definitely disclose it. Most would apologize, but far fewer would acknowledge harm if it occurred or use the word "mistake." Most had witnessed or performed a disclosure, but only 40% reported receiving teaching on disclosure. Most reported experiencing negative effects of errors, including anxiety and reduced confidence. Data from the focus groups emphasized the extent to which residents consider contextual information when making decisions around disclosure. Themes included their or their team's degree of responsibility for the error versus others, quality of team relationships, training level, existence of social boundaries, and their position within a hierarchy. These findings add to the understanding of facilitators and inhibitors of error disclosure and reporting. The influence of social context warrants further study and should be considered in medical curriculum design and hospital guideline implementation.
False recollection of the role played by an actor in an event
Earles, Julie L.; Upshaw, Christin
2013-01-01
Two experiments demonstrated that eyewitnesses more frequently associate an actor with the actions of another person when those two people had appeared together in the same event, rather than in different events. This greater likelihood of binding an actor with the actions of another person from the same event was associated with high-confidence recognition judgments and “remember” responses in a remember–know task, suggesting that viewing an actor together with the actions of another person led participants to falsely recollect having seen that actor perform those actions. An analysis of age differences provided evidence that familiarity also contributed to false recognition independently of a false-recollection mechanism. In particular, older adults were more likely than young adults to falsely recognize a novel conjunction of a familiar actor and action, regardless of whether that actor and action were from the same or from different events. Older adults’ elevated rate of false recognition was associated with intermediate confidence levels, suggesting that it stemmed from increased reliance on familiarity rather than from false recollection. The implications of these results are discussed for theories of conjunction errors in memory and of unconscious transference in eyewitness testimony. PMID:23722927
Ishwaran, Hemant; Lu, Min
2018-06-04
Random forests are a popular nonparametric tree ensemble procedure with broad applications to data analysis. While its widespread popularity stems from its prediction performance, an equally important feature is that it provides a fully nonparametric measure of variable importance (VIMP). A current limitation of VIMP, however, is that no systematic method exists for estimating its variance. As a solution, we propose a subsampling approach that can be used to estimate the variance of VIMP and for constructing confidence intervals. The method is general enough that it can be applied to many useful settings, including regression, classification, and survival problems. Using extensive simulations, we demonstrate the effectiveness of the subsampling estimator and in particular find that the delete-d jackknife variance estimator, a close cousin, is especially effective under low subsampling rates due to its bias correction properties. These 2 estimators are highly competitive when compared with the .164 bootstrap estimator, a modified bootstrap procedure designed to deal with ties in out-of-sample data. Most importantly, subsampling is computationally fast, thus making it especially attractive for big data settings. Copyright © 2018 John Wiley & Sons, Ltd.
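The subsampling variance estimator itself is generic. The sketch below applies it to a simple scalar statistic (a sample correlation standing in for a VIMP value) so that it stays self-contained, and it omits the delete-d jackknife bias correction mentioned in the abstract; the sample size, subsample size b and number of subsamples B are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def statistic(data):
    """Any scalar estimator; here a sample correlation stands in for a VIMP value."""
    return np.corrcoef(data[:, 0], data[:, 1])[0, 1]

n = 2000
x = rng.normal(size=n)
data = np.column_stack([x, 0.6 * x + rng.normal(size=n)])

theta_hat = statistic(data)

# Subsampling: recompute the statistic on B subsamples of size b << n (without replacement),
# then rescale the spread of the subsample estimates by b/n.
b, B = 200, 500
sub_estimates = np.array([
    statistic(data[rng.choice(n, size=b, replace=False)]) for _ in range(B)
])
se = np.sqrt((b / n) * np.var(sub_estimates, ddof=1))
print(f"estimate = {theta_hat:.3f}, subsampling SE = {se:.3f}, "
      f"95% CI = [{theta_hat - 1.96 * se:.3f}, {theta_hat + 1.96 * se:.3f}]")
```

Because the statistic is recomputed on subsamples rather than full bootstrap resamples, the cost scales with b instead of n, which is the computational advantage highlighted for big-data settings.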
Nguyen, Anthony N; Moore, Julie; O'Dwyer, John; Philpot, Shoni
2016-01-01
The paper assesses the utility of Medtex on automating Cancer Registry notifications from narrative histology and cytology reports from the Queensland state-wide pathology information system. A corpus of 45.3 million pathology HL7 messages (including 119,581 histology and cytology reports) from a Queensland pathology repository for the year of 2009 was analysed by Medtex for cancer notification. Reports analysed by Medtex were consolidated at a patient level and compared against patients with notifiable cancers from the Queensland Oncology Repository (QOR). A stratified random sample of 1,000 patients was manually reviewed by a cancer clinical coder to analyse agreements and discrepancies. Sensitivity of 96.5% (95% confidence interval: 94.5-97.8%), specificity of 96.5% (95.3-97.4%) and positive predictive value of 83.7% (79.6-86.8%) were achieved for identifying cancer notifiable patients. Medtex achieved high sensitivity and specificity across the breadth of cancers, report types, pathology laboratories and pathologists throughout the State of Queensland. The high sensitivity also resulted in the identification of cancer patients that were not found in the QOR. High sensitivity was at the expense of positive predictive value; however, these cases may be considered as lower priority to Cancer Registries as they can be quickly reviewed. Error analysis revealed that system errors tended to be tumour stream dependent. Medtex is proving to be a promising medical text analytic system. High value cancer information can be generated through intelligent data classification and extraction on large volumes of unstructured pathology reports. PMID:28269893
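For readers who want to reproduce the style of interval reported here (a proportion with a 95% confidence interval), the sketch below computes sensitivity, specificity and positive predictive value with Wilson score intervals from a patient-level confusion matrix. The counts are invented for illustration and are not the Medtex/QOR figures, and the paper does not state which interval method it used.

```python
import numpy as np
from scipy.stats import norm

def wilson_ci(k, n, alpha=0.05):
    """Wilson score interval for a binomial proportion k/n."""
    z = norm.ppf(1 - alpha / 2)
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Illustrative patient-level confusion counts (assumed, not from the study): TP, FN, FP, TN
tp, fn, fp, tn = 480, 17, 93, 410

for name, k, n in [("sensitivity", tp, tp + fn),
                   ("specificity", tn, tn + fp),
                   ("PPV",         tp, tp + fp)]:
    lo, hi = wilson_ci(k, n)
    print(f"{name:>11}: {k / n:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```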
Burns, W.J.; Coe, J.A.; Kaya, B.S.; Ma, Liwang
2010-01-01
We examined elevation changes detected from two successive sets of Light Detection and Ranging (LiDAR) data in the northern Coast Range of Oregon. The first set of LiDAR data was acquired during leaf-on conditions and the second set during leaf-off conditions. We were able to successfully identify and map active landslides using a differential digital elevation model (DEM) created from the two LiDAR data sets, but this required the use of thresholds (0.50 and 0.75 m) to remove noise from the differential elevation data, visual pattern recognition of landslide-induced elevation changes, and supplemental QuickBird satellite imagery. After mapping, we field-verified 88 percent of the landslides that we had mapped with high confidence, but we could not detect active landslides with elevation changes of less than 0.50 m. Volumetric calculations showed that a total of about 18,100 m3 of material was missing from landslide areas, probably as a result of systematic negative elevation errors in the differential DEM and as a result of removal of material by erosion and transport. We also examined the accuracies of 285 leaf-off LiDAR elevations at four landslide sites using Global Positioning System and total station surveys. A comparison of LiDAR and survey data indicated an overall root mean square error of 0.50 m, a maximum error of 2.21 m, and a systematic error of 0.09 m. LiDAR ground-point densities were lowest in areas with young conifer forests and deciduous vegetation, which resulted in extensive interpolations of elevations in the leaf-on, bare-earth DEM. For optimal use of multi-temporal LiDAR data in forested areas, we recommend that all data sets be flown during leaf-off seasons.
Report of the Odyssey FPGA Independent Assessment Team
NASA Technical Reports Server (NTRS)
Mayer, Donald C.; Katz, Richard B.; Osborn, Jon V.; Soden, Jerry M.; Barto, R.; Day, John H. (Technical Monitor)
2001-01-01
An independent assessment team (IAT) was formed and met on April 2, 2001, at Lockheed Martin in Denver, Colorado, to aid in understanding a technical issue for the Mars Odyssey spacecraft scheduled for launch on April 7, 2001. An RP1280A field-programmable gate array (FPGA) from a lot of parts common to the SIRTF, Odyssey, and Genesis missions had failed on a SIRTF printed circuit board. A second FPGA from an earlier Odyssey circuit board was also known to have failed and was also included in the analysis by the IAT. Observations indicated an abnormally high failure rate for flight RP1280A devices (the first flight lot produced using this flow) at Lockheed Martin, and the causes of these failures were not determined. Standard failure analysis techniques were applied to these parts; however, additional diagnostic techniques unique to devices of this class were not used, and the parts were prematurely submitted to a destructive physical analysis, making a determination of the root cause of failure difficult. Any of several potential failure scenarios may have caused these failures, including electrostatic discharge, electrical overstress, manufacturing defects, board design errors, board manufacturing errors, FPGA design errors, or programmer errors. Several of these mechanisms would have relatively benign consequences for disposition of the parts currently installed on boards in the Odyssey spacecraft if established as the root cause of failure. However, other potential failure mechanisms could have more dire consequences. As there is no simple way to determine the likely failure mechanisms with reasonable confidence before Odyssey launch, it is not possible for the IAT to recommend a disposition for the other parts on boards in the Odyssey spacecraft based on sound engineering principles.
Senjam, Suraj Singh; Vashist, Praveen; Gupta, Noopur; Malhotra, Sumit; Misra, Vasundhara; Bhardwaj, Amit; Gupta, Vivek
2016-05-01
To estimate the prevalence of visual impairment (VI) due to uncorrected refractive error (URE) and to assess the barriers to utilization of services in the adult urban population of Delhi. A population-based rapid assessment of VI was conducted among people aged 40 years and above in 24 randomly selected clusters of East Delhi district. Presenting visual acuity (PVA) was assessed in each eye using Snellen's "E" chart. Pinhole examination was done if PVA was <20/60 in either eye, followed by ocular examination to ascertain the cause of VI. Barriers to utilization of services for refractive error were recorded with questionnaires. Of 2421 individuals enumerated, 2331 (96%) individuals were examined. Females made up 50.7% of those examined. The mean age of all examined subjects was 51.32 ± 10.5 years (standard deviation). VI in either eye due to URE was present in 275 individuals (11.8%, 95% confidence interval [CI]: 10.5-13.1). URE was identified as the most common cause (53.4%) of VI. The overall prevalence of VI due to URE in the study population was 6.1% (95% CI: 5.1-7.0). Older people and females were more likely to have VI due to URE (odds ratio [OR] = 12.3; P < 0.001 and OR = 1.5; P < 0.02). Lack of felt need was the most common reported barrier (31.5%). The prevalence of VI due to URE among the urban adult population of Delhi is still high despite the availability of abundant eye care facilities. The majority of reported barriers are related to human behavior and attitudes toward refractive error. Understanding these aspects will help in planning appropriate strategies to eliminate VI due to URE.
Accurate typing of short tandem repeats from genome-wide sequencing data and its applications.
Fungtammasan, Arkarachai; Ananda, Guruprasad; Hile, Suzanne E; Su, Marcia Shu-Wei; Sun, Chen; Harris, Robert; Medvedev, Paul; Eckert, Kristin; Makova, Kateryna D
2015-05-01
Short tandem repeats (STRs) are implicated in dozens of human genetic diseases and contribute significantly to genome variation and instability. Yet profiling STRs from short-read sequencing data is challenging because of their high sequencing error rates. Here, we developed STR-FM, short tandem repeat profiling using flank-based mapping, a computational pipeline that can detect the full spectrum of STR alleles from short-read data, can adapt to emerging read-mapping algorithms, and can be applied to heterogeneous genetic samples (e.g., tumors, viruses, and genomes of organelles). We used STR-FM to study STR error rates and patterns in publicly available human and in-house generated ultradeep plasmid sequencing data sets. We discovered that STRs sequenced with a PCR-free protocol have up to ninefold fewer errors than those sequenced with a PCR-containing protocol. We constructed an error correction model for genotyping STRs that can distinguish heterozygous alleles containing STRs with consecutive repeat numbers. Applying our model and pipeline to Illumina sequencing data with 100-bp reads, we could confidently genotype several disease-related long trinucleotide STRs. Utilizing this pipeline, for the first time we determined the genome-wide STR germline mutation rate from a deeply sequenced human pedigree. Additionally, we built a tool that recommends minimal sequencing depth for accurate STR genotyping, depending on repeat length and sequencing read length. The required read depth increases with STR length and is lower for a PCR-free protocol. This suite of tools addresses the pressing challenges surrounding STR genotyping, and thus is of wide interest to researchers investigating disease-related STRs and STR evolution. © 2015 Fungtammasan et al.; Published by Cold Spring Harbor Laboratory Press.
NASA Technical Reports Server (NTRS)
Grimes-Ledesma, Lorie; Murthy, Pappu L. N.; Phoenix, S. Leigh; Glaser, Ronald
2007-01-01
In conjunction with a recent NASA Engineering and Safety Center (NESC) investigation of flight worthiness of Kevlar Overwrapped Composite Pressure Vessels (COPVs) on board the Orbiter, two stress rupture life prediction models were proposed independently by Phoenix and by Glaser. In this paper, the use of these models to determine the system reliability of 24 COPVs currently in service on board the Orbiter is discussed. The models are briefly described, compared to each other, and model parameters and parameter uncertainties are also reviewed to understand confidence in reliability estimation as well as the sensitivities of these parameters in influencing overall predicted reliability levels. Differences and similarities in the various models will be compared via stress rupture reliability curves (stress ratio vs. lifetime plots). Also outlined will be the differences in the underlying model premises, and predictive outcomes. Sources of error and sensitivities in the models will be examined and discussed based on sensitivity analysis and confidence interval determination. Confidence interval results and their implications will be discussed for the models by Phoenix and Glaser.
Estimation of 3D reconstruction errors in a stereo-vision system
NASA Astrophysics Data System (ADS)
Belhaoua, A.; Kohler, S.; Hirsch, E.
2009-06-01
The paper presents an approach for error estimation for the various steps of an automated 3D vision-based reconstruction procedure for manufactured workpieces. The process is based on a priori planning of the task and built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes that sequentially perform data acquisition, its quantitative evaluation and the comparison with a reference model (e.g., CAD object model) in order to evaluate the object quantitatively. To ensure efficient quality control, the aim is to be able to state whether or not reconstruction results fulfill tolerance rules. Thus, the goal is to evaluate independently the error for each step of the stereo-vision based 3D reconstruction (e.g., for calibration, contour segmentation, matching and reconstruction) and then to estimate the error for the whole system. In this contribution, we analyze in particular the segmentation error due to localization errors for extracted edge points supposed to belong to lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as a quality measure to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, enabling evaluation of their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates enables evaluation of the quality of the 3D reconstruction, as illustrated by the experimental results shown.
The impact of work-related stress on medication errors in Eastern Region Saudi Arabia.
Salam, Abdul; Segal, David M; Abu-Helalah, Munir Ahmad; Gutierrez, Mary Lou; Joosub, Imran; Ahmed, Wasim; Bibi, Rubina; Clarke, Elizabeth; Qarni, Ali Ahmed Al
2018-05-07
To examine the relationship between overall and source-specific work-related stressors and the medication error rate. A cross-sectional study examined the relationship between overall levels of stress, 25 source-specific work-related stressors and the medication error rate based on documented incident reports in a Saudi Arabian (SA) hospital, using secondary databases. Setting: King Abdulaziz Hospital in Al-Ahsa, Eastern Region, SA. Participants: two hundred and sixty-nine healthcare professionals (HCPs). Outcome measures were the odds ratio (OR) and corresponding 95% confidence interval (CI) for HCPs' documented incident-report medication errors and self-reported sources of stress from the Job Stress Survey. Multiple logistic regression analysis identified source-specific work-related stress as significantly associated with HCPs who made at least one medication error per month (P < 0.05), including disruption to home life, pressure to meet deadlines, difficulties with colleagues, excessive workload, income over 10 000 riyals and compulsory night/weekend call duties either some or all of the time. Although not statistically significant, HCPs who reported overall stress were two times more likely to make at least one medication error per month than non-stressed HCPs (OR: 1.95, P = 0.081). This is the first study to use documented incident reports for medication errors rather than self-report to evaluate the level of stress-related medication errors in SA HCPs. Job demands, such as social stressors (home life disruption, difficulties with colleagues), time pressures, structural determinants (compulsory night/weekend call duties) and higher income, were significantly associated with medication errors, whereas overall stress showed a non-significant two-fold trend.
Rußig, Lorenz L; Schulze, Ralf K W
2013-12-01
The goal of the present study was to develop a theoretical analysis of errors in implant position, which can occur owing to minute registration errors of a reference marker in a cone beam computed tomography volume when inserting an implant with a surgical stent. A virtual dental-arch model was created using anatomic data derived from the literature. Basic trigonometry was used to compute the effects of defined minute registration errors of only one voxel in size. The errors occurring at the implant's neck and apex in both the horizontal and vertical directions were computed for mean ±95%-confidence intervals of jaw width and length and typical implant lengths (8, 10 and 12 mm). The largest errors occur in the vertical direction for larger voxel sizes and for greater arch dimensions. For a 10 mm implant in the frontal region, these can amount to a mean of 0.716 mm (range: 0.201-1.533 mm). Horizontal errors at the neck are negligible, with a mean overall deviation of 0.009 mm (range: 0.001-0.034 mm). Errors increase with distance to the registration marker and voxel size and are affected by implant length. Our study shows that minute and realistic errors occurring in the automated registration of a reference object have an impact on the implant's position and angulation. These errors occur in the fundamental initial step of the long planning chain; thus, they are critical and should be made known to users of these systems. © 2012 John Wiley & Sons A/S.
Error Estimation for the Linearized Auto-Localization Algorithm
Guevara, Jorge; Jiménez, Antonio R.; Prieto, Jose Carlos; Seco, Fernando
2012-01-01
The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons’ positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method. PMID:22736965
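The error-propagation step rests on a first-order Taylor (delta-method) approximation. The sketch below shows that general idea with a numerical Jacobian applied to a toy inter-beacon distance computed from noisy 2-D coordinates; it is not the LAL trilateration equations themselves, and the coordinates and noise level are assumptions for illustration.

```python
import numpy as np

def propagate(f, x, cov_x, eps=1e-6):
    """First-order (Taylor) propagation: cov_f ~ J cov_x J^T, with a central-difference Jacobian."""
    x = np.asarray(x, dtype=float)
    f0 = np.atleast_1d(f(x))
    J = np.zeros((f0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (np.atleast_1d(f(x + dx)) - np.atleast_1d(f(x - dx))) / (2 * eps)
    return f0, J @ cov_x @ J.T

# Toy example: distance between two beacons computed from noisy 2-D coordinates.
def inter_beacon_distance(p):
    return np.hypot(p[0] - p[2], p[1] - p[3])

coords = np.array([0.0, 0.0, 4.0, 3.0])       # (x1, y1, x2, y2), metres (assumed)
cov = np.diag([0.05**2] * 4)                  # 5 cm standard deviation per coordinate (assumed)

d, cov_d = propagate(inter_beacon_distance, coords, cov)
print(f"distance = {d[0]:.3f} m ± {np.sqrt(cov_d[0, 0]):.3f} m (1-sigma, first-order)")
```

A confidence parameter such as the paper's τ would then gauge how far the true nonlinearity departs from this linearised approximation.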
Linear regression in astronomy. II
NASA Technical Reports Server (NTRS)
Feigelson, Eric D.; Babu, Gutti J.
1992-01-01
A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.
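As a concrete reminder of what a simultaneous band for a fitted line looks like, the sketch below computes the classical Working-Hotelling band for an ordinary least-squares calibration line on simulated data; the paper's generalization to nonstandard least-squares lines is not reproduced here, and the data are illustrative.

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(2)

# Illustrative calibration-style data
x = np.linspace(0.0, 10.0, 30)
y = 1.5 + 0.8 * x + rng.normal(scale=0.7, size=x.size)

n = x.size
sxx = np.sum((x - x.mean()) ** 2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx
b0 = y.mean() - b1 * x.mean()
resid = y - (b0 + b1 * x)
s = np.sqrt(resid @ resid / (n - 2))

# Working-Hotelling multiplier: the band holds simultaneously over the whole line.
W = np.sqrt(2 * f_dist.ppf(0.95, 2, n - 2))
for xg in np.linspace(x.min(), x.max(), 5):
    y_hat = b0 + b1 * xg
    half = W * s * np.sqrt(1 / n + (xg - x.mean()) ** 2 / sxx)
    print(f"x={xg:5.2f}: fit={y_hat:6.3f}, band = [{y_hat - half:6.3f}, {y_hat + half:6.3f}]")
```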
Lamba, Sangeeta; Wilson, Bryan; Natal, Brenda; Nagurka, Roxanne; Anana, Michael; Sule, Harsh
2016-01-01
An increasing number of students rank Emergency Medicine (EM) as a top specialty choice, requiring medical schools to provide adequate exposure to EM. The Core Entrustable Professional Activities (EPAs) for Entering Residency by the Association of American Medical Colleges combined with the Milestone Project for EM residency training has attempted to standardize the undergraduate and graduate medical education goals. However, it remains unclear as to how the EPAs correlate to the milestones, and who owns the process of ensuring that an entering EM resident has competency at a certain minimum level. Recent trends establishing specialty-specific boot camps prepare students for residency and address the variability of skills of students coming from different medical schools. Our project's goal was therefore to perform a needs assessment to inform the design of an EM boot camp curriculum. Toward this goal, we 1) mapped the core EPAs for graduating medical students to the EM residency Level 1 milestones in order to identify the possible gaps/needs and 2) conducted a pilot procedure workshop that was designed to address some of the identified gaps/needs in procedural skills. In order to inform the curriculum of an EM boot camp, we used a systematic approach to 1) identify gaps between the EPAs and EM milestones (Level 1) and 2) determine what essential and supplemental competencies/skills an incoming EM resident should ideally possess. We then piloted a 1-day, three-station advanced ABCs procedure workshop based on the identified needs. A pre-workshop test and survey assessed knowledge, preparedness, confidence, and perceived competence. A post-workshop survey evaluated the program, and a posttest combined with psychomotor skills test using three simulation cases assessed students' skills. Students (n=9) reported increased confidence in the following procedures: intubation (1.5-2.1), thoracostomy (1.1-1.9), and central venous catheterization (1.3-2) (a three-point Likert-type scale, with 1= not yet confident/able to perform with supervision to 3= confident/able to perform without supervision). Psychomotor skills testing showed on average, 26% of students required verbal prompting with performance errors, 48% with minor performance errors, and 26% worked independently without performance errors. All participants reported: 1) increased knowledge and confidence in covered topics and 2) overall satisfaction with simulation experience. Mapping the Core EPAs for Entering Residency to the EM milestones at Level 1 identifies educational gaps for graduating medical students seeking a career in EM. Educators designing EM boot camps for medical students should consider these identified gaps, procedures, and clinical conditions during the development of a core standardized curriculum.
Endoscopic Stone Measurement During Ureteroscopy.
Ludwig, Wesley W; Lim, Sunghwan; Stoianovici, Dan; Matlaga, Brian R
2018-01-01
Currently, stone size cannot be accurately measured while performing flexible ureteroscopy (URS). We developed novel software for ureteroscopic, stone size measurement, and then evaluated its performance. A novel application capable of measuring stone fragment size, based on the known distance of the basket tip in the ureteroscope's visual field, was designed and calibrated in a laboratory setting. Complete URS procedures were recorded and 30 stone fragments were extracted and measured using digital calipers. The novel software program was applied to the recorded URS footage to obtain ureteroscope-derived stone size measurements. These ureteroscope-derived measurements were then compared with the actual-measured fragment size. The median longitudinal and transversal errors were 0.14 mm (95% confidence interval [CI] 0.1, 0.18) and 0.09 mm (95% CI 0.02, 0.15), respectively. The overall software accuracy and precision were 0.17 and 0.15 mm, respectively. The longitudinal and transversal measurements obtained by the software and digital calipers were highly correlated (r = 0.97 and 0.93). Neither stone size nor stone type was correlated with error measurements. This novel method and software reliably measured stone fragment size during URS. The software ultimately has the potential to make URS safer and more efficient.
A systematic uncertainty analysis for liner impedance eduction technology
NASA Astrophysics Data System (ADS)
Zhou, Lin; Bodén, Hans
2015-11-01
The so-called impedance eduction technology is widely used for obtaining acoustic properties of liners used in aircraft engines. The measurement uncertainties for this technology are still not well understood though it is essential for data quality assessment and model validation. A systematic framework based on multivariate analysis is presented in this paper to provide 95 percent confidence interval uncertainty estimates in the process of impedance eduction. The analysis is made using a single mode straightforward method based on transmission coefficients involving the classic Ingard-Myers boundary condition. The multivariate technique makes it possible to obtain an uncertainty analysis for the possibly correlated real and imaginary parts of the complex quantities. The results show that the errors in impedance results at low frequency mainly depend on the variability of transmission coefficients, while the mean Mach number accuracy is the most important source of error at high frequencies. The effect of Mach numbers used in the wave dispersion equation and in the Ingard-Myers boundary condition has been separated for comparison of the outcome of impedance eduction. A local Mach number based on friction velocity is suggested as a way to reduce the inconsistencies found when estimating impedance using upstream and downstream acoustic excitation.
Aircrew perceived stress: examining crew performance, crew position and captains personality.
Bowles, S; Ursin, H; Picano, J
2000-11-01
This study was conducted at NASA Ames Research Center as a part of a larger research project assessing the impact of captain's personality on crew performance and perceived stress in 24 air transport crews (5). Three different personality types for captains were classified based on a previous cluster analysis (3). Crews were comprised of three crewmembers: captain, first officer, and second officer/flight engineer. A total of 72 pilots completed a 1.5-d full-mission simulation of airline operations including emergency situations in the Ames Manned Vehicle System Research Facility B-727 simulator. Crewmembers were tested for perceived stress on four dimensions of the NASA Task Load Index after each of five flight legs. Crews were divided into three groups based on rankings from combined error and rating scores. High performance crews (who committed the least errors in flight) reported experiencing less stress in simulated flight than either low or medium crews. When comparing crew positions for perceived stress over all the simulated flights no significant differences were found. However, the crews led by the "Right Stuff" (e.g., active, warm, confident, competitive, and preferring excellence and challenges) personality type captains typically reported less stress than crewmembers led by other personality types.
NASA Astrophysics Data System (ADS)
Zimmermann, Jesko; Jones, Michael
2016-04-01
Agriculture can be a significant contributor to greenhouse gas emissions; this is especially true in Ireland, where the agricultural sector accounts for a third of total emissions. The high emissions are linked to both the importance of agriculture in the Irish economy and the focus on dairy and beef production. In order to reduce emissions, three main categories are explored: (1) reduction of methane emissions from cattle, (2) reduction of nitrous oxide emissions from fertilisation, and (3) fostering the carbon sequestration potential of soils. The presented research focuses on the latter two categories, especially changes in fertiliser amount and composition. Soil properties and climate conditions measured at the four experimental sites (two silage and two spring barley) were used to parameterise four biogeochemical models (DayCent, ECOSSE, DNDC 9.4, and DNDC 9.5). All sites had a range of different fertiliser regimes applied. This included changes in amount (0 to 500 kg N/ha on grassland and 0 to 200 kg N/ha on arable fields), fertiliser type (calcium ammonium nitrate and urea), and added inhibitors (the nitrification inhibitor DCD, and the urease inhibitor Agrotain). Overall, 20 different treatments were applied to the grassland sites, and 17 to the arable sites. Nitrous oxide emissions, measured in 2013 and 2014 at all sites using closed chambers, were made available to validate model results for these emissions. To assess model performance for the daily measurements, the Root Mean Square Error (RMSE) was compared to the 95% confidence interval of the measured data (RMSE95). Bias was tested by comparing the relative error (RE) to the 95% confidence interval of the relative error (RE95). Preliminary results show mixed model performance, depending on the model, site, and the fertiliser regime. However, with the exception of urea fertilisation and added inhibitors, all scenarios were reproduced by at least one model with no statistically significant total error (RMSE < RMSE95) or bias (RE < RE95). A general trend observed was that model performance declined with increased fertilisation rates. Overall, DayCent showed the best performance; however, it does not allow modelling of the added urease inhibitors. The results suggest that modelling changes in fertiliser regime on a large scale may require a multi-model approach to assure best performance. Ultimately, the research aims to develop a GIS-based platform to apply such an approach on a regional scale.
Pre- and Postcycloplegic Refractions in Children and Adolescents.
Zhu, Dan; Wang, Yan; Yang, Xianrong; Yang, Dayong; Guo, Kai; Guo, Yuanyuan; Jing, Xinxia; Pan, Chen-Wei
2016-01-01
To determine the difference between cycloplegic and non-cycloplegic refractive error and its associated factors in Chinese children and adolescents with a high prevalence of myopia. A school-based study including 1565 students aged 6 to 21 years was conducted in 2013 in Ejina, Inner Mongolia, China. Comprehensive eye examinations were performed. Pre- and postcycloplegic refractive errors were measured using an auto-refractor. For cycloplegic refraction, one drop of topical 1.0% cyclopentolate was administered to each eye twice with a 5-minute interval and a third drop was administered 15 minutes after the second drop if the pupil size was less than 6 mm or if the pupillary light reflex was still present. Two drops of cyclopentolate were found to be sufficient in 59% of the study participants while the other 41% needed an additional drop. The prevalence of myopia was 89.5% in participants aged over 12 years and 68.6% in those aged 12 years or younger (P<0.001). When myopia was defined as spherical equivalent (SE) of less than -0.5 diopter (D), the prevalence estimates were 76.7% (95% confidence interval [CI] 74.6-78.8) and 54.1% (95%CI 51.6-56.6) before and after cycloplegic refraction, respectively. When hyperopia was defined as SE of more than 0.5D, the prevalence was only 2.8% (95%CI 1.9-3.6) before cycloplegic refraction while it was 15.5% (95%CI 13.7-17.3) after cycloplegic refraction. Increased difference between cycloplegic and non-cycloplegic refractive error was associated with decreased intraocular pressures (P = 0.01). Lack of cycloplegia in refractive error measurement was associated with significant misclassifications in both myopia and hyperopia among Chinese children and adolescents. Decreased intraocular pressure was related to a greater difference between cycloplegic and non-cycloplegic refractive error.
NASA Astrophysics Data System (ADS)
Marín, Julio C.; Pozo, Diana; Curé, Michel
2015-01-01
In this work, we describe a method to estimate the precipitable water vapor (PWV) from Geostationary Operational Environmental Satellite (GOES) data at high altitude sites. The method was applied at Atacama Pathfinder Experiment (APEX) and Cerro Toco sites, located above 5000 m altitude in the Chajnantor plateau, in the north of Chile. It was validated using GOES-12 satellite data over the range 0-1.2 mm since submillimeter/millimeter astronomical observations are only useful within this PWV range. The PWV estimated from GOES and the Final Analyses (FNL) at APEX for 2007 and 2009 show root mean square error values of 0.23 mm and 0.36 mm over the ranges 0-0.4 mm and 0.4-1.2 mm, respectively. However, absolute relative errors of 51% and 33% were shown over these PWV ranges, respectively. We recommend using high-resolution thermodynamic profiles from the Global Forecast System (GFS) model to estimate the PWV from GOES data since they are available every three hours and at an earlier time than the FNL data. The estimated PWV from GOES/GFS agrees better with the observed PWV at both sites during night time. The largest errors are shown during daytime. Short-term PWV forecasts were implemented at both sites, applying a simple persistence method to the PWV estimated from GOES/GFS. The 12 h and 24 h PWV forecasts evaluated from August to October 2009 indicate that 25% of them show very good agreement with observations, whereas 50% show reasonably good agreement with observations. Transmission uncertainties calculated for PWV estimations and forecasts over the studied sites are larger over the range 0-0.4 mm than over the range 0.4-1.2 mm. Thus, the method can be used over the latter interval with more confidence.
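The persistence forecast used here is simple enough to sketch: the forecast for t + 12 h or t + 24 h is just the last available PWV estimate at time t. The synthetic series and the 0.1 mm agreement tolerance below are illustrative assumptions, not the paper's data or its "very good"/"reasonably good" criteria.

```python
import numpy as np

rng = np.random.default_rng(3)
hours = np.arange(0, 24 * 30, 3)                       # one value every 3 h for 30 days (illustrative)
pwv = np.clip(0.6 + 0.3 * np.sin(2 * np.pi * hours / 24)
              + 0.1 * rng.standard_normal(hours.size), 0, None)

def persistence_forecast(series, step, lead_hours):
    """Persistence: the forecast for t + lead is simply the last available value at t."""
    shift = lead_hours // step
    return series[:-shift], series[shift:]             # (forecast, verifying observation)

for lead in (12, 24):
    fcst, obs = persistence_forecast(pwv, step=3, lead_hours=lead)
    rmse = np.sqrt(np.mean((fcst - obs) ** 2))
    within = np.mean(np.abs(fcst - obs) <= 0.1)        # fraction within an assumed 0.1 mm tolerance
    print(f"{lead:>2} h persistence forecast: RMSE = {rmse:.3f} mm, within 0.1 mm: {within:.0%}")
```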
Wu, Nicholas C.; Young, Arthur P.; Al-Mawsawi, Laith Q.; Olson, C. Anders; Feng, Jun; Qi, Hangfei; Luan, Harding H.; Li, Xinmin; Wu, Ting-Ting
2014-01-01
Viral proteins often display several functions which require multiple assays to dissect their genetic basis. Here, we describe a systematic approach to screen for loss-of-function mutations that confer a fitness disadvantage under a specified growth condition. Our methodology was achieved by genetically monitoring a mutant library under two growth conditions, with and without interferon, by deep sequencing. We employed a molecular tagging technique to distinguish true mutations from sequencing error. This approach enabled us to identify mutations that were negatively selected against, in addition to those that were positively selected for. Using this technique, we identified loss-of-function mutations in the influenza A virus NS segment that were sensitive to type I interferon in a high-throughput fashion. Mechanistic characterization further showed that a single substitution, D92Y, resulted in the inability of NS to inhibit RIG-I ubiquitination. The approach described in this study can be applied under any specified condition for any virus that can be genetically manipulated. IMPORTANCE: Traditional genetics focuses on a single genotype-phenotype relationship, whereas high-throughput genetics permits phenotypic characterization of numerous mutants in parallel. High-throughput genetics often involves monitoring of a mutant library with deep sequencing. However, deep sequencing suffers from a high error rate (∼0.1 to 1%), which is usually higher than the occurrence frequency for individual point mutations within a mutant library. Therefore, only mutations that confer a fitness advantage can be identified with confidence due to an enrichment in the occurrence frequency. In contrast, it is impossible to identify deleterious mutations using most next-generation sequencing techniques. In this study, we have applied a molecular tagging technique to distinguish true mutations from sequencing errors. It enabled us to identify mutations that underwent negative selection, in addition to mutations that experienced positive selection. This study provides a proof of concept by screening for loss-of-function mutations on the influenza A virus NS segment that are involved in its anti-interferon activity. PMID:24965464
Refractive errors among children, adolescents and adults attending eye clinics in Mexico.
Gomez-Salazar, Francisco; Campos-Romero, Abraham; Gomez-Campaña, Humberto; Cruz-Zamudio, Cinthia; Chaidez-Felix, Mariano; Leon-Sicairos, Nidia; Velazquez-Roman, Jorge; Flores-Villaseñor, Hector; Muro-Amador, Secundino; Guadron-Llanos, Alma Marlene; Martinez-Garcia, Javier J; Murillo-Llanes, Joel; Sanchez-Cuen, Jaime; Llausas-Vargas, Alejando; Alapizco-Castro, Gerardo; Irineo-Cabrales, Ana; Graue-Hernandez, Enrique; Ramirez-Luquin, Tito; Canizalez-Roman, Adrian
2017-01-01
To assess the proportion of refractive errors in the Mexican population that visited primary care optometry clinics in fourteen states of Mexico. Refractive data from 676 856 patients aged 6 to 90y were collected from optometry clinics in fourteen states of Mexico between 2014 and 2015. The refractive errors were classified by the spherical equivalent (SE), computed as sphere + ½ cylinder: myopia (SE<-0.50 D), hyperopia (SE>+0.50 D), emmetropia (-0.50≤SE≤+0.50), and astigmatism alone (cylinder≤-0.25 D). Negative cylinder notation was used. The proportion (95% confidence interval) among all of the subjects was hyperopia 21.0% (20.9-21.0), emmetropia 40.7% (40.5-40.8), myopia 24.8% (24.7-24.9) and astigmatism alone 13.5% (13.4-13.5). Myopia was the most common refractive error and its frequency seemed to increase among the young population (10 to 29 years old), whereas hyperopia increased among the aging population (40 to 79 years old), and astigmatism alone showed a decreasing trend with age (6 to 90y; from 19.7% to 10.8%). There was a relationship between age and all refractive errors (approximately 60%, aged 50 and older). The proportion of any clinically important refractive error was higher in males (61.2%) than in females (58.3%; P <0.0001). From the fourteen states that collected information, the proportion of refractive error showed variability in different geographical areas of Mexico. Myopia is the most common refractive error in the population studied. This study provides the first data on refractive error in Mexico. Further programs and studies must be developed to address the refractive error needs of the Mexican population.
ERIC Educational Resources Information Center
Van Duzer, Eric
2011-01-01
This report introduces a short, hands-on activity that addresses a key challenge in teaching quantitative methods to students who lack confidence or experience with statistical analysis. Used near the beginning of the course, this activity helps students develop an intuitive insight regarding a number of abstract concepts which are key to…
Holt-Winters Forecasting: A Study of Practical Applications for Healthcare Managers
2006-05-25
List of tables and figures: Table 1, Holt-Winters smoothing parameters and Mean Absolute Percentage Errors for pseudoephedrine prescriptions; Table 2, ...confidence intervals; Figure 1, line plot of the pseudoephedrine prescriptions forecast using the smoothing parameters. The first data set represents monthly prescriptions of pseudoephedrine, a drug commonly prescribed to relieve nasal congestion and other
LIDAR forest inventory with single-tree, double- and single-phase procedures
Robert C. Parker; David L. Evans
2009-01-01
Light Detection and Ranging (LIDAR) data at 0.5- to 2-m postings were used with double-sample, stratified inventory procedures involving single-tree attribute relationships in mixed, natural, and planted species stands to yield sampling errors (one-half the confidence interval expressed as a percentage of the mean) ranging from ±2.1 percent to ±11.5...
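The sampling error quoted here is one-half of the confidence interval expressed as a percentage of the mean. The sketch below computes that quantity for a simple random sample of plot volumes; the plot values are invented, and the double-sample regression estimators actually used in the inventory would replace the simple standard error of the mean shown here.

```python
import numpy as np
from scipy.stats import t

def sampling_error_pct(values, alpha=0.05):
    """Half-width of the (1 - alpha) confidence interval as a percentage of the mean."""
    values = np.asarray(values, dtype=float)
    n = values.size
    se = values.std(ddof=1) / np.sqrt(n)
    half_width = t.ppf(1 - alpha / 2, df=n - 1) * se
    return 100.0 * half_width / values.mean()

# Illustrative per-plot volume estimates (m3/ha) from one inventory stratum (assumed values)
plots = np.array([212.0, 198.5, 240.1, 225.3, 207.8, 231.6, 219.9, 201.4])
print(f"sampling error = ±{sampling_error_pct(plots):.1f}% of the mean")
```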
ERIC Educational Resources Information Center
Gu, Fei; Skorupski, William P.; Hoyle, Larry; Kingston, Neal M.
2011-01-01
Ramsay-curve item response theory (RC-IRT) is a nonparametric procedure that estimates the latent trait using splines, and no distributional assumption about the latent trait is required. For item parameters of the two-parameter logistic (2-PL), three-parameter logistic (3-PL), and polytomous IRT models, RC-IRT can provide more accurate estimates…
Snake River Plain Geothermal Play Fairway Analysis - Phase 1 Raster Files
John Shervais
2015-10-09
Snake River Plain Play Fairway Analysis - Phase 1 CRS Raster Files. This dataset contains raster files created in ArcGIS. These raster images depict Common Risk Segment (CRS) maps for HEAT, PERMEABILITY, AND SEAL, as well as selected maps of Evidence Layers. These evidence layers consist of either Bayesian krige functions or kernel density functions, and include: (1) HEAT: Heat flow (Bayesian krige map), heat flow standard error on the krige function (data confidence), volcanic vent distribution as a function of age and size, groundwater temperature (equal-interval and natural-breaks bins), and groundwater temperature standard error. (2) PERMEABILITY: Fault and lineament maps, both as mapped and as kernel density functions, processed for both dilational tendency (TD) and slip tendency (ST), along with data confidence maps for each data type. Data types include mapped surface faults from USGS and Idaho Geological Survey databases, as well as unpublished mapping; lineations derived from maximum gradients in magnetic, deep gravity, and intermediate depth gravity anomalies. (3) SEAL: Seal maps based on presence and thickness of lacustrine sediments and base of SRP aquifer. Raster size is 2 km. All files generated in ArcGIS.
SPSS macros to compare any two fitted values from a regression model.
Weaver, Bruce; Dubois, Sacha
2012-12-01
In regression models with first-order terms only, the coefficient for a given variable is typically interpreted as the change in the fitted value of Y for a one-unit increase in that variable, with all other variables held constant. Therefore, each regression coefficient represents the difference between two fitted values of Y. But the coefficients represent only a fraction of the possible fitted value comparisons that might be of interest to researchers. For many fitted value comparisons that are not captured by any of the regression coefficients, common statistical software packages do not provide the standard errors needed to compute confidence intervals or carry out statistical tests-particularly in more complex models that include interactions, polynomial terms, or regression splines. We describe two SPSS macros that implement a matrix algebra method for comparing any two fitted values from a regression model. The !OLScomp and !MLEcomp macros are for use with models fitted via ordinary least squares and maximum likelihood estimation, respectively. The output from the macros includes the standard error of the difference between the two fitted values, a 95% confidence interval for the difference, and a corresponding statistical test with its p-value.
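The same matrix-algebra idea is easy to express outside SPSS: the difference between two fitted values is a contrast c'β with standard error sqrt(c'Vc), where V is the coefficient covariance matrix. The Python sketch below (not the !OLScomp/!MLEcomp macros themselves) illustrates it for an OLS model with an interaction term, using simulated data and two arbitrary covariate patterns.

```python
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(4)

# Illustrative model with an interaction: Y = b0 + b1*x1 + b2*x2 + b3*x1*x2 + e
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 0.5 * x1 - 0.3 * x2 + 0.8 * x1 * x2 + rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
df = n - X.shape[1]
V = (resid @ resid / df) * np.linalg.inv(X.T @ X)      # covariance matrix of the coefficients

# Two covariate patterns whose fitted values no single coefficient compares directly.
row_a = np.array([1.0, 1.0, 1.0, 1.0 * 1.0])           # x1 = 1,  x2 = 1
row_b = np.array([1.0, -1.0, 2.0, -1.0 * 2.0])         # x1 = -1, x2 = 2
c = row_a - row_b

diff = c @ beta                                        # difference of the two fitted values
se = np.sqrt(c @ V @ c)                                # its standard error
ci = (diff - t.ppf(0.975, df) * se, diff + t.ppf(0.975, df) * se)
p = 2 * t.sf(abs(diff / se), df)
print(f"difference = {diff:.3f}, SE = {se:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f}), p = {p:.3g}")
```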
Small sample mediation testing: misplaced confidence in bootstrapped confidence intervals.
Koopman, Joel; Howe, Michael; Hollenbeck, John R; Sin, Hock-Peng
2015-01-01
Bootstrapping is an analytical tool commonly used in psychology to test the statistical significance of the indirect effect in mediation models. Bootstrapping proponents have particularly advocated for its use for samples of 20-80 cases. This advocacy has been heeded, especially in the Journal of Applied Psychology, as researchers are increasingly utilizing bootstrapping to test mediation with samples in this range. We discuss reasons to be concerned with this escalation, and in a simulation study focused specifically on this range of sample sizes, we demonstrate not only that bootstrapping has insufficient statistical power to provide a rigorous hypothesis test in most conditions but also that bootstrapping has a tendency to exhibit an inflated Type I error rate. We then extend our simulations to investigate an alternative empirical resampling method as well as a Bayesian approach and demonstrate that they exhibit comparable statistical power to bootstrapping in small samples without the associated inflated Type I error. Implications for researchers testing mediation hypotheses in small samples are presented. For researchers wishing to use these methods in their own research, we have provided R syntax in the online supplemental materials. (c) 2015 APA, all rights reserved.
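For reference, the procedure under discussion, the percentile bootstrap of the indirect effect a·b, looks like the sketch below. The data are simulated, n = 40 is chosen only because it falls in the 20-80 range the abstract discusses, and the example illustrates the mechanics of the method rather than endorsing it in small samples.

```python
import numpy as np

rng = np.random.default_rng(5)

def indirect_effect(x, m, y):
    """a*b indirect effect from two OLS fits: M ~ X, then Y ~ X + M."""
    a = np.polyfit(x, m, 1)[0]                         # slope of M on X
    Xmat = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(Xmat, y, rcond=None)[0][2]     # coefficient on M controlling for X
    return a * b

# Small illustrative sample
n = 40
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.4 * m + 0.2 * x + rng.normal(size=n)

point = indirect_effect(x, m, y)

# Percentile bootstrap of the indirect effect
B = 5000
boot = np.empty(B)
for i in range(B):
    idx = rng.integers(0, n, size=n)                   # resample cases with replacement
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point:.3f}, 95% percentile bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```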
Ringler, Adam; Holland, Austin; Wilson, David
2017-01-01
Variability in seismic instrumentation performance plays a fundamental role in our ability to carry out experiments in observational seismology. Many such experiments rely on the assumed performance of various seismic sensors as well as on methods to isolate the sensors from nonseismic noise sources. We look at the repeatability of estimating the self‐noise, midband sensitivity, and the relative orientation by comparing three collocated Nanometrics Trillium Compact sensors. To estimate the repeatability, we conduct a total of 15 trials in which one sensor is repeatedly reinstalled, alongside two undisturbed sensors. We find that we are able to estimate the midband sensitivity with an error of no greater than 0.04% with a 99th percentile confidence, assuming a standard normal distribution. We also find that we are able to estimate mean sensor self‐noise to within ±5.6 dB with a 99th percentile confidence in the 30–100‐s‐period band. Finally, we find our relative orientation errors have a mean difference in orientation of 0.0171° from the reference, but our trials have a standard deviation of 0.78°.
Dang, Mia; Ramsaran, Kalinda D; Street, Melissa E; Syed, S Noreen; Barclay-Goddard, Ruth; Stratford, Paul W; Miller, Patricia A
2011-01-01
To estimate the predictive accuracy and clinical usefulness of the Chedoke-McMaster Stroke Assessment (CMSA) predictive equations. A longitudinal prognostic study using historical data obtained from 104 patients admitted post cerebrovascular accident was undertaken. Data were abstracted for all patients undergoing rehabilitation post stroke who also had documented admission and discharge CMSA scores. Published predictive equations were used to determine predicted outcomes. To determine the accuracy and clinical usefulness of the predictive model, shrinkage coefficients and predictions with 95% confidence bands were calculated. Complete data were available for 74 patients with a mean age of 65.3±12.4 years. The shrinkage values for the six Impairment Inventory (II) dimensions varied from -0.05 to 0.09; the shrinkage value for the Activity Inventory (AI) was 0.21. The error associated with predictive values was greater than ±1.5 stages for the II dimensions and greater than ±24 points for the AI. This study shows that the large error associated with the predictions (as defined by the confidence band) for the CMSA II and AI limits their clinical usefulness as a predictive measure. Further research to establish predictive models using alternative statistical procedures is warranted.
[A site index model for Larix principis-rupprechtii plantation in Saihanba, north China].
Wang, Dong-zhi; Zhang, Dong-yan; Jiang, Feng-ling; Bai, Ye; Zhang, Zhi-dong; Huang, Xuan-rui
2015-11-01
It is often difficult to estimate site indices for different types of plantation by using an ordinary site index model. The objective of this paper was to establish a site index model for plantations in varied site conditions, and assess the site qualities. In this study, a nonlinear mixed site index model was constructed based on data from the second class forest resources inventory and 173 temporary sample plots. The results showed that the main limiting factors for height growth of Larix principis-rupprechtii were elevation, slope, soil thickness and soil type. A linear regression model was constructed for the main constraining site factors and dominant tree height, with the coefficient of determination being 0.912, and the baseline age of Larix principis-rupprechtii determined as 20 years. The nonlinear mixed site index model parameters for the main site types were estimated (R2 > 0.85, the error between the predicted value and the actual value was in the range of -0.43 to 0.45, with an average root mean squared error (RMSE) in the range of 0.907 to 1.148). The estimation error between the predicted value and the actual value of dominant tree height for the main site types was in the confidence interval of [-0.95, 0.95]. The site quality of the high altitude-shady-sandy loam-medium soil layer was the highest and that of low altitude-sunny-sandy loam-medium soil layer was the lowest, while the other two sites were moderate.
BREAST: a novel method to improve the diagnostic efficacy of mammography
NASA Astrophysics Data System (ADS)
Brennan, P. C.; Tapia, K.; Ryan, J.; Lee, W.
2013-03-01
High quality breast imaging and accurate image assessment are critical to the early diagnosis, treatment and management of women with breast cancer. Breast Screen Reader Assessment Strategy (BREAST) provides a platform, accessible by researchers and clinicians world-wide, which will contain image databases, algorithms to assess reader performance and on-line systems for image evaluation. The platform will contribute to the diagnostic efficacy of breast imaging in Australia and beyond on two fronts: reducing errors in mammography, and transforming our assessment of novel technologies and techniques. Mammography is the primary diagnostic tool for detecting breast cancer, with over 800,000 women X-rayed each year in Australia; however, it fails to detect 30% of breast cancers, a number of which are visible on the image [1-6]. BREAST will monitor the mistakes, identify reasons for mammographic errors, and facilitate innovative solutions to reduce error rates. The BREAST platform has the potential to enable expert assessment of breast imaging innovations, anywhere in the world where experts or innovations are located. Currently, innovations are often assessed by the limited number of individuals who happen to be geographically close to the innovation, resulting in equivocal studies with low statistical power. BREAST will transform this paradigm by enabling large numbers of experts to assess any new method or technology using our embedded evaluation methods. We are confident that this world-first system will play an important part in the future efficacy of breast imaging.
Experimental investigation of false positive errors in auditory species occurrence surveys
Miller, David A.W.; Weir, Linda A.; McClintock, Brett T.; Grant, Evan H. Campbell; Bailey, Larissa L.; Simons, Theodore R.
2012-01-01
False positive errors are a significant component of many ecological data sets, which in combination with false negative errors, can lead to severe biases in conclusions about ecological systems. We present results of a field experiment where observers recorded observations for known combinations of electronically broadcast calling anurans under conditions mimicking field surveys to determine species occurrence. Our objectives were to characterize false positive error probabilities for auditory methods based on a large number of observers, to determine if targeted instruction could be used to reduce false positive error rates, and to establish useful predictors of among-observer and among-species differences in error rates. We recruited 31 observers, ranging in abilities from novice to expert, that recorded detections for 12 species during 180 calling trials (66,960 total observations). All observers made multiple false positive errors and on average 8.1% of recorded detections in the experiment were false positive errors. Additional instruction had only minor effects on error rates. After instruction, false positive error probabilities decreased by 16% for treatment individuals compared to controls with broad confidence interval overlap of 0 (95% CI: -46 to 30%). This coincided with an increase in false negative errors due to the treatment (26%; -3 to 61%). Differences among observers in false positive and in false negative error rates were best predicted by scores from an online test and a self-assessment of observer ability completed prior to the field experiment. In contrast, years of experience conducting call surveys was a weak predictor of error rates. False positive errors were also more common for species that were played more frequently, but were not related to the dominant spectral frequency of the call. Our results corroborate other work that demonstrates false positives are a significant component of species occurrence data collected by auditory methods. Instructing observers to only report detections they are completely certain are correct is not sufficient to eliminate errors. As a result, analytical methods that account for false positive errors will be needed, and independent testing of observer ability is a useful predictor for among-observer variation in observation error rates.
Underresolved absorption spectroscopy of OH radicals in flames using broadband UV LEDs
NASA Astrophysics Data System (ADS)
White, Logan; Gamba, Mirko
2018-04-01
A broadband absorption spectroscopy diagnostic based on underresolution of the spectral absorption lines is evaluated for the inference of species mole fraction and temperature in combustion systems from spectral fitting. The approach uses spectrally broadband UV light emitting diodes and leverages low resolution, small form factor spectrometers. Through this combination, the method can be used to develop high precision measurement sensors. The challenges of underresolved spectroscopy are explored and addressed using spectral derivative fitting, which is found to generate measurements with high precision and accuracy. The diagnostic is demonstrated with experimental measurements of gas temperature and OH mole fraction in atmospheric air/methane premixed laminar flat flames. Measurements exhibit high precision, good agreement with 1-D flame simulations, and high repeatability. A newly developed model of uncertainty in underresolved spectroscopy is applied to estimate two-dimensional confidence regions for the measurements. The results of the uncertainty analysis indicate that the errors in the outputs of the spectral fitting procedure are correlated. The implications of the correlation between uncertainties for measurement interpretation are discussed.
Xiao, Yongling; Abrahamowicz, Michal
2010-03-30
We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs and type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid the serious variance under-estimation of conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of clustered event times.
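To make the resampling scheme concrete, here is a minimal cluster-bootstrap sketch. It is not the authors' implementation: the Cox fit is replaced by an ordinary least-squares slope as a stand-in statistic so the example stays self-contained, and the clustered data are simulated.

```python
# Schematic cluster bootstrap: resample whole clusters with replacement and
# refit the model on each resample. The Cox regression is replaced by a
# simple least-squares slope as a stand-in statistic; in practice the refit
# would be a Cox model on the resampled event times.
import numpy as np

rng = np.random.default_rng(0)

# Toy clustered data: 40 clusters, 10 subjects each, with a cluster effect.
n_clusters, n_per = 40, 10
cluster_id = np.repeat(np.arange(n_clusters), n_per)
x = rng.normal(size=n_clusters * n_per)
cluster_effect = rng.normal(scale=0.8, size=n_clusters)[cluster_id]
y = 0.5 * x + cluster_effect + rng.normal(size=n_clusters * n_per)

def fit_statistic(xs, ys):
    # Stand-in for the Cox coefficient: ordinary least-squares slope.
    return np.polyfit(xs, ys, 1)[0]

boot_estimates = []
for _ in range(1000):
    sampled = rng.choice(n_clusters, size=n_clusters, replace=True)
    idx = np.concatenate([np.where(cluster_id == c)[0] for c in sampled])
    boot_estimates.append(fit_statistic(x[idx], y[idx]))

boot_estimates = np.array(boot_estimates)
print("cluster-bootstrap SE:", boot_estimates.std(ddof=1))
print("95% percentile CI:", np.percentile(boot_estimates, [2.5, 97.5]))
```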
Patient safety education to change medical students' attitudes and sense of responsibility.
Roh, Hyerin; Park, Seok Ju; Kim, Taekjoong
2015-01-01
This study examined changes in the perceptions and attitudes as well as the sense of individual and collective responsibility in medical students after they received patient safety education. A three-day patient safety curriculum was implemented for third-year medical students shortly before entering their clerkship. Before and after training, we administered a questionnaire, which was analysed quantitatively. Additionally, we asked students to answer questions about their expected behaviours in response to two case vignettes. Their answers were analysed qualitatively. There was improvement in students' concepts of patient safety after training. Before training, they showed good comprehension of the inevitability of error, but most students blamed individuals for errors and expressed a strong sense of individual responsibility. After training, students increasingly attributed errors to system dysfunction and reported more self-confidence in speaking up about colleagues' errors. However, due to the hierarchical culture, students still described difficulties communicating with senior doctors. Patient safety education effectively shifted students' attitudes towards systems-based thinking and increased their sense of collective responsibility. Strategies for improving superior-subordinate communication within a hierarchical culture should be added to the patient safety curriculum.
Farzandipour, Mehrdad; Sheikhtaheri, Abbas
2009-01-01
To evaluate the accuracy of procedural coding and the factors that influence it, 246 records were randomly selected from four teaching hospitals in Kashan, Iran. “Recodes” were assigned blindly and then compared to the original codes. Furthermore, the coders' professional behaviors were carefully observed during the coding process. Coding errors were classified as major or minor. The relations between coding accuracy and possible effective factors were analyzed by χ2 or Fisher exact tests as well as the odds ratio (OR) and the 95 percent confidence interval for the OR. The results showed that using a tabular index for rechecking codes reduces errors (83 percent vs. 72 percent accuracy). Further, more thorough documentation by the clinician positively affected coding accuracy, though this relation was not significant. Readability of records decreased errors overall (p = .003), including major ones (p = .012). Moreover, records with no abbreviations had fewer major errors (p = .021). In conclusion, not using abbreviations, ensuring more readable documentation, and paying more attention to available information increased coding accuracy and the quality of procedure databases. PMID:19471647
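For readers unfamiliar with how such odds ratios and intervals are obtained, a small sketch follows. The counts are hypothetical, and the Woolf-type interval is only one standard choice, not necessarily the exact procedure used in the study.

```python
# Hedged sketch (not the study's code): odds ratio and Woolf-type 95% CI
# for a 2x2 table of exposure (e.g., readable record vs. not) against
# outcome (major coding error vs. none). Counts are invented.
import math

a, b = 12, 88   # exposed: with error, without error
c, d = 30, 70   # unexposed: with error, without error

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
z = 1.96  # 95% confidence
lo = math.exp(math.log(odds_ratio) - z * se_log_or)
hi = math.exp(math.log(odds_ratio) + z * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```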
2014-01-01
Background Uncorrected refractive error is one of the leading causes of amblyopia and exposes children to poor school performance. It keeps them from productive working lives, resulting in severe economic and social losses later in adulthood. The objective of the study was to assess the prevalence of uncorrected refractive error and its associated factors among school children in Debre Markos District. Method A cross-sectional study design was employed. Four hundred thirty-two students were randomly selected using a multistage stratified sampling technique. The data were collected by trained ophthalmic nurses through interviews, structured questionnaires and physical examinations. A Snellen visual acuity chart was used to measure the visual acuity of students. Students with visual acuity less than 6/12 underwent further examination using an autorefractor and were cross-checked using spherical and cylindrical lenses. The data were entered into EpiData statistical software version 3.1 and analyzed with SPSS version 20. Statistical significance was set at α ≤ 0.05. Descriptive, bivariate and multivariate analyses were done using odds ratios with 95% confidence intervals. Result Of the 432 students selected for the study, 420 (97.2%) were in the age group 7–15 years. The mean age was 12 ± 2.1 (SD) years. The overall prevalence of refractive error was 10.2% (43 students). Myopia was the most dominant type at 5.47%, followed by astigmatism (1.9%) and hyperopia (1.4%) in both sexes. Female sex (AOR: 3.96, 95% CI: 1.55-10.09), higher grade level (AOR: 4.82, 95% CI: 1.98-11.47) and regular computer use (AOR: 4.53, 95% CI: 1.58-12.96) were significantly associated with refractive error. Conclusion The burden of uncorrected refractive error is high among primary school children. Myopia was common in both sexes. The potential risk factors were sex, regular use of computers and higher grade level. Hence, school health programs should work on health information dissemination and eye health care service provision. PMID:25070579
Mehta, Saurabh P; George, Hannah R; Goering, Christian A; Shafer, Danielle R; Koester, Alan; Novotny, Steven
2017-11-01
Clinical measurement study. The push-off test (POT) was recently conceived and found to be reliable and valid for assessing weight bearing through the injured wrist or elbow. However, further research with a larger sample can lend credence to the preliminary findings supporting the use of the POT. This study examined the interrater reliability, construct validity, and measurement error of the POT in patients with wrist conditions. Participants with musculoskeletal (MSK) wrist conditions were recruited. Performance on the POT, grip strength, and isometric strength of the wrist extensors were assessed. The shortened version of the Disabilities of the Arm, Shoulder and Hand questionnaire and a numeric pain rating scale were completed. The intraclass correlation coefficient assessed interrater reliability of the POT. Pearson correlation coefficients (r) examined the concurrent relationships between the POT and the other measures. The standard error of measurement and the minimal detectable change at the 90% confidence interval were assessed as the measurement error and the index of true change for the POT. A total of 50 participants with different elbow or wrist conditions (age: 48.1 ± 16.6 years) were included in this study. The results strongly supported the interrater reliability (intraclass correlation coefficient: 0.96 and 0.93 for the affected and unaffected sides, respectively) of the POT in patients with wrist MSK conditions. The POT showed convergent relationships with grip strength on the injured side (r = 0.89) and wrist extensor strength (r = 0.7). The POT showed a small standard error of measurement (1.9 kg). The minimal detectable change at the 90% confidence interval for the POT was 4.4 kg for this sample. This study provides additional evidence to support the reliability and validity of the POT. This is the first study to provide values for the measurement error and true change on POT scores in patients with wrist MSK conditions. Further research should examine the responsiveness and discriminant validity of the POT in patients with wrist conditions. Copyright © 2017 Hanley & Belfus. Published by Elsevier Inc. All rights reserved.
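The reported SEM and MDC values follow from standard reliability formulas; the sketch below illustrates them with hypothetical inputs (the SD value is invented, not taken from the study data).

```python
# Sketch of the usual SEM and MDC90 formulas from reliability statistics;
# this mirrors the quantities reported in the abstract, but the inputs are
# hypothetical.
import math

sd_pot = 7.4    # hypothetical between-subject SD of POT scores (kg)
icc = 0.93      # reliability coefficient (e.g., interrater ICC)

sem = sd_pot * math.sqrt(1 - icc)      # standard error of measurement
mdc90 = 1.645 * math.sqrt(2) * sem     # minimal detectable change at 90% CI
print(f"SEM   = {sem:.2f} kg")
print(f"MDC90 = {mdc90:.2f} kg")
```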
Giltrap, Donna L; Ausseil, Anne-Gaëlle E
2016-01-01
The availability of detailed input data frequently limits the application of process-based models at large scale. In this study, we produced simplified meta-models of the simulated nitrous oxide (N2O) emission factors (EF) using NZ-DNDC. Monte Carlo simulations were performed and the results investigated using multiple regression analysis to produce simplified meta-models of EF. These meta-models were then used to estimate direct N2O emissions from grazed pastures in New Zealand. New Zealand EF maps were generated using the meta-models with data from national scale soil maps. Direct emissions of N2O from grazed pasture were calculated by multiplying the EF map with a nitrogen (N) input map. Three meta-models were considered. Model 1 included only the soil organic carbon in the top 30cm (SOC30), Model 2 also included a clay content factor, and Model 3 added the interaction between SOC30 and clay. The median annual national direct N2O emissions from grazed pastures estimated using each model (assuming model errors were purely random) were: 9.6GgN (Model 1), 13.6GgN (Model 2), and 11.9GgN (Model 3). These values corresponded to an average EF of 0.53%, 0.75% and 0.63% respectively, while the corresponding average EF using New Zealand national inventory values was 0.67%. If the model error can be assumed to be independent for each pixel then the 95% confidence interval for the N2O emissions was of the order of ±0.4-0.7%, which is much lower than existing methods. However, spatial correlations in the model errors could invalidate this assumption. Under the extreme assumption that the model error for each pixel was identical the 95% confidence interval was approximately ±100-200%. Therefore further work is needed to assess the degree of spatial correlation in the model errors. Copyright © 2015 Elsevier B.V. All rights reserved.
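The sketch below illustrates, in schematic form, how a fitted regression meta-model could be applied pixel by pixel to gridded soil maps and combined with an N-input map. The coefficients, units, and toy maps are invented placeholders, not the NZ-DNDC meta-model reported in the study.

```python
# Illustrative sketch only: apply a hypothetical emission-factor meta-model
# of the form EF(%) = b0 + b1*SOC30 + b2*clay + b3*SOC30*clay to toy maps,
# then combine with an N-input map to get direct N2O-N emissions per pixel.
import numpy as np

b0, b1, b2, b3 = 0.2, 0.04, 0.01, -0.0005   # invented coefficients

rng = np.random.default_rng(1)
soc30 = rng.uniform(2, 12, size=(100, 100))      # soil C, top 30 cm (toy units)
clay = rng.uniform(5, 40, size=(100, 100))       # clay content (%)
n_input = rng.uniform(50, 400, size=(100, 100))  # kg N/ha/yr (toy values)

ef = b0 + b1 * soc30 + b2 * clay + b3 * soc30 * clay   # emission factor, %
n2o_n = (ef / 100.0) * n_input                         # kg N2O-N/ha/yr
print("mean EF (%):", ef.mean())
print("summed direct N2O-N (toy units):", n2o_n.sum())
```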
Female residents experiencing medical errors in general internal medicine: a qualitative study.
Mankaka, Cindy Ottiger; Waeber, Gérard; Gachoud, David
2014-07-10
Doctors, especially doctors-in-training such as residents, make errors. They have to face the consequences even though today's approach to errors emphasizes systemic factors. Doctors' individual characteristics play a role in how medical errors are experienced and dealt with. The role of gender has previously been examined in a few quantitative studies that have yielded conflicting results. In the present study, we sought to qualitatively explore the experience of female residents with respect to medical errors. In particular, we explored the coping mechanisms displayed after an error. This study took place in the internal medicine department of a Swiss university hospital. Within a phenomenological framework, semi-structured interviews were conducted with eight female residents in general internal medicine. All interviews were audiotaped, fully transcribed, and thereafter analyzed. Seven main themes emerged from the interviews: (1) A perception that there is an insufficient culture of safety and error; (2) The perceived main causes of errors, which included fatigue, work overload, inadequate level of competences in relation to assigned tasks, and dysfunctional communication; (3) Negative feelings in response to errors, which included different forms of psychological distress; (4) Variable attitudes of the hierarchy toward residents involved in an error; (5) Talking about the error, as the core coping mechanism; (6) Defensive and constructive attitudes toward one's own errors; and (7) Gender-specific experiences in relation to errors. Such experiences consisted in (a) perceptions that male residents were more confident and therefore less affected by errors than their female counterparts and (b) perceptions that sexist attitudes among male supervisors can occur and worsen an already painful experience. This study offers an in-depth account of how female residents specifically experience and cope with medical errors. Our interviews with female residents convey the sense that gender possibly influences the experience with errors, including the kind of coping mechanisms displayed. However, we acknowledge that the lack of a direct comparison between female and male participants represents a limitation while aiming to explore the role of gender.
NASA Astrophysics Data System (ADS)
Zeyl, Timothy; Yin, Erwei; Keightley, Michelle; Chau, Tom
2016-04-01
Objective. Error-related potentials (ErrPs) have the potential to guide classifier adaptation in BCI spellers, for addressing non-stationary performance as well as for online optimization of system parameters, by providing imperfect or partial labels. However, the usefulness of ErrP-based labels for BCI adaptation has not been established in comparison to other partially supervised methods. Our objective is to make this comparison by retraining a two-step P300 speller on a subset of confident online trials using naïve labels taken from speller output, where confidence is determined either by (i) ErrP scores, (ii) posterior target scores derived from the P300 potential, or (iii) a hybrid of these scores. We further wish to evaluate the ability of partially supervised adaptation and retraining methods to adjust to a new stimulus-onset asynchrony (SOA), a necessary step towards online SOA optimization. Approach. Eleven consenting able-bodied adults attended three online spelling sessions on separate days with feedback in which SOAs were set at 160 ms (sessions 1 and 2) and 80 ms (session 3). A post hoc offline analysis and a simulated online analysis were performed on sessions two and three to compare multiple adaptation methods. Area under the curve (AUC) and symbols spelled per minute (SPM) were the primary outcome measures. Main results. Retraining using supervised labels confirmed improvements of 0.9 percentage points (session 2, p < 0.01) and 1.9 percentage points (session 3, p < 0.05) in AUC using same-day training data over using data from a previous day, which supports classifier adaptation in general. Significance. Using posterior target score alone as a confidence measure resulted in the highest SPM of the partially supervised methods, indicating that ErrPs are not necessary to boost the performance of partially supervised adaptive classification. Partial supervision significantly improved SPM at a novel SOA, showing promise for eventual online SOA optimization.
Moritz, Steffen; Voigt, Miriam; Köther, Ulf; Leighton, Lucy; Kjahili, Besiane; Babur, Zehra; Jungclaussen, David; Veckenstedt, Ruth; Grzella, Karsten
2014-06-01
There is emerging evidence that the induction of doubt can reduce positive symptoms in patients with schizophrenia. Based on prior investigations indicating that brief psychological interventions may attenuate core aspects of delusions, we set up a proof of concept study using a virtual reality experiment. We explored whether feedback for false judgments positively influences delusion severity. A total of 33 patients with schizophrenia participated in the experiment. Following a short practice trial, patients were instructed to navigate through a virtual street on two occasions (noise versus no noise), where they met six different pedestrians in each condition. Subsequently, patients were asked to recollect the pedestrians and their corresponding facial affect in a recognition task graded for confidence. Before and after the experiment, the Paranoia Checklist (frequency subscale) was administered. The Paranoia Checklist score declined significantly from pre to post at a medium effect size. We split the sample into those with some improvement versus those that either showed no improvement, or worsened. Improvement was associated with lower confidence ratings (both during the experiment, particularly for incorrect responses, and according to retrospect assessment). No control condition, unclear if improvement is sustained. The study tentatively suggests that a brief virtual reality experiment involving error feedback may ameliorate delusional ideas. Randomized controlled trials and dismantling studies are now needed to substantiate the findings and to pinpoint the underlying therapeutic mechanisms, for example error feedback or fostering attenuation of confidence judgments in the face of incomplete evidence. Copyright © 2013 Elsevier Ltd. All rights reserved.
Abstract: Inference and Interval Estimation for Indirect Effects With Latent Variable Models.
Falk, Carl F; Biesanz, Jeremy C
2011-11-30
Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods. This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected accelerated (BCa) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), partial posterior predictive (Biesanz, Falk, & Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; and (d) 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; and β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06GHz Intel Xeon processors running R and OpenMx. Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BCa bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates.
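A minimal sketch of the percentile bootstrap for an indirect effect is given below. It uses observed-variable regressions on synthetic data rather than the latent-variable models of the simulation study, so it only illustrates the PC-bootstrap logic.

```python
# Percentile (PC) bootstrap for an indirect effect a*b, sketched with
# observed variables and synthetic data (not the latent-variable setup of
# the study).
import numpy as np

rng = np.random.default_rng(42)
n = 200
x = rng.normal(size=n)
m = 0.39 * x + rng.normal(size=n)              # mediator
y = 0.39 * m + 0.10 * x + rng.normal(size=n)   # outcome

def indirect(xs, ms, ys):
    a = np.polyfit(xs, ms, 1)[0]                          # slope of M ~ X
    design = np.column_stack([np.ones_like(xs), ms, xs])  # Y ~ 1 + M + X
    b = np.linalg.lstsq(design, ys, rcond=None)[0][1]
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    boot.append(indirect(x[idx], m[idx], y[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect(x, m, y):.3f}, PC 95% CI = ({lo:.3f}, {hi:.3f})")
```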
Faggion, Clovis Mariano; Aranda, Luisiana; Diaz, Karla Tatiana; Shih, Ming-Chieh; Tu, Yu-Kang; Alarcón, Marco Antonio
2016-01-01
Information on precision of treatment-effect estimates is pivotal for understanding research findings. In animal experiments, which provide important information for supporting clinical trials in implant dentistry, inaccurate information may lead to biased clinical trials. The aim of this methodological study was to determine whether sample size calculation, standard errors, and confidence intervals for treatment-effect estimates are reported accurately in publications describing animal experiments in implant dentistry. MEDLINE (via PubMed), Scopus, and SciELO databases were searched to identify reports involving animal experiments with dental implants published from September 2010 to March 2015. Data from publications were extracted into a standardized form with nine items related to precision of treatment estimates and experiment characteristics. Data selection and extraction were performed independently and in duplicate, with disagreements resolved by discussion-based consensus. The chi-square and Fisher exact tests were used to assess differences in reporting according to study sponsorship type and impact factor of the journal of publication. The sample comprised reports of 161 animal experiments. Sample size calculation was reported in five (2%) publications. P values and confidence intervals were reported in 152 (94%) and 13 (8%) of these publications, respectively. Standard errors were reported in 19 (12%) publications. Confidence intervals were better reported in publications describing industry-supported animal experiments (P = .03) and with a higher impact factor (P = .02). Information on precision of estimates is rarely reported in publications describing animal experiments in implant dentistry. This lack of information makes it difficult to evaluate whether the translation of animal research findings to clinical trials is adequate.
Decreasing patient identification band errors by standardizing processes.
Walley, Susan Chu; Berger, Stephanie; Harris, Yolanda; Gallizzi, Gina; Hayes, Leslie
2013-04-01
Patient identification (ID) bands are an essential component in patient ID. Quality improvement methodology has been applied as a model to reduce ID band errors, although previous studies have not addressed standardization of ID bands. Our specific aim was to decrease ID band errors by 50% in a 12-month period. The Six Sigma DMAIC (define, measure, analyze, improve, and control) quality improvement model was the framework for this study. ID bands at a tertiary care pediatric hospital were audited from January 2011 to January 2012, with continued audits to June 2012 to confirm the new process was in control. After analysis, the major improvement strategy implemented was standardization of the styles of ID bands and labels. Additional interventions included educational initiatives regarding the new ID band processes and disseminating institutional and nursing unit data. A total of 4556 ID bands were audited, with a preimprovement average ID band error rate of 9.2%. Significant variation in the ID band process was observed, including styles of ID bands. Interventions were focused on standardization of the ID band and labels. The ID band error rate improved to 5.2% in 9 months (95% confidence interval: 2.5-5.5; P < .001) and was maintained for 8 months. Standardization of ID bands and labels in conjunction with other interventions resulted in a statistically significant decrease in ID band error rates. This decrease in ID band error rates was maintained over the subsequent 8 months.
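As an illustration of the statistics behind such an audit comparison, the sketch below computes a difference in error proportions with a Wald 95% confidence interval and a pooled z-test. The counts are invented, not the study's audit data.

```python
# Hedged sketch of a two-proportion comparison like a pre/post ID band
# audit: difference in error rates with a Wald 95% CI and a pooled z-test.
import math

e1, n1 = 210, 2280   # pre-improvement: errors, bands audited (hypothetical)
e2, n2 = 118, 2276   # post-improvement: errors, bands audited (hypothetical)

p1, p2 = e1 / n1, e2 / n2
diff = p1 - p2
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
ci = (diff - 1.96 * se, diff + 1.96 * se)

p_pool = (e1 + e2) / (n1 + n2)
z = diff / math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
print(f"error rate {p1:.1%} -> {p2:.1%}, difference = {diff:.1%}")
print(f"95% CI for difference = ({ci[0]:.1%}, {ci[1]:.1%}), z = {z:.2f}")
```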
Complacency and Automation Bias in the Use of Imperfect Automation.
Wickens, Christopher D; Clegg, Benjamin A; Vieane, Alex Z; Sebok, Angelia L
2015-08-01
We examine the effects of two different kinds of decision-aiding automation errors on human-automation interaction (HAI), occurring at the first failure following repeated exposure to correctly functioning automation. The two errors are incorrect advice, triggering the automation bias, and missing advice, reflecting complacency. Contrasts between analogous automation errors in alerting systems, rather than decision aiding, have revealed that alerting false alarms are more problematic to HAI than alerting misses are. Prior research in decision aiding, although contrasting the two aiding errors (incorrect vs. missing), has confounded error expectancy. Participants performed an environmental process control simulation with and without decision aiding. For those with the aid, automation dependence was created through several trials of perfect aiding performance, and an unexpected automation error was then imposed in which automation was either gone (one group) or wrong (a second group). A control group received no automation support. The correct aid supported faster and more accurate diagnosis and lower workload. The aid failure degraded all three variables, but "automation wrong" had a much greater effect on accuracy, reflecting the automation bias, than did "automation gone," reflecting the impact of complacency. Some complacency was manifested for automation gone, by a longer latency and more modest reduction in accuracy. Automation wrong, creating the automation bias, appears to be a more problematic form of automation error than automation gone, reflecting complacency. Decision-aiding automation should indicate its lower degree of confidence in uncertain environments to avoid the automation bias. © 2015, Human Factors and Ergonomics Society.
Concomitant prescribing and dispensing errors at a Brazilian hospital: a descriptive study
Silva, Maria das Dores Graciano; Rosa, Mário Borges; Franklin, Bryony Dean; Reis, Adriano Max Moreira; Anchieta, Lêni Márcia; Mota, Joaquim Antônio César
2011-01-01
OBJECTIVE: To analyze the prevalence and types of prescribing and dispensing errors occurring with high-alert medications and to propose preventive measures to avoid errors with these medications. INTRODUCTION: The prevalence of adverse events in health care has increased, and medication errors are probably the most common cause of these events. Pediatric patients are known to be a high-risk group and are an important target in medication error prevention. METHODS: Observers collected data on prescribing and dispensing errors occurring with high-alert medications for pediatric inpatients in a university hospital. In addition to classifying the types of error that occurred, we identified cases of concomitant prescribing and dispensing errors. RESULTS: One or more prescribing errors, totaling 1,632 errors, were found in 632 (89.6%) of the 705 high-alert medications that were prescribed and dispensed. We also identified at least one dispensing error in each high-alert medication dispensed, totaling 1,707 errors. Among these dispensing errors, 723 (42.4%) content errors occurred concomitantly with the prescribing errors. A subset of dispensing errors may have occurred because of poor prescription quality. The observed concomitancy should be examined carefully because improvements in the prescribing process could potentially prevent these problems. CONCLUSION: The system of drug prescribing and dispensing at the hospital investigated in this study should be improved by incorporating the best practices of medication safety and preventing medication errors. High-alert medications may be used as triggers for improving the safety of the drug-utilization system. PMID:22012039
New Gear Transmission Error Measurement System Designed
NASA Technical Reports Server (NTRS)
Oswald, Fred B.
2001-01-01
The prime source of vibration and noise in a gear system is the transmission error between the meshing gears. Transmission error is caused by manufacturing inaccuracy, mounting errors, and elastic deflections under load. Gear designers often attempt to compensate for transmission error by modifying gear teeth. This is done traditionally by a rough "rule of thumb" or more recently under the guidance of an analytical code. In order for a designer to have confidence in a code, the code must be validated through experiment. NASA Glenn Research Center contracted with the Design Unit of the University of Newcastle in England for a system to measure the transmission error of spur and helical test gears in the NASA Gear Noise Rig. The new system measures transmission error optically by means of light beams directed by lenses and prisms through gratings mounted on the gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. A photodetector circuit converts the light to an analog electrical signal. To increase accuracy and reduce "noise" due to transverse vibration, there are parallel light paths at the top and bottom of the gears. The two signals are subtracted via differential amplifiers in the electronics package. The output of the system is 40 mV/μm, giving a resolution in the time domain of better than 0.1 μm, and discrimination in the frequency domain of better than 0.01 μm. The new system will be used to validate gear analytical codes and to investigate mechanisms that produce vibration and noise in parallel axis gears.
Quantifying confidence in density functional theory predictions of magnetic ground states
NASA Astrophysics Data System (ADS)
Houchins, Gregory; Viswanathan, Venkatasubramanian
2017-10-01
Density functional theory (DFT) simulations, at the generalized gradient approximation (GGA) level, are being routinely used for material discovery based on high-throughput descriptor-based searches. The success of descriptor-based material design relies on eliminating bad candidates and keeping good candidates for further investigation. While DFT has been widely successful for the former, oftentimes good candidates are lost due to the uncertainty associated with the DFT-predicted material properties. Uncertainty associated with DFT predictions has gained prominence and has led to the development of exchange correlation functionals that have built-in error estimation capability. In this work, we demonstrate the use of built-in error estimation capabilities within the BEEF-vdW exchange correlation functional for quantifying the uncertainty associated with the magnetic ground state of solids. We demonstrate this approach by calculating the uncertainty estimate for the energy difference between the different magnetic states of solids and comparing it against a range of GGA exchange correlation functionals, as is done in many first-principles calculations of materials. We show that this estimate reasonably bounds the range of values obtained with the different GGA functionals. The estimate is determined as a postprocessing step and thus provides a computationally robust and systematic approach to estimating uncertainty associated with predictions of magnetic ground states. We define a confidence value (c-value) that incorporates all calculated magnetic states in order to quantify the concurrence of the prediction at the GGA level, and argue that predictions of magnetic ground states from GGA-level DFT are incomplete without an accompanying c-value. We demonstrate the utility of this method using a case study of Li-ion and Na-ion cathode materials, and the c-value metric correctly identifies that GGA-level DFT will have low predictability for NaFePO4F. Further, there needs to be a systematic test of a collection of plausible magnetic states, especially in identifying antiferromagnetic (AFM) ground states. We believe that our approach of estimating uncertainty can be readily incorporated into all high-throughput computational material discovery efforts, and this will lead to a dramatic increase in the likelihood of finding good candidate materials.
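One plausible way to operationalize such a confidence value is sketched below: the fraction of ensemble functionals that agree with the best-fit functional on the sign of the energy difference between magnetic states. This is an assumption-laden illustration of the idea, not the BEEF-vdW implementation.

```python
# Rough, hypothetical sketch of a "c-value"-style agreement metric: given
# energy differences E(AFM) - E(FM) from an ensemble of exchange-correlation
# functionals, report the fraction of ensemble members that predict the
# same ground state as the best-fit functional. Numbers are invented.
import numpy as np

rng = np.random.default_rng(7)
# Ensemble of predicted energy differences (eV per formula unit, toy values).
delta_e = rng.normal(loc=0.03, scale=0.05, size=2000)
best_fit = 0.03  # best-fit functional's prediction

# Fraction of the ensemble agreeing on the sign, i.e., on the ground state.
c_value = np.mean(np.sign(delta_e) == np.sign(best_fit))
print(f"c-value = {c_value:.2f}")
```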
Characterizing and estimating noise in InSAR and InSAR time series with MODIS
Barnhart, William D.; Lohman, Rowena B.
2013-01-01
InSAR time series analysis is increasingly used to image subcentimeter displacement rates of the ground surface. The precision of InSAR observations is often affected by several noise sources, including spatially correlated noise from the turbulent atmosphere. Under ideal scenarios, InSAR time series techniques can substantially mitigate these effects; however, in practice the temporal distribution of InSAR acquisitions over much of the world exhibits seasonal biases, long temporal gaps, and insufficient acquisitions to confidently obtain the precisions desired for tectonic research. Here, we introduce a technique for constraining the magnitude of errors expected from atmospheric phase delays on the ground displacement rates inferred from an InSAR time series using independent observations of precipitable water vapor from MODIS. We implement a Monte Carlo error estimation technique based on multiple (100+) MODIS-based time series that sample date ranges close to the acquisition times of the available SAR imagery. This stochastic approach allows evaluation of the significance of signals present in the final time series product, in particular their correlation with topography and seasonality. We find that topographically correlated noise in individual interferograms is not spatially stationary, even over short spatial scales (<10 km). Overall, MODIS-inferred displacements and velocities exhibit errors of similar magnitude to the variability within an InSAR time series. We examine the MODIS-based confidence bounds in regions with a range of inferred displacement rates, and find we are capable of resolving velocities as low as 1.5 mm/yr, with uncertainties increasing to ∼6 mm/yr in regions with higher topographic relief.
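The Monte Carlo idea can be sketched generically as follows: generate many plausible atmospheric-delay time series sampled at the acquisition epochs, fit a rate to each, and take the spread of fitted rates as the velocity uncertainty. The synthetic seasonal-plus-noise series below stand in for the MODIS water vapor observations; none of the numbers come from the study.

```python
# Generic Monte Carlo sketch: many synthetic delay series sampled at the
# SAR acquisition dates, each fit with a linear rate; the scatter of the
# fitted rates bounds the spurious velocity one could infer from atmosphere
# alone.
import numpy as np

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 5.0, size=25))   # acquisition epochs (years)

rates = []
for _ in range(100):
    seasonal = 0.8 * np.sin(2 * np.pi * t + rng.uniform(0, 2 * np.pi))
    noise = rng.normal(scale=0.5, size=t.size)
    apparent_disp = seasonal + noise          # cm, no true tectonic signal
    rates.append(np.polyfit(t, apparent_disp, 1)[0])

rates = np.array(rates)
print(f"apparent-rate scatter (1-sigma): {rates.std(ddof=1):.2f} cm/yr")
print(f"95% bound on spurious velocity: ±{1.96 * rates.std(ddof=1):.2f} cm/yr")
```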
Liquid medication dosing errors in children: role of provider counseling strategies.
Yin, H Shonna; Dreyer, Benard P; Moreira, Hannah A; van Schaick, Linda; Rodriguez, Luis; Boettger, Susanne; Mendelsohn, Alan L
2014-01-01
To examine the degree to which recommended provider counseling strategies, including advanced communication techniques and dosing instrument provision, are associated with reductions in parent liquid medication dosing errors. Cross-sectional analysis of baseline data on provider communication and dosing instrument provision from a study of a health literacy intervention to reduce medication errors. Parents whose children (<9 years) were seen in 2 urban public hospital pediatric emergency departments (EDs) and were prescribed daily dose liquid medications self-reported whether they received counseling about their child's medication, including advanced strategies (teachback, drawings/pictures, demonstration, showback) and receipt of a dosing instrument. The primary dependent variable was observed dosing error (>20% deviation from prescribed). Multivariate logistic regression analyses were performed, controlling for parent age, language, country, ethnicity, socioeconomic status, education, health literacy (Short Test of Functional Health Literacy in Adults); child age, chronic disease status; and site. Of 287 parents, 41.1% made dosing errors. Advanced counseling and instrument provision in the ED were reported by 33.1% and 19.2%, respectively; 15.0% reported both. Advanced counseling and instrument provision in the ED were associated with decreased errors (30.5 vs. 46.4%, P = .01; 21.8 vs. 45.7%, P = .001). In adjusted analyses, ED advanced counseling in combination with instrument provision was associated with a decreased odds of error compared to receiving neither (adjusted odds ratio 0.3; 95% confidence interval 0.1-0.7); advanced counseling alone and instrument alone were not significantly associated with odds of error. Provider use of advanced counseling strategies and dosing instrument provision may be especially effective in reducing errors when used together. Copyright © 2014 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.
Zhou, Wen-Jun; Zhang, Yong-Ye; Li, Hua; Wu, Yu-Fei; Xu, Ji; Lv, Sha; Li, Ge; Liu, Shi-Chun; Song, Sheng-Fang
2016-01-01
Background To determine the change in refractive error and the incidence of myopia among school-aged children in the Yongchuan District of Chongqing City, Western China. Methods A population-based cross-sectional survey was initially conducted in 2006 among 3070 children aged 6 to 15 years. A longitudinal follow-up study was then conducted 5 years later between November 2011 and March 2012. Refractive error was measured under cycloplegia with autorefraction. Age, sex, and baseline refractive error were evaluated as risk factors for progression of refractive error and incidence of myopia. Results Longitudinal data were available for 1858 children (60.5%). The cumulative mean change in refractive error was −2.21 (standard deviation [SD], 1.87) diopters (D) for the entire study population, with an annual progression of refraction in a myopic direction of −0.43 D. Myopic progression of refractive error was associated with younger age, female sex, and higher myopic or hyperopic refractive error at baseline. The cumulative incidence of myopia, defined as a spherical equivalent refractive error of −0.50 D or more, among initial emmetropes and hyperopes was 54.9% (95% confidence interval [CI], 45.2%–63.5%), with an annual incidence of 10.6% (95% CI, 8.7%–13.1%). Myopia was found more likely to happen in female and older children. Conclusions In Western China, both myopic progression and incidence of myopia were higher than those of children from most other locations in China and from the European Caucasian population. Compared with a previous study in China, there was a relative increase in annual myopia progression and annual myopia incidence, a finding which is consistent with the increasing trend on prevalence of myopia in China. PMID:26875599
Gonçalves, Fabio; Treuhaft, Robert; Law, Beverly; ...
2017-01-07
Mapping and monitoring of forest carbon stocks across large areas in the tropics will necessarily rely on remote sensing approaches, which in turn depend on field estimates of biomass for calibration and validation purposes. Here, we used field plot data collected in a tropical moist forest in the central Amazon to gain a better understanding of the uncertainty associated with plot-level biomass estimates obtained specifically for the calibration of remote sensing measurements. In addition to accounting for sources of error that would be normally expected in conventional biomass estimates (e.g., measurement and allometric errors), we examined two sources of uncertainty that are specific to the calibration process and should be taken into account in most remote sensing studies: the error resulting from spatial disagreement between field and remote sensing measurements (i.e., co-location error), and the error introduced when accounting for temporal differences in data acquisition. We found that the overall uncertainty in the field biomass was typically 25% for both secondary and primary forests, but ranged from 16 to 53%. Co-location and temporal errors accounted for a large fraction of the total variance (>65%) and were identified as important targets for reducing uncertainty in studies relating tropical forest biomass to remotely sensed data. Although measurement and allometric errors were relatively unimportant when considered alone, combined they accounted for roughly 30% of the total variance on average and should not be ignored. Lastly, our results suggest that a thorough understanding of the sources of error associated with field-measured plot-level biomass estimates in tropical forests is critical to determine confidence in remote sensing estimates of carbon stocks and fluxes, and to develop strategies for reducing the overall uncertainty of remote sensing approaches.
Absolute Plate Velocities from Seismic Anisotropy: Importance of Correlated Errors
NASA Astrophysics Data System (ADS)
Gordon, R. G.; Zheng, L.; Kreemer, C.
2014-12-01
The orientation of seismic anisotropy inferred beneath the interiors of plates may provide a means to estimate the motions of the plate relative to the deeper mantle. Here we analyze a global set of shear-wave splitting data to estimate plate motions and to better understand the dispersion of the data, correlations in the errors, and their relation to plate speed. The errors in plate motion azimuths inferred from shear-wave splitting beneath any one tectonic plate are shown to be correlated with the errors of other azimuths from the same plate. To account for these correlations, we adopt a two-tier analysis: First, find the pole of rotation and confidence limits for each plate individually. Second, solve for the best fit to these poles while constraining relative plate angular velocities to consistency with the MORVEL relative plate angular velocities. Our preferred set of angular velocities, SKS-MORVEL, is determined from the poles from eight plates weighted proportionally to the root-mean-square velocity of each plate. SKS-MORVEL indicates that eight plates (Amur, Antarctica, Caribbean, Eurasia, Lwandle, Somalia, Sundaland, and Yangtze) have angular velocities that differ insignificantly from zero. The net rotation of the lithosphere is 0.25±0.11º Ma-1 (95% confidence limits) right-handed about 57.1ºS, 68.6ºE. The within-plate dispersion of seismic anisotropy for oceanic lithosphere (σ=19.2°) differs insignificantly from that for continental lithosphere (σ=21.6°). The between-plate dispersion, however, is significantly smaller for oceanic lithosphere (σ=7.4°) than for continental lithosphere (σ=14.7°). Two of the slowest-moving plates, Antarctica (vRMS=4 mm a-1, σ=29°) and Eurasia (vRMS=3 mm a-1, σ=33°), have two of the largest within-plate dispersions, which may indicate that a plate must move faster than ≈5 mm a-1 to result in seismic anisotropy useful for estimating plate motion.
NASA Technical Reports Server (NTRS)
Ranaudo, R. J.; Batterson, J. G.; Reehorst, A. L.; Bond, T. H.; Omara, T. M.
1989-01-01
A flight test was performed with the NASA Lewis Research Center's DH-6 icing research aircraft. The purpose was to employ a flight test procedure and data analysis method to determine the accuracy with which the effects of ice on aircraft stability and control could be measured. For simplicity, flight testing was restricted to the short period longitudinal mode. Two flights were flown in a clean (baseline) configuration, and two flights were flown with simulated horizontal tail ice. Forty-five repeat doublet maneuvers were performed in each of four test configurations, at a given trim speed, to determine the ensemble variation of the estimated stability and control derivatives. Additional maneuvers were also performed in each configuration to determine the variation in the longitudinal derivative estimates over a wide range of trim speeds. Stability and control derivatives were estimated by a Modified Stepwise Regression (MSR) technique. A measure of the confidence in the derivative estimates was obtained by comparing the standard error for the ensemble of repeat maneuvers to the average of the estimated standard errors predicted by the MSR program. A multiplicative relationship was determined between the ensemble standard error and the averaged program standard errors. In addition, a 95 percent confidence interval analysis was performed for the elevator effectiveness estimates, C_m_δe. This analysis identified the speed range where changes in C_m_δe could be attributed to icing effects. The magnitude of icing effects on the derivative estimates was strongly dependent on flight speed and aircraft wing flap configuration. With wing flaps up, the estimated derivatives were degraded most at lower speeds corresponding to that configuration. With wing flaps extended to 10 degrees, the estimated derivatives were degraded most at the higher corresponding speeds. The effects of icing on the changes in longitudinal stability and control derivatives were adequately determined by the flight test procedure and the MSR analysis method discussed herein.
Acute Respiratory Distress Syndrome Measurement Error. Potential Effect on Clinical Study Results
Cooke, Colin R.; Iwashyna, Theodore J.; Hofer, Timothy P.
2016-01-01
Rationale: Identifying patients with acute respiratory distress syndrome (ARDS) is a recognized challenge. Experts often have only moderate agreement when applying the clinical definition of ARDS to patients. However, no study has fully examined the implications of low reliability measurement of ARDS on clinical studies. Objectives: To investigate how the degree of variability in ARDS measurement commonly reported in clinical studies affects study power, the accuracy of treatment effect estimates, and the measured strength of risk factor associations. Methods: We examined the effect of ARDS measurement error in randomized clinical trials (RCTs) of ARDS-specific treatments and cohort studies using simulations. We varied the reliability of ARDS diagnosis, quantified as the interobserver reliability (κ-statistic) between two reviewers. In RCT simulations, patients identified as having ARDS were enrolled, and when measurement error was present, patients without ARDS could be enrolled. In cohort studies, risk factors as potential predictors were analyzed using reviewer-identified ARDS as the outcome variable. Measurements and Main Results: Lower reliability measurement of ARDS during patient enrollment in RCTs seriously degraded study power. Holding effect size constant, the sample size necessary to attain adequate statistical power increased by more than 50% as reliability declined, although the result was sensitive to ARDS prevalence. In a 1,400-patient clinical trial, the sample size necessary to maintain similar statistical power increased to over 1,900 when reliability declined from perfect to substantial (κ = 0.72). Lower reliability measurement diminished the apparent effectiveness of an ARDS-specific treatment from a 15.2% (95% confidence interval, 9.4–20.9%) absolute risk reduction in mortality to 10.9% (95% confidence interval, 4.7–16.2%) when reliability declined to moderate (κ = 0.51). In cohort studies, the effect on risk factor associations was similar. Conclusions: ARDS measurement error can seriously degrade statistical power and effect size estimates of clinical studies. The reliability of ARDS measurement warrants careful attention in future ARDS clinical studies. PMID:27159648
Asteroid thermal modeling in the presence of reflected sunlight
NASA Astrophysics Data System (ADS)
Myhrvold, Nathan
2018-03-01
A new derivation of simple asteroid thermal models is presented, investigating the need to account correctly for Kirchhoff's law of thermal radiation when IR observations contain substantial reflected sunlight. The framework applies to both the NEATM and related thermal models. A new parameterization of these models eliminates the dependence of thermal modeling on visible absolute magnitude H, which is not always available. Monte Carlo simulations are used to assess the potential impact of violating Kirchhoff's law on estimates of physical parameters such as diameter and IR albedo, with an emphasis on NEOWISE results. The NEOWISE papers use ten different models, applied to 12 different combinations of WISE data bands, in 47 different combinations. The most prevalent combinations are simulated, and the accuracy of diameter estimates is found to depend critically on the model and data band combination. In the best case, full thermal modeling of all four bands, the idealized-model 1σ (68.27%) confidence interval for the errors is -5% to +6%, but this combination accounts for just 1.9% of NEOWISE results. Other combinations, representing 42% of the NEOWISE results, have about twice that confidence interval, -10% to +12%, before accounting for errors due to irregular shape or other real-world effects that are not simulated. The model and data band combinations found for the majority of NEOWISE results have much larger systematic and random errors. Kirchhoff's law violation by NEOWISE models leads to errors in estimation accuracy that are strongest for asteroids with W1, W2 band emissivity ε12 in both the lowest (0.605 ≤ ε12 ≤ 0.780) and highest (0.969 ≤ ε12 ≤ 0.988) deciles, corresponding to the highest and lowest deciles of near-IR albedo pIR. Systematic accuracy error between deciles ranges from a low of 5% to as much as 45%, and there are also differences in the random errors. Kirchhoff's law effects also produce large errors in NEOWISE estimates of pIR, particularly for high values. IR observations of asteroids in bands that have substantial reflected sunlight can largely avoid these problems by adopting the Kirchhoff-law-compliant modeling framework presented here, which is conceptually straightforward and comes without computational cost.
Songs My Student Taught Me: Narrative of an Early Childhood Cello Teacher
ERIC Educational Resources Information Center
Hendricks, Karin S.
2013-01-01
Out of the mouth of babes (and even more nonverbal) has come perhaps the wisest music teacher education I have ever received. In this narrative I share my foibles as a young, over-confident, and naive music instructor who, through a great amount of error, eventually learned the value of letting a child lead his own music learning. Throughout this…
Statistical Properties of SEE Rate Calculation in the Limits of Large and Small Event Counts
NASA Technical Reports Server (NTRS)
Ladbury, Ray
2007-01-01
This viewgraph presentation reviews the statistical properties of Single Event Effects (SEE) rate calculations. The goal of SEE rate calculation is to bound the SEE rate; the question is by how much. The presentation covers: (1) understanding errors on SEE cross sections; (2) methodology: maximum likelihood and confidence contours; (3) tests with simulated data; and (4) applications.
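A standard way to bound a rate from small event counts (hedged: not necessarily the method used in the presentation) is the chi-square form of the exact Poisson confidence interval, sketched below; dividing the count limits by a fluence converts them to cross-section limits.

```python
# Hedged sketch: chi-square-based exact (Garwood) Poisson confidence limits
# on an event count, converted to cross-section limits by dividing by a
# hypothetical fluence.
from scipy.stats import chi2

def poisson_ci(k, cl=0.90):
    """Two-sided exact confidence limits for a Poisson count k."""
    alpha = 1.0 - cl
    lower = 0.0 if k == 0 else 0.5 * chi2.ppf(alpha / 2, 2 * k)
    upper = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (k + 1))
    return lower, upper

fluence = 1e7  # particles/cm^2 (hypothetical test fluence)
for k in (0, 3, 100):
    lo, hi = poisson_ci(k)
    print(f"k={k:3d}: events in [{lo:.2f}, {hi:.2f}], "
          f"cross section in [{lo/fluence:.2e}, {hi/fluence:.2e}] cm^2")
```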
A comparative study of clock rate and drift estimation
NASA Technical Reports Server (NTRS)
Breakiron, Lee A.
1994-01-01
Five different methods of drift determination and four different methods of rate determination were compared using months of hourly phase and frequency data from a sample of cesium clocks and active hydrogen masers. Linear least squares on frequency is selected as the optimal method of determining both drift and rate, more on the basis of parameter parsimony and confidence measures than on random and systematic errors.
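The selected method amounts to a straight-line fit to the frequency series; a minimal sketch with synthetic data follows (intercept estimates the rate, slope estimates the drift). The clock parameters below are invented for illustration.

```python
# Minimal sketch of linear least squares on frequency: the intercept
# estimates the frequency offset (rate) and the slope estimates the drift.
# Synthetic data only.
import numpy as np

rng = np.random.default_rng(11)
t_days = np.arange(0.0, 90.0, 1.0 / 24.0)          # hourly samples, ~3 months
true_rate, true_drift = 2.0e-13, 1.5e-15           # offset, drift per day
freq = true_rate + true_drift * t_days + rng.normal(scale=5e-14, size=t_days.size)

drift_est, rate_est = np.polyfit(t_days, freq, 1)  # slope, intercept
print(f"estimated rate  = {rate_est:.3e}")
print(f"estimated drift = {drift_est:.3e} per day")
```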
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo
1986-01-01
A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes as outer codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.
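A hedged sketch of the kind of block-error calculation involved: given a symbol error probability seen by an outer code that corrects up to t symbol errors per block, the residual block error probability follows from a binomial tail. The parameters (symbol size, n, t) and the crude symbol-error model are assumptions for illustration, not the specific codes analyzed in the paper.

```python
from math import comb

# Sketch: block-error bound for a cascaded (concatenated) scheme over a BSC.
# Parameters are assumed illustrative values, not the paper's example codes.

def symbol_error_prob(eps, m=8):
    """Probability that an m-bit symbol is hit by at least one channel bit error
    (pessimistic: ignores any correction performed by the inner code)."""
    return 1.0 - (1.0 - eps) ** m

def outer_block_error_prob(p_sym, n=255, t=16):
    """Probability that a length-n outer (e.g. Reed-Solomon) block sees more
    than t symbol errors, i.e. the outer decoder fails."""
    return sum(comb(n, i) * p_sym**i * (1 - p_sym)**(n - i)
               for i in range(t + 1, n + 1))

for eps in (1e-2, 1e-3):
    p = symbol_error_prob(eps)
    print(f"channel BER {eps:g}: symbol error {p:.3e}, "
          f"block error {outer_block_error_prob(p):.3e}")
```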
Vellone, Ercole; Fida, Roberta; D'Agostino, Fabio; Mottola, Antonella; Juarez-Vela, Raul; Alvaro, Rosaria; Riegel, Barbara
2015-11-01
Self-care, a key element of heart failure care, is challenging for patients with impaired cognition. Mechanisms through which cognitive impairment affects self-care are not currently well defined but evidence from other patient populations suggests that self-efficacy, or task-specific confidence, mediates the relationship between cognitive functioning and patient behaviors such as self-care. The aim of this study was to test the mediating role of self-care confidence in the relationship between cognition and self-care behaviors. A secondary analysis of data from a cross-sectional study. Outpatient heart failure clinics in 28 Italian provinces. 628 Italian heart failure patients. We used the Self-Care of Heart Failure Index v.6.2 to measure self-care maintenance, self-care management, and self-care confidence. Cognition was assessed with the Mini Mental State Examination. Structural equation modeling was used to analyze the data. Participants were 73 years old on average (SD=11), mostly (58%) male and mostly (77%) in New York Heart Association functional classes II and III. The mediation model showed excellent fit (comparative fit index=1.0; root mean square error of approximation=0.02): Self-care confidence totally mediated the relationship between cognition and self-care maintenance and management. Cognition affects self-care behaviors indirectly, through self-care confidence. Interventions aimed at improving self-care confidence may improve self-care, even in heart failure patients with impaired cognition. Copyright © 2015 Elsevier Ltd. All rights reserved.
Impact of confidence number on accuracy of the SureSight Vision Screener.
2010-02-01
To assess the relation between the confidence number provided by the Welch Allyn SureSight Vision Screener and screening accuracy, and to determine whether repeated testing to achieve a higher confidence number improves screening accuracy in pre-school children. Lay and nurse screeners screened 1452 children enrolled in the Vision in Preschoolers (VIP) Phase II Study. All children also underwent a comprehensive eye examination. Using statistical comparison of proportions, we examined sensitivity and specificity for detecting any ocular condition targeted for detection in the VIP study, and conditions grouped by severity and by type (amblyopia, strabismus, significant refractive error, and unexplained decreased visual acuity), among children who had confidence numbers ≤4 (retest necessary), 5 (retest if possible), or ≥6 (acceptable). Among the 687 (47.3%) children who had repeated testing by either lay or nurse screeners because of a low confidence number (<6) for one or both eyes in the initial testing, the same analyses were also conducted to compare results between the initial reading and the repeated test reading with the highest confidence number in the same child. These analyses were based on the failure criteria associated with 90% specificity for detecting any VIP condition in VIP Phase II. A lower confidence number category was associated with higher sensitivity (0.71, 0.65, and 0.59 for ≤4, 5, and ≥6, respectively, p = 0.04) but no statistical difference in specificity (0.85, 0.85, and 0.91, p = 0.07) for detecting any VIP-targeted condition. Children with any VIP-targeted condition were as likely to be detected using the initial confidence number reading as using the higher confidence number reading from repeated testing. A higher confidence number obtained during screening with the SureSight Vision Screener is not associated with better screening accuracy. Repeated testing to reach the manufacturer's recommended minimum value is not helpful in pre-school vision screening.
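The comparison-of-proportions logic can be illustrated with a short sketch: sensitivities in two confidence-number groups compared with a two-proportion z-test. The counts are made up, not the VIP study data; statsmodels' proportions_ztest is used here as a stand-in for the study's statistical comparison.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Sketch of the comparison-of-proportions logic used for screening accuracy.
# The counts below are made-up illustrations, not the VIP study data.
detected = np.array([71, 59])      # children with a condition who failed screening,
n_cases  = np.array([100, 100])    # in a low vs. high confidence-number group

sensitivity = detected / n_cases
z, p = proportions_ztest(detected, n_cases)
print(f"sensitivity: {sensitivity[0]:.2f} vs {sensitivity[1]:.2f}, "
      f"z = {z:.2f}, p = {p:.3f}")
```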
Validation of high-resolution MAIAC aerosol product over South America
NASA Astrophysics Data System (ADS)
Martins, V. S.; Lyapustin, A.; de Carvalho, L. A. S.; Barbosa, C. C. F.; Novo, E. M. L. M.
2017-07-01
Multiangle Implementation of Atmospheric Correction (MAIAC) is a new Moderate Resolution Imaging Spectroradiometer (MODIS) algorithm that combines a time series approach and image processing to derive surface reflectance and atmosphere products, such as aerosol optical depth (AOD) and columnar water vapor (CWV). The quality assessment of MAIAC AOD at 1 km resolution is still lacking across South America. In the present study, a critical assessment of MAIAC AOD550 was performed using ground-truth data from 19 Aerosol Robotic Network (AERONET) sites over South America. Additionally, we validated the MAIAC CWV retrievals using the same AERONET sites. In general, MAIAC AOD Terra/Aqua retrievals show high agreement with ground-based measurements, with a correlation coefficient (R) close to unity (R_Terra: 0.956 and R_Aqua: 0.949). MAIAC accuracy depends on the surface properties, and comparisons revealed high-confidence retrievals over cropland, forest, savanna, and grassland covers, where more than 2/3 (~66%) of retrievals are within the expected error (EE = ±(0.05 + 0.05 × AOD)) and R exceeds 0.86. However, AOD retrievals over bright surfaces show lower correlation than those over vegetated areas. Both MAIAC Terra and Aqua retrievals are similarly comparable to AERONET AOD over the MODIS lifetime (small bias offset of ~0.006). Additionally, MAIAC CWV presents quantitative information with R ~0.97 and more than 70% of retrievals within error (±15%). Nonetheless, the time series validation shows an upward bias trend in CWV Terra retrievals and a systematic negative bias for CWV Aqua. These results contribute to a comprehensive evaluation of MAIAC AOD retrievals as a new atmospheric product for future aerosol studies over South America.
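A small sketch of the validation metrics described here (correlation, bias, and the fraction of retrievals within the expected-error envelope EE = ±(0.05 + 0.05 × AOD)), applied to synthetic data rather than MAIAC/AERONET retrievals:

```python
import numpy as np

def aod_validation_stats(aod_sat, aod_ref):
    """Correlation, fraction within the MODIS-style expected error envelope
    EE = +/-(0.05 + 0.05 * AOD_ref), and mean bias."""
    aod_sat, aod_ref = np.asarray(aod_sat), np.asarray(aod_ref)
    r = np.corrcoef(aod_sat, aod_ref)[0, 1]
    ee = 0.05 + 0.05 * aod_ref
    within = np.mean(np.abs(aod_sat - aod_ref) <= ee)
    bias = np.mean(aod_sat - aod_ref)
    return r, within, bias

# Synthetic example (values are illustrative, not MAIAC/AERONET data).
rng = np.random.default_rng(1)
truth = rng.uniform(0.02, 0.8, 500)
retrieved = truth + rng.normal(0, 0.03, truth.size)
print(aod_validation_stats(retrieved, truth))
```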
Bania, Theofani
2014-09-01
We determined the criterion validity and the retest reliability of the ΑctivPAL™ monitor in young people with diplegic cerebral palsy (CP). Activity monitor data were compared with the criterion of video recording for 10 participants. For the retest reliability, activity monitor data were collected from 24 participants on two occasions. Participants had to have diplegic CP and be between 14 and 22 years of age. They also had to be of Gross Motor Function Classification System level II or III. Outcomes were time spent in standing, number of steps (physical activity) and time spent in sitting (sedentary behaviour). For criterion validity, coefficients of determination were all high (r(2) ≥ 0.96), and limits of group agreement were relatively narrow, but limits of agreement for individuals were narrow only for number of steps (≥5.5%). Relative reliability was high for number of steps (intraclass correlation coefficient = 0.87) and moderate for time spent in sitting and lying, and time spent in standing (intraclass correlation coefficients = 0.60-0.66). For groups, changes of up to 7% could be due to measurement error with 95% confidence, but for individuals, changes as high as 68% could be due to measurement error. The results support the criterion validity and the retest reliability of the ActivPAL™ to measure physical activity and sedentary behaviour in groups of young people with diplegic CP but not in individuals. Copyright © 2014 John Wiley & Sons, Ltd.
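The agreement statistics reported here can be illustrated with a Bland-Altman-style limits-of-agreement calculation; the step counts below are made-up numbers, and the ICC analysis is not reproduced in this sketch.

```python
import numpy as np

def limits_of_agreement(x, y):
    """Bland-Altman mean difference and 95% limits of agreement between
    two measurement methods."""
    d = np.asarray(x, float) - np.asarray(y, float)
    half_width = 1.96 * d.std(ddof=1)
    return d.mean(), d.mean() - half_width, d.mean() + half_width

# Illustrative: step counts from a monitor vs. video coding (made-up numbers).
monitor = np.array([512, 498, 640, 731, 402, 555, 610, 480, 525, 590])
video   = np.array([520, 505, 632, 740, 410, 548, 618, 476, 530, 600])
bias, low, high = limits_of_agreement(monitor, video)
print(f"bias = {bias:.1f} steps, 95% LoA = [{low:.1f}, {high:.1f}]")
```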
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alam, Ujjaini; Lasue, Jeremie, E-mail: ujjaini.alam@gmail.com, E-mail: jeremie.lasue@irap.omp.eu
We examine three SNe Type Ia datasets: Union2.1, JLA and Panstarrs to check their consistency using cosmology blind statistical analyses as well as cosmological parameter fitting. We find that the Panstarrs dataset is the most stable of the three to changes in the data, although it does not, at the moment, go to high enough redshifts to tightly constrain the equation of state of dark energy, w. The Union2.1, drawn from several different sources, appears to be somewhat susceptible to changes within the dataset. The JLA reconstructs well for a smaller number of cosmological parameters. At higher degrees of freedom, the dependence of its errors on redshift can lead to varying results between subsets. Panstarrs is inconsistent with the other two datasets at about the 2σ confidence level, and JLA and Union2.1 are about 1σ away from each other. For the Ω0m − w cosmological reconstruction, with no additional data, the 1σ range of values in w for selected subsets of each dataset is two times larger for JLA and Union2.1 as compared to Panstarrs. The range in Ω0m for the same subsets remains approximately similar for all three datasets. We find that although there are differences in the fitting and correction techniques used in the different samples, the most important criterion is the selection of the SNe; a slightly different SNe selection can lead to noticeably different results both in the purely statistical analysis and in the cosmological reconstruction. We note that a single, high quality low redshift sample could help decrease the uncertainties in the result. We also note that a lack of homogeneity in the magnitude errors may bias the results and should either be modeled, or its effect neutralized by using other, complementary datasets. A supernova sample with high quality data at both high and low redshifts, constructed from a few surveys to avoid heterogeneity in the sample, and with homogeneous errors, would result in a more robust cosmological reconstruction.
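A minimal sketch of the kind of cosmological fit described, a chi-squared of SN distance moduli against a flat w-CDM model; H0, the toy redshifts, and the scatter are assumed for illustration, and none of the datasets' light-curve fitting or correction steps are modeled.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S, H0 = 299792.458, 70.0   # H0 is an assumed value; SN fits usually marginalize it

def distance_modulus(z, om, w):
    """Distance modulus in a flat w-CDM cosmology."""
    E = lambda zz: np.sqrt(om * (1 + zz)**3 + (1 - om) * (1 + zz)**(3 * (1 + w)))
    z = np.atleast_1d(z)
    dc = np.array([quad(lambda zz: 1.0 / E(zz), 0.0, zi)[0] for zi in z])
    dl = (1 + z) * (C_KM_S / H0) * dc            # luminosity distance [Mpc]
    return 5.0 * np.log10(dl) + 25.0

def chi2(om, w, z, mu, sigma_mu):
    """Simple chi-squared of SN distance moduli against the model."""
    return np.sum(((mu - distance_modulus(z, om, w)) / sigma_mu) ** 2)

# Toy data generated from (Om, w) = (0.3, -1); real analyses use the full samples.
rng = np.random.default_rng(2)
z = np.sort(rng.uniform(0.02, 1.0, 50))
mu_obs = distance_modulus(z, 0.3, -1.0) + rng.normal(0, 0.15, z.size)
print(chi2(0.3, -1.0, z, mu_obs, 0.15), chi2(0.3, -0.5, z, mu_obs, 0.15))
```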
Zhang, Zhiyong; Yuan, Ke-Hai
2016-06-01
Cronbach's coefficient alpha is a widely used reliability measure in social, behavioral, and education sciences. It is reported in nearly every study that involves measuring a construct through multiple items. With non-tau-equivalent items, McDonald's omega has been used as a popular alternative to alpha in the literature. Traditional estimation methods for alpha and omega often implicitly assume that data are complete and normally distributed. This study proposes robust procedures to estimate both alpha and omega as well as corresponding standard errors and confidence intervals from samples that may contain potential outlying observations and missing values. The influence of outlying observations and missing data on the estimates of alpha and omega is investigated through two simulation studies. Results show that the newly developed robust method yields substantially improved alpha and omega estimates as well as better coverage rates of confidence intervals than the conventional nonrobust method. An R package coefficientalpha is developed and demonstrated to obtain robust estimates of alpha and omega.
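For reference, the conventional complete-data estimator that the robust procedure improves on can be written in a few lines; this sketch computes plain Cronbach's alpha on a made-up item matrix and does not implement the paper's robust or missing-data machinery (which is provided by the coefficientalpha R package).

```python
import numpy as np

def cronbach_alpha(items):
    """Conventional (nonrobust, complete-data) Cronbach's alpha for an
    n_subjects x n_items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Toy 4-item scale (made-up responses).
scores = np.array([[4, 5, 4, 3], [2, 2, 3, 2], [5, 4, 5, 5],
                   [3, 3, 2, 3], [4, 4, 4, 5], [1, 2, 2, 1]])
print(f"alpha = {cronbach_alpha(scores):.3f}")
```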
Refractive Error and the Risk of Age-Related Macular Degeneration in the South Korean Population.
Lin, Shuai-Chun; Singh, Kuldev; Chao, Daniel L; Lin, Shan C
2016-01-01
We investigated the association between refractive error and the prevalence of age-related macular degeneration (AMD) in a population-based study. This was a cross-sectional study. Right eyes were included from 14,067 participants aged 40 years and older with gradable fundus photographs and refraction data from the fourth and the fifth Korea National Health and Nutrition Examination Survey 2008 to 2011. Early and late AMD was graded based on the International Age-Related Maculopathy Epidemiological Study Group grading system. Autorefraction data were collected to calculate spherical equivalent refraction in diopters (D) and classified into 4 groups: hyperopia (≥1.0 D), emmetropia (-0.99 to 0.99 D), mild myopia (-1.0 to -2.99 D), and moderate to high myopia (≤-3.0 D). After adjustment for potential confounders, each diopter increase in spherical equivalent was associated with a 16% [odds ratio (OR), 1.16; 95% confidence interval (CI), 1.08-1.25] and 18% (OR, 1.18; 95% CI, 1.10-1.27) increased risk of any (early + late) and early AMD, respectively. Mild and moderate to high myopia were associated with lower odds of any and early AMD compared with hyperopia (any AMD: OR, 0.62; 95% CI, 0.4-0.95 for mild myopia; OR, 0.41; 95% CI, 0.21-0.81 for moderate to high myopia; early AMD: OR, 0.63; 95% CI, 0.4-0.99 for mild myopia; OR, 0.36; 95% CI, 0.16-0.77 for moderate to high myopia group). There was no association between refractive status and the likelihood of late AMD (P = 0.91). Myopia is associated with lower odds of any and early AMD, but not with late AMD in the South Korean population.
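The odds-ratio-with-CI figures quoted here come from adjusted multivariable models; as a simplified illustration of the underlying calculation only, this sketch computes an unadjusted odds ratio and Wald 95% CI from a 2x2 table with made-up counts.

```python
import numpy as np

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    orr = (a * d) / (b * c)
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = np.exp(np.log(orr) - z * se)
    hi = np.exp(np.log(orr) + z * se)
    return orr, lo, hi

# Made-up counts, purely to illustrate the calculation behind "OR (95% CI)" figures.
print(odds_ratio_ci(a=120, b=880, c=80, d=920))
```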
Challenges of Big Data Analysis.
Fan, Jianqing; Han, Fang; Liu, Han
2014-06-01
Big Data bring new opportunities to modern society and challenges to data scientists. On one hand, Big Data hold great promise for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinctive and require a new computational and statistical paradigm. This article gives an overview of the salient features of Big Data and how these features drive changes in statistical and computational methods as well as computing architectures. We also provide several new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in a high-confidence set and point out that the exogeneity assumptions in most statistical methods for Big Data cannot be validated due to incidental endogeneity. They can lead to wrong statistical inferences and consequently wrong scientific conclusions.
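The spurious-correlation point can be demonstrated in a few lines: with far more variables than samples, some purely random feature is almost always strongly correlated with the response. The sample size and dimension below are assumed for illustration.

```python
import numpy as np

# Sketch of the "spurious correlation" phenomenon: with many more variables
# than samples, some purely random feature is highly correlated with the
# response by chance. Sample size and dimension are illustrative.
rng = np.random.default_rng(3)
n, p = 60, 5000
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)            # independent of every column of X

corrs = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
print(f"max |correlation| with pure noise: {corrs.max():.2f}")   # often > 0.4
```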
Robustness of continuous-time adaptive control algorithms in the presence of unmodeled dynamics
NASA Technical Reports Server (NTRS)
Rohrs, C. E.; Valavani, L.; Athans, M.; Stein, G.
1985-01-01
This paper examines the robustness properties of existing adaptive control algorithms to unmodeled plant high-frequency dynamics and unmeasurable output disturbances. It is demonstrated that there exist two infinite-gain operators in the nonlinear dynamic system which determines the time-evolution of output and parameter errors. The pragmatic implication of the existence of such infinite-gain operators is that: (1) sinusoidal reference inputs at specific frequencies and/or (2) sinusoidal output disturbances at any frequency (including dc) can cause the loop gain to increase without bound, thereby exciting the unmodeled high-frequency dynamics and yielding an unstable control system. Hence, it is concluded that existing adaptive control algorithms, as they are presented in the literature referenced in this paper, cannot be used with confidence in practical designs where the plant contains unmodeled dynamics, because instability is likely to result. Further understanding is required to ascertain how the currently implemented adaptive systems differ from the theoretical systems studied here and how further theoretical development can improve the robustness of adaptive controllers.
NASA Astrophysics Data System (ADS)
Verbiest, J. P. W.; Bailes, M.; van Straten, W.; Hobbs, G. B.; Edwards, R. T.; Manchester, R. N.; Bhat, N. D. R.; Sarkissian, J. M.; Jacoby, B. A.; Kulkarni, S. R.
2008-05-01
Analysis of 10 years of high-precision timing data on the millisecond pulsar PSR J0437-4715 has resulted in a model-independent kinematic distance based on an apparent orbital period derivative, Ṗb, determined at the 1.5% level of precision (Dk = 157.0 ± 2.4 pc), making it one of the most accurate stellar distance estimates published to date. The discrepancy between this measurement and a previously published parallax distance estimate is attributed to errors in the DE200 solar system ephemerides. The precise measurement of Ṗb allows a limit on the variation of Newton's gravitational constant, |Ġ/G| ≤ 23 × 10^-12 yr^-1. We also constrain any anomalous acceleration along the line of sight to the pulsar to |a⊙/c| ≤ 1.5 × 10^-18 s^-1 at 95% confidence, and derive a pulsar mass, mpsr = 1.76 ± 0.20 M⊙, one of the highest estimates so far obtained.
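A rough sketch of the kinematic-distance idea: the Shklovskii effect contributes Ṗb/Pb ≈ μ²D/c, so the apparent orbital period derivative yields a distance. The numerical inputs below are approximate values inserted for illustration and should not be read as the paper's measured quantities.

```python
# Sketch of the kinematic-distance idea (Shklovskii effect):
# Pb_dot/Pb ~= mu^2 * D / c, so D follows from the apparent Pb_dot.
# The numerical inputs are approximate, for illustration only.
C = 2.998e8                                   # speed of light [m/s]
PC = 3.0857e16                                # parsec [m]
MAS_PER_YR = (1e-3 / 206265.0) / 3.156e7      # 1 mas/yr in rad/s

mu = 140.9 * MAS_PER_YR                       # total proper motion [rad/s] (approx.)
Pb = 5.741 * 86400.0                          # orbital period [s] (approx.)
Pb_dot = 3.73e-12                             # apparent orbital period derivative (approx.)

D = C * (Pb_dot / Pb) / mu**2
print(f"kinematic distance ~ {D / PC:.0f} pc")
```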
Fuzzy difference-of-Gaussian-based iris recognition method for noisy iris images
NASA Astrophysics Data System (ADS)
Kang, Byung Jun; Park, Kang Ryoung; Yoo, Jang-Hee; Moon, Kiyoung
2010-06-01
Iris recognition is used for information security with a high confidence level because it shows outstanding recognition accuracy by using human iris patterns with high degrees of freedom. However, iris recognition accuracy can be reduced by noisy iris images with optical and motion blurring. We propose a new iris recognition method based on the fuzzy difference-of-Gaussian (DOG) for noisy iris images. This study is novel in three ways compared to previous works: (1) The proposed method extracts iris feature values using the DOG method, which is robust to local variations of illumination and shows fine texture information, including various frequency components. (2) When determining iris binary codes, image noises that cause the quantization error of the feature values are reduced with the fuzzy membership function. (3) The optimal parameters of the DOG filter and the fuzzy membership function are determined in terms of iris recognition accuracy. Experimental results showed that the performance of the proposed method was better than that of previous methods for noisy iris images.
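A minimal sketch of a difference-of-Gaussians feature map and crisp binarization; the sigma values are assumed illustrative choices, and the fuzzy membership step and the iris segmentation/unwrapping described in the paper are not shown.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigma_small=1.0, sigma_large=2.0):
    """Band-pass feature map: difference of two Gaussian-blurred copies of the
    image. The sigma values are assumed choices, not the paper's tuned parameters."""
    image = np.asarray(image, dtype=float)
    return gaussian_filter(image, sigma_small) - gaussian_filter(image, sigma_large)

# Toy example on a random "iris strip"; a real pipeline would first segment
# and unwrap the iris region.
strip = np.random.default_rng(4).random((32, 256))
codes = (difference_of_gaussians(strip) > 0).astype(np.uint8)   # crisp binarization
print(codes.shape, codes.mean())
```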
Precision Measurement of the Electron's Electric Dipole Moment Using Trapped Molecular Ions
NASA Astrophysics Data System (ADS)
Cairncross, William B.; Gresh, Daniel N.; Grau, Matt; Cossel, Kevin C.; Roussy, Tanya S.; Ni, Yiqi; Zhou, Yan; Ye, Jun; Cornell, Eric A.
2017-10-01
We describe the first precision measurement of the electron's electric dipole moment (de) using trapped molecular ions, demonstrating the application of spin interrogation times over 700 ms to achieve high sensitivity and stringent rejection of systematic errors. Through electron spin resonance spectroscopy on 180Hf19F+ in its metastable 3Δ1 electronic state, we obtain de = (0.9 ± 7.7_stat ± 1.7_syst) × 10^-29 e cm, resulting in an upper bound of |de| < 1.3 × 10^-28 e cm (90% confidence). Our result provides independent confirmation of the current upper bound of |de| < 9.4 × 10^-29 e cm [J. Baron et al., New J. Phys. 19, 073029 (2017), 10.1088/1367-2630/aa708e], and offers the potential to improve on this limit in the near future.
Dynamic gas temperature measurement system
NASA Technical Reports Server (NTRS)
Elmore, D. L.; Robinson, W. W.; Watkins, W. B.
1983-01-01
A gas temperature measurement system with a compensated frequency response of 1 kHz and the capability to operate in the exhaust of a gas turbine combustor was developed. Environmental guidelines for this measurement are presented, followed by a preliminary design of the selected measurement method. Transient thermal conduction effects were identified as important; a preliminary finite-element conduction model quantified the errors expected by neglecting conduction. A compensation method was developed to account for the effects of conduction and convection. This method was verified in analog electrical simulations and used to compensate dynamic temperature data from a laboratory combustor and a gas turbine engine. Detailed data compensations are presented. An analysis of error sources in the method was performed to derive confidence levels for the compensated data.
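As a generic illustration of dynamic compensation (not the paper's conduction/convection method), the sketch below simulates a first-order sensor lag and recovers the input with T_gas ≈ T_meas + τ·dT_meas/dt; the time constant, sample rate, and signal are assumed.

```python
import numpy as np

# Generic first-order time-constant compensation of a slow temperature sensor:
# T_gas ~= T_meas + tau * dT_meas/dt. This is a simplified sketch, not the
# paper's conduction/convection compensation; tau, fs, and the signal are assumed.
fs, tau = 5000.0, 0.05                               # sample rate [Hz], time constant [s]
t = np.arange(0, 0.2, 1 / fs)
T_gas = 800 + 50 * np.sin(2 * np.pi * 200 * t)       # "true" fluctuating gas temperature

# Simulate the sensor as a first-order lag, then compensate.
T_meas = np.empty_like(T_gas)
T_meas[0] = T_gas[0]
alpha = (1 / fs) / (tau + 1 / fs)
for i in range(1, t.size):
    T_meas[i] = T_meas[i - 1] + alpha * (T_gas[i] - T_meas[i - 1])

T_comp = T_meas + tau * np.gradient(T_meas, 1 / fs)
print(f"raw amplitude ratio: {np.ptp(T_meas) / np.ptp(T_gas):.2f}, "
      f"compensated: {np.ptp(T_comp) / np.ptp(T_gas):.2f}")
```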
Howell, Erik H; Senapati, Alpana; Hsich, Eileen; Gorodeski, Eiran Z
2017-01-01
Cognitive impairment is highly prevalent among older adults (aged ≥65 years) hospitalized for heart failure and has been associated with poor outcomes. Poor medication self-management skills have been associated with poor outcomes in this population as well. The presence and extent of an association between cognitive impairment and poor medication self-management skills in this population has not been clearly defined. We assessed the cognition of consecutive older adults hospitalized for heart failure, in relation to their medication self-management skills. We conducted a cross-sectional study of older adults (aged ≥65 years) who were hospitalized for heart failure and were being discharged home. Prior to discharge, we assessed cognition using the Mini-Cog. We also tested patients' ability to read a pill bottle label, open a pill bottle safety cap, and allocate mock pills to a pill box. Pill allocation performance was assessed quantitatively (counts of errors of omission and commission) and qualitatively (patterns suggestive of knowledge-based mistakes, rule-based mistakes, or skill-based slips). Of 55 participants, 22% were found to have cognitive impairment. Patients with cognitive impairment tended to be older as compared to those without cognitive impairment (mean age = 81 vs 76 years, p = NS). Patients with cognitive impairment had a higher prevalence of inability to read pill bottle label (prevalence ratio = 5.8, 95% confidence interval = 3.2-10.5, p = 0.001) and inability to open pill bottle safety cap (prevalence ratio = 3.3, 95% confidence interval = 1.3-8.4, p = 0.03). While most patients (65%) had pill-allocation errors regardless of cognition, those patients with cognitive impairment tended to have more errors of omission (mean number of errors = 48 vs 23, p = 0.006), as well as more knowledge-based mistakes (75% vs 40%, p = 0.03). There is an association between cognitive impairment and poor medication self-management skills. Medication taking failures due to poor medication self-management skills may be part of the pathway linking cognitive impairment to poor post-discharge outcomes among patients with heart failure transitioning from hospital to home.
NASA Technical Reports Server (NTRS)
Kim, Won S.; Bejczy, Antal K.
1993-01-01
A highly effective predictive/preview display technique for telerobotic servicing in space under several seconds communication time delay has been demonstrated on a large laboratory scale in May 1993, involving the Jet Propulsion Laboratory as the simulated ground control station and, 2500 miles away, the Goddard Space Flight Center as the simulated satellite servicing set-up. The technique is based on a high-fidelity calibration procedure that enables a high-fidelity overlay of 3-D graphics robot arm and object models over given 2-D TV camera images of robot arm and objects. To generate robot arm motions, the operator can confidently interact in real time with the graphics models of the robot arm and objects overlaid on an actual camera view of the remote work site. The technique also enables the operator to generate high-fidelity synthetic TV camera views showing motion events that are hidden in a given TV camera view or for which no TV camera views are available. The positioning accuracy achieved by this technique for a zoomed-in camera setting was about +/-5 mm, well within the allowable +/-12 mm error margin at the insertion of a 45 cm long tool in the servicing task.
Douglass, Amy Bradfield; Smith, Caroline; Fraser-Thill, Rebecca
2005-10-01
In Experiment 1, photospread administrators (PAs, N = 50) showed a target-absent photospread to a confederate eyewitness (CW), who was randomly assigned to identify one photo with either high or low confidence. PAs subsequently administered the same target-absent photospread to participant eyewitnesses (PWs, N = 50), all of whom had viewed a live staged crime 1 week earlier. CWs were rated by the PAs as significantly more confident in the high-confidence condition versus low-confidence condition. More importantly, the confidence of the CW affected the identification decision of the PW. In the low-confidence condition, the photo identified by the CW was identified by the PW significantly more than the other photos; there was no significant difference in photo choice in the high-confidence condition. In spite of the obvious influence exerted in the low-confidence condition, observers were not able to detect bias in the photospread procedures. A second experiment was conducted to test a post-hoc explanation for the results of Experiment 1: PAs exerted influence in the low-confidence condition because they perceived the task as more difficult for the eyewitness than in the high-confidence condition. Independent observers (N = 84) rated the difficulty of the confederate's task as higher in the low-confidence condition compared with the high-confidence condition, suggesting that expectations of task difficulty might be driving the effect observed in Experiment 1. Results support recommendations for double-blind photospreads and emphasize that the same investigator should not administer photo lineups to multiple eyewitnesses in an investigation.
Hamilton, S J
2017-05-22
Electrical impedance tomography (EIT) is an emerging imaging modality that uses harmless electrical measurements taken on electrodes at a body's surface to recover information about the internal electrical conductivity and/or permittivity. The image reconstruction task of EIT is a highly nonlinear inverse problem that is sensitive to noise and modeling errors, making image reconstruction challenging. D-bar methods solve the nonlinear problem directly, bypassing the need for detailed and time-intensive forward models, to provide absolute (static) as well as time-difference EIT images. Coupling the D-bar methodology with the inclusion of high-confidence a priori data results in a noise-robust regularized image reconstruction method. In this work, the a priori D-bar method for complex admittivities is demonstrated to be effective on experimental tank data for absolute imaging for the first time. Additionally, the method is adjusted for, and tested on, time-difference imaging scenarios. The ability of the method to be used for conductivity, permittivity, absolute, and time-difference imaging provides the user with great flexibility without a high computational cost.
Ganz, Jennifer B; Morin, Kristi L; Foster, Margaret J; Vannest, Kimberly J; Genç Tosun, Derya; Gregori, Emily V; Gerow, Stephanie L
2017-12-01
The use of mobile technology is ubiquitous in modern society and is rapidly increasing in novel use. The use of mobile devices and software applications ("apps") as augmentative and alternative communication (AAC) is rapidly expanding in the community, and this is also reflected in the research literature. This article reports the social-communication outcome results of a meta-analysis of single-case experimental research on the use of high-tech AAC, including mobile devices, by individuals with intellectual and developmental disabilities, including autism spectrum disorder. Following inclusion determination, and excluding studies with poor design quality, raw data from 24 publications were extracted and included 89 A-B phase contrasts. Tau-U nonparametric, non-overlap effect size was used to aggregate the results across all studies for an omnibus and moderator analyses. Kendall's S was calculated for confidence intervals, p-values, and standard error. The omnibus analysis indicated overall low to moderate positive effects on social-communication outcomes for high-tech AAC use by individuals with intellectual and developmental disabilities.
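A sketch of the basic pairwise non-overlap Tau for a single A-B contrast; full Tau-U additionally corrects for baseline trend, which is not shown here, and the data are made up.

```python
import numpy as np

def tau_nonoverlap(phase_a, phase_b):
    """Basic Tau (A-vs-B pairwise non-overlap): Kendall's S over all A-B pairs,
    divided by nA * nB. The baseline-trend correction of full Tau-U is not shown."""
    a = np.asarray(phase_a, float)
    b = np.asarray(phase_b, float)
    diffs = b[None, :] - a[:, None]          # every B observation vs. every A observation
    s = np.sign(diffs).sum()                 # improving pairs minus worsening pairs
    return s / (a.size * b.size), s

# Made-up baseline (A) and intervention (B) communication counts for one contrast.
A = [2, 3, 2, 4, 3]
B = [5, 6, 4, 7, 8, 6]
tau, S = tau_nonoverlap(A, B)
print(f"Tau = {tau:.2f}, Kendall's S = {S:.0f}")
```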
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error correcting codes, called the inner and the outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that, if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes as outer codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates, say 0.1 to 0.01. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.